AI pilot projects are helping us predict, prevent, and respond to human rights risks around the world.
Amazon is committed to respecting the human rights of all people connected to our business, and we’re always innovating to improve our ability to identify, prevent, and mitigate human rights risks. That includes exploring the use of machine learning and artificial intelligence (AI) to optimize and scale social audits—regular on-site checks of our third-party suppliers’ facilities to assess human rights and environmental issues like worker health and safety—and building AI-driven models to help us better predict risk in our value chain.
"The vast scope of today’s global supply chains requires new tools to effectively target human right risks," said Leigh Anne DeWine, Amazon’s director of human rights and social impact. "With a global network of hundreds of thousands of suppliers, we're developing AI tools to enhance—not replace—human judgment in upholding our standards. We are experimenting with machine learning and AI to help transform massive amounts of data into actionable insights, enabling more effective risk assessment, audit processing, and decision-making across our complex supplier network."
Amazon is still evaluating and improving these tools, but early results are promising.
Pre-audit: smart risk prediction
We developed an AI model that analyzes data from tens of thousands of historical social audits to identify risk patterns and flag whether suppliers are likely to meet Amazon’s Supply Chain Standards, our code of conduct for all third-party suppliers. This allows us to focus our auditing resources on higher-risk suppliers.
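Amazon has not published implementation details, but the approach described above, predicting audit outcomes from historical audit data so that higher-risk suppliers can be prioritized, resembles a standard supervised classification setup. The sketch below is purely illustrative: the feature names, synthetic data, and model choice are assumptions for demonstration, not Amazon's actual system.

```python
# Illustrative sketch only: features, data, and model choice are assumptions,
# not Amazon's actual risk-prediction system.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical historical audit records: one row per past supplier audit.
rng = np.random.default_rng(0)
n = 5000
audits = pd.DataFrame({
    "prior_findings": rng.poisson(2, n),             # issues found in earlier audits
    "days_since_last_audit": rng.integers(90, 1200, n),
    "region_risk_index": rng.uniform(0, 1, n),       # external country/region risk score
    "worker_count": rng.integers(50, 5000, n),
})
# Label: did the supplier meet the supply chain standards at audit time?
audits["met_standards"] = (
    (audits["prior_findings"] < 3) & (audits["region_risk_index"] < 0.7)
).astype(int)

X = audits.drop(columns="met_standards")
y = audits["met_standards"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a classifier on historical audits, then score suppliers awaiting audits.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# A low predicted probability of meeting standards flags a supplier for
# priority on-site auditing.
pass_probability = model.predict_proba(X_test)[:, 1]
risk_scores = 1 - pass_probability
print("Held-out AUC:", round(roc_auc_score(y_test, pass_probability), 3))
```

In a setup like this, the model's output is a ranking signal for allocating audit resources, not a replacement for the on-site audits themselves.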