Abstract

Calls for audits to expose and mitigate harms related to algorithmic decision systems are proliferating,3 and audit provisions are coming into force—notably in the E.U. Digital Services Act.4 In response to these growing concerns, research organizations working on technology accountability have called for ethics and/or human rights auditing of algorithms, and an Artificial Intelligence (AI) audit industry is rapidly developing, signified by the consulting giants KPMG and Deloitte marketing their services.5 Algorithmic audits are a way to increase accountability for social media companies and to improve the governance of AI systems more generally. They can be elements of industry codes, prerequisites for liability immunity, or new regulatory requirements.6 Even when not expressly prescribed, audits may be predicates for enforcing data-related consumer protection law, or what U.S. Federal Trade Commissioner Rebecca Slaughter calls “algorithmic justice.”7 The desire for audits reflects a growing sense that algorithms play an important, yet opaque, role in the decisions that shape people’s life chances—as well as a recognition that audits have been uniquely helpful in advancing our understanding of the concrete consequences of algorithms in the wild and in assessing their likely impacts.8
