Abstract

The shift in the cause of machine-induced harm from mechanical failures to algorithmic decision-making is challenging the applicability of products liability. Because algorithms now operate machines analogously to humans, a doctrinally coherent response is to subject algorithmic torts to a negligence framework that evaluates the reasonableness of decisions rather than the content of algorithms. This approach offers a theoretically grounded, formally neutral, and normatively appealing solution. In practice, however, it may result in unequal liability. Even under a negligence regime, algorithmic decision-makers may face systematically greater liability if injured parties are more inclined to pursue litigation against algorithmic tortfeasors than against human tortfeasors due to algorithmic aversion. A survey experiment using autonomous vehicles as a representative algorithmic tortfeasor shows that victims are significantly more likely to sue algorithmic actors than they are to sue human actors when both are subject to negligence. These findings suggest that doctrinal coherence and neutrality in theory do not necessarily translate into equality in practice. Negligence, while appealing as a formal solution, may impose higher expected liability and ownership costs on algorithm-operated machines than on their human-operated counterparts, potentially hindering the adoption of socially beneficial technologies. The results highlight the importance of considering behavioral responses in the design and evaluation of tort regimes for algorithmic decision-makers.

