Innovation vs. Regulation in the World of AI through the Lens of Tort Law
- WULR Team
- Nov 11, 2024
- 4 min read
One state's attempt at grappling with the role of government in the world of AI
Analysis by Abhita Chakravarti
In an era where artificial intelligence (AI) assistants can draft emails, write code, and compose articles, California's Senate Bill 1047 sought to govern this quickly evolving technology. Introduced in February 2024, SB 1047 was proposed in response to the growing need to regulate AI in light of threats to public safety. The bill would have required developers to implement cybersecurity protections, create a protocol for a full shutdown, assess potential critical harm, and provide audit reports (1). However, on September 29, 2024, California Governor Gavin Newsom vetoed SB 1047, citing concerns about regulatory overreach. The decision leaves legislators facing the ongoing challenge of creating regulatory frameworks that can keep pace with the rapidly evolving AI space.
Offering convenience and increased efficiency, AI has become an essential part of daily life. Consider autonomous vehicles such as those in Tesla's Full Self-Driving (FSD) program, which use advanced technology to interpret road conditions and predict the behavior of other drivers (2). Similarly, large language models like GPT-4 have changed how we interact with technology, from searching the web to analyzing complex data. Yet these advancements also raise concerns about safety, privacy, and accountability (3).
The potential benefits of AI are vast, but so are the risks. In 2022, a Tesla vehicle operating in FSD mode was involved in an eight-car pile-up in San Francisco (4). The incident raised critical questions about liability: Who was at fault? Tesla? The driver? Lawmakers did not have such scenarios in mind when they passed existing traffic laws, which raises the question: Should we apply existing laws to these situations or create new ones?
Ketan Ramakrishnan, an associate professor at Yale Law School, argues that existing tort law principles could provide an effective structure for regulating AI. In his Wall Street Journal article, “Tort Law Is the Best Way to Regulate AI,” Ramakrishnan explains that tort law’s adaptability makes it well suited to the challenges posed by AI systems. Tort law deals with civil wrongs that cause harm or loss; applied to AI, it could hold companies and developers accountable for negligence or harm caused by their products. This approach could incentivize responsible development by making clear that companies are liable if they negligently cause harm. Ramakrishnan emphasizes tort law’s adaptability, noting that “unlike detailed statutory or regulatory regimes, which can quickly become obsolete, tort law articulates broad standards of conduct that can be applied to novel technologies” (5). Such flexibility is essential in a rapidly evolving field where new risks are constantly emerging.
The application of tort law to AI-related incidents can be seen in prior legal cases. Returning to autonomous vehicles, in Huang v. Tesla, Inc. (2019), the family of Walter Huang sued Tesla after he died in a crash while his Model X was on Autopilot. The case rested primarily on product liability law: Tesla argued that Huang was distracted and did not engage the brakes, while the family contended that Tesla falsely marketed Autopilot as self-driving software and knew of flaws that made it unsafe (6). A more comprehensive tort law framework specific to AI might have provided clearer guidelines on where responsibility fell, potentially leading to a quicker resolution and setting a stronger precedent for future cases.
For AI-generated content, tort law principles could be applied to cases of defamation or copyright infringement. For example, if an AI system like GPT-4 produced false and damaging information about an individual or organization, the developers could potentially be held liable under existing defamation laws. In Andersen v. Stability AI Ltd. (2023), artists filed a class-action lawsuit against AI image generation companies for copyright infringement (7). That case relied on copyright law, but tort law’s negligence principles could also clarify what constitutes “reasonable care” in developing and deploying AI systems.
Despite the benefits of using tort law to regulate AI, there are concerns that regulation may negatively impact innovation. Critics argue that overly strict rules could hinder creativity and push AI development to jurisdictions with fewer restrictions. A 2023 report by Goldman Sachs, a leading investment bank and financial services firm, estimated that generative AI could eventually raise global GDP by 7% (8). From an economic standpoint, companies do not want regulations to restrict that growth.
The veto of California’s SB 1047 highlights the challenge of regulating AI technology. While tort law provides a uniquely flexible framework for addressing these issues, concerns about hindering innovation loom. Striking a balance between technological advancement and public safety is essential. Tort law, together with carefully crafted legislation, can help navigate the complexities of AI regulation. As technology continues to evolve, the legal frameworks we set will play a crucial role in shaping our technological future.
1. California Legislative Information. “SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.” Accessed October 27, 2024. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047.
2. Sergent, Jim. “Tesla ‘Full Self-Driving’ in My Model Y: Lessons from the Highway.” USA Today, May 2, 2024. https://www.usatoday.com/story/graphics/2024/05/02/tesla-full-self-driving-model-y-review/73168546007/.
3. Brynjolfsson, Erik, Tom Mitchell, and Daniel Rock. “What Can Machines Learn, and What Does It Mean for Occupations and the Economy?” American Economic Association Papers and Proceedings 108 (2018): 43–47. https://www.aeaweb.org/articles?id=10.1257/pandp.20181019.
4. McFarland, Matt. “Tesla ‘Full Self-Driving’ Triggered 8-Car Crash on Bay Bridge, Driver Tells Police.” ABC7 San Francisco, December 22, 2022. https://abc7news.com/tesla-autopilot-crash-sf-bay-bridge-8-car-self-driving/12599448/.
5. Ramakrishnan, Ketan. “Tort Law Is the Best Way to Regulate AI.” The Wall Street Journal, September 24, 2024. https://www.wsj.com/opinion/tort-law-is-the-best-way-to-regulate-ai-california-legal-liability-065e1220.
6. Goldman, David. “Tesla Settles with Apple Engineer’s Family Who Said Autopilot Caused His Fatal Crash.” CNN, April 8, 2024. https://www.cnn.com/2024/04/08/tech/tesla-trial-wrongful-death-walter-huang/index.html.
7. Loeb & Loeb LLP. “Andersen v. Stability AI Ltd.” Accessed October 13, 2024. https://www.loeb.com/en/insights/publications/2023/11/andersen-v-stability-ai-ltd.
8. “Generative AI Could Raise Global GDP by 7%.” Goldman Sachs, April 5, 2023. https://www.goldmansachs.com/insights/articles/generative-ai-could-raise-global-gdp-by-7-percent.