Governments don’t have a great track record of keeping up with emerging technology. But the complex, rapidly evolving field of artificial intelligence raises legal, national security and civil rights concerns that can’t be ignored. The European Union passed a sweeping law that would put guardrails on the technology; in China, no company can produce an AI service without proper approvals. The US is still working on its regulatory approach. While Congress considers legislation, some American cities and states have already passed laws limiting use of AI in areas such as police investigations and hiring, and President Joe Biden directed government agencies to vet future AI products for potential national or economic security risks.
1. Why does AI need regulating?
Already at work in products as diverse as toothbrushes and drones, systems based on AI have the potential to revolutionize industries from health care to logistics. But replacing human judgment with machine learning carries risks. Even if the ultimate worry — fast-learning AI systems going rogue and trying to destroy humanity — remains in the realm of fiction, there already are concerns that bots doing the work of people can spread misinformation, amplify bias, corrupt the integrity of tests and violate people’s privacy. Reliance on facial recognition technology, which uses AI, has already led to people being falsely accused of crimes. A fake AI photo of an explosion near the Pentagon spread on social media, briefly pushing US stocks lower. Alphabet’s Google, Microsoft, IBM and OpenAI have encouraged lawmakers to implement federal oversight of AI, which they say is necessary to guarantee safety.
2. What’s been done in the US?
Biden’s executive order on AI sets standards on security and privacy protections and builds on voluntary commitments adopted by more than a dozen companies. Members of Congress have shown intense interest in passing laws on AI, which would be more enforceable than the White House effort, but an overriding strategy has yet to emerge. Two key senators said they would welcome legislation that establishes a licensing process for sophisticated AI models, creates an independent federal office to oversee AI and holds companies liable for violations of privacy and civil rights. Among the more narrowly targeted bills proposed so far, one would prohibit the US government from using an automated system to launch a nuclear weapon without human input; another would require that AI-generated images in political ads be clearly labeled. At least 25 US states considered AI-related legislation in 2023, and 15 passed laws or resolutions, according to the National Conference of State Legislatures. Proposed measures sought, among other objectives, to limit the use of AI in employment and insurance decisions, health care, ballot counting and facial recognition in public settings.
3. What has the EU done?
The European Parliament in March passed a bill setting up the most comprehensive regulation of AI in the Western world. The legislation would ban the use of AI for detecting emotions in workplaces and schools, and limit how it can be used in high-stakes situations such as sorting job applications. It would also place the first restrictions on generative AI tools, which captured the world’s attention last year with the popularity of ChatGPT. Highly capable models, such as OpenAI’s GPT-4, would be subject to additional rules, including reporting their energy consumption and setting up protections against hackers. The legislation still needs to be formally approved by EU member states. Companies that violate the rules would face fines of up to €35 million ($37.7 million) or 7% of global revenue, depending on the infringement and the size of the company. As talks reached their final stretch last year, the French and German governments pushed back against some of the strictest proposals for regulating generative AI, arguing the rules would hurt European startups such as France’s Mistral AI and Germany’s Aleph Alpha GmbH. The resulting compromises left some watchdog groups disappointed.
4. What has China done?
A set of 24 government-issued guidelines took effect on Aug. 15, targeting generative AI services, such as ChatGPT, that create images, videos, text and other content. Under those guidelines, AI-generated content must be properly labeled and respect rules on data privacy and intellectual property. A separate set of rules governing the AI-aided algorithms used by technology companies to recommend videos and other content took effect in 2022.
5. What do the companies say?
Leading technology companies including Amazon.com, Alphabet, IBM and Salesforce pledged to follow the Biden administration’s voluntary transparency and security standards, including putting new AI products through internal and external tests before their release. In September, Congress summoned tech tycoons including Elon Musk and Bill Gates to advise on its efforts to create a regulatory regime. One concern for companies is the degree to which US rules could apply to the developers of AI products, not just to users of them. That mirrors the debate in Europe, where Microsoft, in a position paper, contended that it’s crucial to focus on the actual use cases of AI, because companies can’t “anticipate the full range of deployment scenarios and their associated risks.”
6. Why is the US effort in focus?
Since American tech companies and specialized American-made microchips are at the forefront of AI innovation, US leaders wield particular sway over how the field is overseen. Many participants in the Senate meetings stressed that the US should play a leading role in shaping global governance of AI, and some cited China’s advances in the field as a specific concern. Critics have raised concerns that tech executives could exert too much influence over legislation, a form of regulatory capture that could entrench the power of a few large companies and hamper efforts by so-called open-source organizations to build competing AI platforms.