Government regulation of AI – when is it coming?

This article, written by attorney Bryanna Devonshire and Nicolas Harris, a 2023 Summer Associate, was originally published by seacoastonline.com and can be found here.


Hate it or love it, an AI revolution is going to impact our job market. With the launch of ChatGPT in November of 2022, it is not hyperbole to suggest that our economy sits at a historic inflection point. The impact of AI depends on many variables. Importantly, government regulation will play a critical role in mitigating some of the doomsday scenarios projected for certain industries.

AI Regulation in the United States is Still in its Infancy

The first domino to fall in AI regulation came at a Senate subcommittee hearing on May 16, 2023, in Washington D.C. Sam Altman, the CEO of OpenAI, testified before a panel of Senators that his own company’s product needs to be regulated. Mr. Altman encouraged the Senators to adopt regulations that create licensing and testing requirements. He welcomed the idea of creating a new federal agency tasked with overseeing AI development in the United States. For example, in order to enter the market, the agency could require AI systems to meet certain security thresholds. In addition, the agency could restrict the technology’s capabilities to ensure reliability. However, this level of oversight and regulation of development could be met with firm resistance because of the potential to stifle product advancement.

There is also the threat of monopolization. The FTC may be called upon in the near future to investigate major tech companies trying to corner the AI market. In fact, the FTC opened an investigation in mid-July into OpenAI’s alleged mishandling of personal data and potential consumer protection violations. Other companies invested in the AI arms race, such as Google and Meta, should take note of the FTC’s actions. Whether the issue is consumer protection violations or unfair business practices that limit competition, the FTC has turned its watchful eye towards enforcing fairness and security in the AI market.

The White House has not remained idle either. The Biden-Harris Administration recently hosted personnel from Google, Meta, OpenAI, and four other major tech companies to discuss the associated risks and the security measures needed when, inevitably, more advanced AI rolls out into the market. According to a fact sheet released by the executive branch, the Biden-Harris Administration has secured voluntary commitments from these companies to manage the risks posed by the rapidly growing development and use of AI technology. While these commitments remain surface-level, they demonstrate a willingness to keep channels open and suggest that deeper, and preferably more substantial, commitments are possible down the road.

European Union Set to Lead on AI Regulation

In contrast to the United States, the European Union has taken a distinct approach to regulating artificial intelligence through the introduction of the “AI Act.” A groundbreaking piece of legislation, the act marks the EU’s first significant effort to govern AI technologies. The act classifies AI into three categories of risk: “low or minimal,” “unacceptable,” and “high.”

AI with “low or minimal” risk poses little or no threat to people’s health, safety, or fundamental human rights. An AI algorithm that translates Italian to English is an example of such technology. These applications of AI are widely accepted because their impacts on human health, safety, or fundamental rights are limited.

On the other hand, AI is deemed “unacceptable” if it threatens health or safety or violates fundamental human rights. An example would be an AI algorithm designed to perpetuate harmful biases on the basis of race, gender, or religion. The act strictly prohibits the use of these technologies in the EU market.

The crux of the AI Act, however, lies in the regulation of “high-risk” AI technologies, which have significant implications for health, safety, or fundamental rights but do not rise to the level of “unacceptable.” Examples of such high-risk technologies include AI-powered financial trading systems, AI in employment decision-making, and AI technologies in medicine. To ensure compliance, these technologies must meet mandatory requirements and demonstrate conformity through a rigorous assessment process. The act mandates that conformity assessments for the most critical high-risk AI technologies be conducted by independent “notified bodies,” ensuring an extra layer of oversight.

The AI Act represents the first significant step towards the regulation of AI technologies. The EU Council and EU Parliament are in the process of finalizing the exact text. Once the details are agreed upon, the act will most likely be signed into law within the year. Whether it becomes the global standard remains to be seen.

What Happens Next?

Forecasting where the federal government will ultimately come down on AI regulation is impossible at this point. It remains unclear how effective any AI regulation will be. Recent actions by Congress, the FTC, and the White House suggest that they are interested, want to stay on top of the developing technology, and are already collecting information to inform future decisions. The federal government can wait and see how the AI Act plays out in Europe. Using that act as a model, the government can adopt similar legislation or pivot in another direction if the act falls short. Whatever path it takes, businesses should stay alert to how the new legal structure unfolds and, ultimately, affects their interests.