Regulating Artificial Intelligence: An Impossible Task?

Authored by Stephen Boatwright
Published by the Phoenix Business Journal

Steve Boatwright explains the latest legal attempts to regulate artificial intelligence (AI).

With billions in venture capital and investment bank backing, Amazon, Uber, and PayPal successfully competed with “brick and mortar” bookstores, taxicab companies, and banks without complying with the regulations governing those traditional companies’ operations.

Instead, their approach was to aggressively challenge, and sometimes outright violate, federal and state laws, allocating untold millions to their law firms to defend their strategies. Of course, laws passed before the internet could not have anticipated these businesses and their reliance on cell phones and the internet to sell directly to consumers, and frankly, federal and state regulations never caught up. The consumer was smitten, and the rest is history.

Now, with the advent of artificial intelligence (“AI”), the ability to draft and enforce regulations may prove impossible. A fundamental tenet of society is that a person has a right to define his or her identity. A related fundamental right is that an individual owns what he or she creates. AI, however, is constrained by neither the protection of identity nor an individual’s intellectual property rights.

Internet platforms have historically been regulated like the telephone. For example, Meta is shielded by Section 230 of the Communications Decency Act of 1996 from liability for derogatory speech made on Facebook, as the platform is viewed simply as a medium of communication. It appears that Section 230 will offer no such protection to generative AI, because AI chatbots are not simply mediums for communication and action; they are actually communicating and acting. AI chatbots do not, and often cannot, distinguish between content that is subject to copyright and content that is not. How copyright laws will be enforced and how images and identity will be protected are among the big questions being asked.

Reid Hoffman, a founder of LinkedIn, recently displayed his AI “double” on YouTube, programmed to replicate his voice, facial expressions, and thoughts. There have already been many lookalike “doubles” of political figures spreading misinformation purportedly from presidential candidates, as well as bots being used as sexual predators.

It is virtually impossible to distinguish the AI Bot from the real individual.

So, what is the state of the laws today, and what is regulated?

The European Union’s Artificial Intelligence Act (the “Euro AI Act”) is the world’s first comprehensive legal framework for AI. The Euro AI Act classifies AI systems into risk categories (unacceptable, high, limited, and minimal risk) and imposes varying obligations and requirements based on the risk level. Overall, the Euro AI Act aims to improve and promote trustworthy AI systems while protecting fundamental rights and addressing the potential risks posed by AI. The Act outlines specific requirements and obligations for each tier, in accordance with the level of risk each category poses: (1) AI systems that pose an unacceptable risk are expressly prohibited; (2) AI systems that present a high risk are subject to strict regulation, including conformity assessments and risk management systems; (3) AI systems that present limited risk need only abide by transparency and disclosure requirements; and (4) AI systems that present minimal or no risk are unregulated.

Category (1) risks, which are outright prohibited, include: cognitive behavioral manipulation causing significant harm; exploiting age, disability, or socio-economic vulnerabilities; “social scoring”; racial or personality profiling; creating or expanding facial recognition databases; inferring the emotions of a natural person; and biometric categorization systems (with exemptions for law enforcement).

Category (2) covers AI systems that pose risks to the health, safety, or fundamental rights of natural persons, such as systems used in law enforcement, healthcare, education, and border control. Complying with Category (2) requires a risk management system, data governance, and human oversight.

Category (3) applies to general-purpose AI (“GPAI”) models, which perform a wide range of tasks. These “standard” models must (i) create technical documentation of the model and provide it to the supervisory authority upon request, e.g., detailed information about model architecture, training, and testing processes; (ii) create, keep up to date, and make available instructions for use that enable downstream users to comprehend the GPAI model’s capabilities and limitations; (iii) adopt a policy to adhere to EU copyright laws; and (iv) make publicly available a sufficiently detailed summary of the content used for training.1

Providers of AI systems that interact directly with natural persons must inform users that they are interacting with an AI system, unless that is obvious to the average user, and must disclose that the output is artificially generated or manipulated.
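To make that transparency obligation concrete, the minimal sketch below shows one way a provider might attach the required user notice and a machine-readable “AI-generated” marking to chatbot output. It is illustrative only; the function and field names are hypothetical and are not prescribed by the Euro AI Act or by any vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class DisclosedOutput:
    """A chatbot reply bundled with the transparency disclosures a provider
    might attach (illustrative structure only, not a statutory format)."""
    text: str
    metadata: dict = field(default_factory=dict)

def with_transparency_disclosures(reply_text: str, provider: str) -> DisclosedOutput:
    # Tell the user up front that they are interacting with an AI system.
    banner = "You are interacting with an AI system; this reply is AI-generated."
    return DisclosedOutput(
        text=f"{banner}\n\n{reply_text}",
        metadata={
            "ai_generated": True,   # machine-readable marking of synthetic content
            "provider": provider,
            "disclosure": "AI-interaction transparency notice (illustrative)",
        },
    )

print(with_transparency_disclosures("Here is the summary you asked for.", "ExampleCo").text)
```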

According to the Transparency Coalition, a not-for-profit organization that advocates for regulation of the data on which AI is trained, 400 bills related to AI are currently pending in state legislatures. There is no pending federal regulation of AI.

Not surprisingly, California has taken the lead on regulating the AI space. In September, California became the first state to enact legislation with SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) and AB 2013 (Generative Artificial Intelligence: Training Data Transparency).

SB 1047 requires AI developers to implement certain safety measures before and after training covered models. It also imposes a duty of care on developers to prevent unreasonable risks of causing or materially enabling critical harm. In essence, developers are required to develop and comply with safety and security protocols and to have the capability to fully shut down the model. It is similar to the Euro AI Act, but with specific references to weapons capable of causing mass casualties or significant monetary damage. Audit and reporting requirements will be mandated by January 1, 2026. Interestingly, California’s legislation provides whistleblower protection to employees, something not provided by the Euro AI Act.

AB 2013 requires developers to publicly release specified documentation regarding the data used to train their AI systems or services, including the following (an illustrative sketch of such a disclosure appears after the list):

  • the sources or owners of the training data sets;
  • a description of how the data sets further the intended purpose of the artificial intelligence system or service;
  • the number and description of data points in the data sets;
  • whether the data sets include any data protected by copyright, trademark, or patent or whether the data sets are entirely in the public domain; and
  • whether the generative artificial intelligence system or service used, or continuously uses, synthetic data generation in its development, among other items.
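As a rough illustration of what such a public disclosure might contain, a developer could publish a structured record along these lines. The field names and values below are hypothetical, assembled to mirror the statutory items listed above rather than taken from AB 2013 itself.

```python
import json

# Hypothetical training-data disclosure mirroring the AB 2013 items above;
# field names and contents are illustrative, not statutory.
training_data_disclosure = {
    "dataset_sources_or_owners": ["Public web crawl", "Licensed news archive (example)"],
    "purpose_description": "Web text used to train a general-purpose text model.",
    "datapoint_count": 1_200_000_000,
    "datapoint_description": "Deduplicated web pages and licensed articles.",
    "includes_ip_protected_data": True,   # copyright-, trademark-, or patent-protected material
    "entirely_public_domain": False,
    "uses_synthetic_data_generation": True,
}

print(json.dumps(training_data_disclosure, indent=2))
```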

In general, the other state regulations are focused on sexually explicit content directed to minors and material used to influence elections. Colorado, Illinois, Massachusetts, New Jersey, New York, Rhode Island, and Virginia have expressed some of the same concerns as California, but have not yet issued as robust a set of regulations.

Even when regulations are enacted, enforcement will be overwhelmingly difficult. AI Bots are generative by nature and can be developed in any country in the world. For example, WormGPT can write code automatically and create highly convincing fake emails tailored to the recipient. FraudGPT can create phishing emails and mimic legitimate websites at scale.

In addition, a tool called Business Invoice Swapper will scan inbound emails to compromised accounts, identify those that include invoices or request payment, and automatically replace the vendor’s actual bank account information with the hackers’ own. Age-old practices such as asking employees to look for grammatical errors will not ward off AI-powered spear-phishing attacks, because AI-generated emails do not contain the telltale flaws of human-written scams.
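One defensive pattern that does still work against invoice swapping is an out-of-band check: compare the payment details on an inbound invoice against the account already on file for that vendor and hold anything that changed. The minimal sketch below assumes a hypothetical vendor registry and field names; in practice the on-file records would come from a company’s accounting or ERP system.

```python
# Hypothetical on-file vendor records; in practice these would come from the ERP system.
VENDORS_ON_FILE = {
    "acme-supplies": {"iban": "DE89370400440532013000"},
}

def requires_verification(vendor_id: str, invoice_iban: str) -> bool:
    """Flag an invoice for manual, out-of-band verification when the bank
    account it lists does not match the account already on file."""
    on_file = VENDORS_ON_FILE.get(vendor_id)
    if on_file is None:
        return True  # unknown vendor: always verify by phone before paying
    return invoice_iban != on_file["iban"]

# An AI-altered invoice pointing at a new account gets held for a call-back.
print(requires_verification("acme-supplies", "GB29NWBK60161331926819"))  # True
```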

Even legitimate companies can find themselves using AI software programs that violate antitrust laws. The Department of Justice is now focused on rooting out anticompetitive price-fixing AI software used in real estate leasing, airline pricing, and hotel pricing.

The court system is not designed to handle millions of offending AI Bots located all over the world. Just as the judiciary rarely provides relief from ransomware, stopping fraud propagated by AI Bots will require alternative solutions. It is highly likely that generative AI Bots will need to be developed to “enforce” the laws, acting somewhat like white blood cells shutting down infections.

Those who fear machines taking over humanity may have exaggerated concerns, but with the advent of the AI Bot, it appears that fraud and “critical harm” may, in reality, not be prevented before it’s too late.

  1. Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act), April 19, 2024.



About the Author

Stephen Boatwright is a shareholder and member of the Board of Directors at Gallagher & Kennedy in Phoenix. He represents several AI clients and serves on the Board of Directors of the AI Venture Network in Arizona. Steve handles public and private business mergers and acquisitions for buyers and sellers of businesses of all kinds and advises companies in raising private equity and public financing.
