Regulating A.I.: Handcuffs, Hammers, or Shared Keys?
Big Tech’s offer to self-regulate highlights the need for a new approach to the problems of A.I.
Friday’s news that seven of the world’s biggest A.I. firms—including Microsoft, OpenAI, Google, Amazon, and Meta—have signed a joint commitment to take voluntary action to safeguard against the dangers of A.I. raises an important set of questions: How should we think about dealing with the risks of A.I. when the technology is moving so fast? What should regulation be focused on? And what tools should regulators be using?
This last question is critical. It is easy to imagine that imposing strict rules on the designers of A.I. will be the swiftest route to protecting our safety, well-being, and values. But, looking at the host of regulations being considered, I would argue for a different approach—one that aims to accelerate our understanding of the technology itself. This approach, which I call “shared keys,” focuses on mandatory open sharing by A.I.’s creators. I believe it will be essential to inform all other regulation that may follow.
Still life of handcuffs, hammer, and key. Created with DALL-E 2.
The Problems of A.I.
Before we can seriously begin to regulate A.I., we need to first ask, What problems are we trying to solve? I would suggest the following list of A.I. problems that are already serious today, a source of public concern, or both:
Unreliability—Both factual “hallucinations” in generative A.I. and false patterns in predictive A.I.
Misrepresentation—From disinformation campaigns using “deep fake” images, to students submitting homework written entirely by ChatGPT.
Unintended bias—Inaccurate diagnoses in medicine, as well as wrongful discrimination in hiring, credit scoring, or policing.
Rights to privacy—Most notably, the freedom from omniscient public surveillance.
Rights to data—Control over how your words, actions, and images are used to train A.I. systems, and rights to derivative content that is created with your data. (A top concern of striking television and screen actors, as well as companies like Reddit.)
Independent action—What choices do we want an algorithm to make without human supervision? In many cases, automated decision-making will be critical to the benefits of A.I. But in other cases, letting A.I. decide on its own could be disastrous. Note that this problem is distinct from the next one…
Self-agency (the new re-definition of “A.G.I.”)—A.I. pursues an agenda of its own, thanks to emerging self-awareness or some other means. Note that this is a purely hypothetical problem, untethered to the reality of any technology actually being built. Still, this is the most talked-about problem in A.I., so I don’t want to dismiss it out of hand.
Three Tools for Regulation
How should we go about managing the diverse and complex problems stemming from the rapid advance of A.I.? It is important to recognize that governments have more than one tool at their disposal.
I see three major tools for regulation that are being considered. Each will be important in its own way.
Handcuffs: Governments constrain themselves. This includes statutes that limit national and local governments’ own use of A.I. in order to protect citizens’ rights. It would also include collective agreements and treaties among national governments, on the model of the IAEA’s oversight of nuclear technology.
Hammers: Governments regulate businesses. This tool has been the subject of the most discussion in the halls of government and in the press. Most talk focuses on rules that could force A.I.’s makers (like the signers of Friday’s pledge) to build safeguards into their products. But in many cases, regulation is more effectively applied to those who use a new technology, rather than those who build it. We did not enforce copyright in the era of print media by regulating photocopying machines.
Shared Keys: Government mandates openness to testing and research. Some of the thorniest issues around A.I. stem from the obscurity of how companies’ algorithms work. Beyond the issue of “black box” models, companies have an inherent interest in maintaining secrecy for competitive reasons. Therefore, an important tool for regulators is the ability to force companies to share data and allow testing, auditing, and research on their models by outside observers, to better understand the emergent problems that we hope to head off.
The Right Tool for the Problem
As we think about how to tackle the challenges of A.I., we should match the right tool(s) to the different problems we are seeking to address.
Market forces
Of course, regulation is not the only way to solve problems in technology. In some circumstances, A.I.’s problems will be best solved through market forces. The problem of unreliability in large language models (their tendency to fabricate facts) falls into this category. The makers of these models are highly motivated to improve their reliability, to the degree it is technically feasible. To the degree that it is not, LLMs will be adopted by users in some contexts (e.g., drafting a thank-you letter) but not in others (e.g., drafting a legal brief to be filed in court).
Handcuffs on government
In other cases, government self-restraint (i.e., “Handcuffs”) will be a clear solution. International treaties will be required to address the problem of independent action by A.I. in warfare. Autonomous weapons are no longer a hypothetical, as battlefield drones have flown right up to the ethical line between identifying targets on their own and pulling the trigger to use lethal force without a human decision-maker in the loop. Domestically, “handcuffs” are already being tried to constrain government surveillance (e.g., laws against facial recognition A.I.) in order to protect rights to privacy. As algorithms are given greater independence to make decisions, similar laws may be needed to constrain independent action, e.g., to prevent A.I. from acting as police, judge, and jury in sentencing.
Hammers on industry
The traditional “hammer” of regulation on business will clearly be needed for other problems—with regulations crafted for specific industries to prevent their greatest risks and spell out the legal liability for businesses. In transportation, the risks of unreliability have spurred states to regulate autonomous vehicles. In consumer financial services, black box algorithms are largely forbidden in the U.S., thanks to existing laws that preclude unwanted bias in decisions such as which customers to extend credit to.
In the media industries, I expect we will see the biggest need for new regulatory frameworks. The latest wave of generative A.I. scrambles our longstanding idea of what it means to “reuse” someone else’s work. Maintaining any meaningful rights to data for content creators will likely require a complete reworking of intellectual property law.
Hammers on A.I.’s makers
For some problems, though, regulators will seek to apply rules to the makers of A.I. themselves. Lina Khan, head of the U.S. Federal Trade Commission, has advocated suing companies for fraud perpetrated using A.I. But prohibitions against A.I. misrepresentation may be too hard to enforce. So regulators are looking to require that A.I.’s designers build in protections instead. Hence the notion of “watermarks” in generative A.I. to indicate the algorithmic source of a piece of text, an image, or an audio clip. (This notion was one of the few tangible goals in Friday’s corporate pledge.) But history shows that using technology to address a problem of human behavior is often a Sisyphean effort (see: spam).
Shared keys
The one regulatory tool that should undergird our efforts to manage all of the problems of A.I. is “shared keys,” an approach based on open research, independent audits, and testing of A.I. technology by experts outside the companies that are developing it.
Whether we want to mitigate the risks of unintended bias, independent action, or unreliability, we will do far better if our regulators start from a clear understanding of the A.I. systems at work in the marketplace. That will mean tech companies must be required to allow research and testing by independent third parties to assess their models and better understand the bias at play, the sources and frequency of hallucinations, the way that source material is used to create new content, and every other question that is central to fathoming A.I.’s problems.
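To make that concrete, here is a minimal sketch of one kind of test an independent auditor might run under a “shared keys” mandate: a disparate-impact check on a hiring model, using matched candidate profiles that differ only in a protected attribute. The screen_resume function below is a hypothetical stand-in for a company’s proprietary model (mandated access would let auditors query the real thing); the rest is standard selection-rate arithmetic, the so-called “four-fifths rule.”

```python
# Hypothetical sketch of a third-party bias audit under a "shared keys" mandate.
from collections import defaultdict
import random

def screen_resume(profile: dict) -> bool:
    """Stand-in for the proprietary hiring model under audit (hypothetical)."""
    score = 0.5 + 0.1 * profile["years_experience"]
    if profile["group"] == "B":      # an unintended proxy effect the audit should surface
        score -= 0.15
    return random.random() < min(score, 0.95)

def disparate_impact(profiles: list) -> dict:
    """Selection rate per group, plus the ratio used in the 'four-fifths rule'."""
    selected, total = defaultdict(int), defaultdict(int)
    for p in profiles:
        total[p["group"]] += 1
        selected[p["group"]] += screen_resume(p)
    rates = {g: selected[g] / total[g] for g in total}
    return {"selection_rates": rates,
            "impact_ratio": min(rates.values()) / max(rates.values())}

# Matched profiles that differ only in the protected attribute.
random.seed(0)
audit_set = [{"group": g, "years_experience": y}
             for y in range(1, 6) for g in ("A", "B") for _ in range(200)]

print(disparate_impact(audit_set))   # an impact_ratio below ~0.8 is the conventional red flag
```

In practice, auditors would query the company’s actual model through the access a mandate provides, rather than a stand-in, but the statistical test itself is the same.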
Even the questionable problem of self-agency calls out for a “shared keys” approach. How do you regulate a hypothetical problem with no basis in current reality? The normal response would be to ignore it. But, to err on the side of caution (and respect public concerns), why not start by mandating that tech companies participate in open research on the subject?
Let me be clear: the “shared keys” approach will require three things. First, it must be backed by legal mandates, to prevent companies from using trade secrets as an excuse to block outside auditors and researchers. Second, it will require new, empowered third parties—i.e., independent researchers allowed to audit and test companies’ A.I. models. Third, it will require government funding to support this work, perhaps paid for by a tax on the A.I. firms themselves.
Summary
Rapidly advancing A.I. technologies are fast becoming embedded in every industry, appearing every day in new applications and use cases. As the technology improves, we should expect ever-deeper integration into our economies and our public sector.
Society and governments must address the many complex risks of A.I. while we also pursue its positive potential. Doing so will require a multi-pronged approach that includes self-imposed restraints on governments, regulation of how different industries use A.I., and regulation of A.I.’s designers. But most importantly, it should start with a “shared keys” approach that combines mandates and investment in open research to foster a greater understanding of A.I.’s emerging problems as the technology continues to advance.
This is the first in a series of periodic articles on A.I.