On Generative AI Policy
[This post is adapted from some remarks to be delivered on 28 July 2024 in Melbourne]
Part of a rapidly evolving stack
Generative AI is just one part of a rapidly evolving stack of frontier technologies. Engineers and entrepreneurs are building out digital infrastructure: blockchains, quantum computing, encryption and smart contracts. Together these underpin digital monies, contracts, organisations and property rights.
This radical and rapid expansion in infrastructure choice hasn’t come from begging governments for better institutions. Radical institutional progress won’t happen through political coalition building or consensus making. In 2024 and beyond it is being driven by entrepreneurs, engineers and communities who build.
When regulating markets we know the importance of competition, and the same should hold when regulating frontier digital technologies like generative AI. These New Technologies of Freedom are pushing competition into our institutional stack in unprecedented ways: global, open and permissionless. This phenomenon should urge governments to err on the side of permissionless innovation.
From chatbots to autonomous economic agents
It’s rare that we know what new technologies are for.
Consider bubble wrap, initially designed as wallpaper; the internet, which began as a military communications network; or the Slinky, which started life as a spring to stabilise ship instruments.
We discover the uses for new technologies through experimentation. Entrepreneurs apply judgement, make guesses, and put new technologies to work in novel ways; the market then validates (or rejects) those guesses.
Today it’s easy to dismiss generative AI as better chatbots. But as a nascent technology it’s highly likely that we simply don’t know what it is for yet. Discovering what technologies are for is a long, messy and unpredictable process (even without entrenched regulatory systems holding back that experimentation).
Consider a more ambitious application of generative AI: autonomous economic agents.
We can use the predictive powers of generative AI to create new types of economic agents. We can train general foundation models on our preferences (e.g. budget, risk), give them some capital (e.g. cryptocurrency), and delegate power to them to act on our behalf (e.g. trade).
Such autonomous economic agents aren’t far away. They could expand our capacity for voluntary market exchange and be deployed for a wide variety of purposes: from investing to voting in organisations.
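To make this concrete, here is a minimal sketch of such an agent in Python. Everything in it is an illustrative assumption rather than a real system: the Preferences type, the stubbed model_propose_trade function (standing in for a call to a foundation model), and the spending rules are all hypothetical.

```python
from dataclasses import dataclass
import random

@dataclass
class Preferences:
    budget: float          # maximum capital the owner allows per trade
    risk_tolerance: float  # 0.0 (risk averse) to 1.0 (risk seeking)

def model_propose_trade(prefs: Preferences, market_price: float) -> dict:
    """Stand-in for a foundation model conditioned on the owner's
    preferences. A real agent would query a model here; this stub
    just makes a bounded random proposal for illustration."""
    size = round(prefs.budget * prefs.risk_tolerance * random.random(), 2)
    return {"side": random.choice(["buy", "sell"]),
            "size": size,
            "price": market_price}

class AutonomousEconomicAgent:
    """An agent holding delegated capital (e.g. cryptocurrency) and a
    mandate (the owner's preferences) within which it may trade."""

    def __init__(self, prefs: Preferences, capital: float):
        self.prefs = prefs
        self.capital = capital

    def act(self, market_price: float):
        proposal = model_propose_trade(self.prefs, market_price)
        # Delegated power is bounded: decline anything outside the
        # owner's budget or the agent's remaining capital.
        if proposal["size"] > min(self.capital, self.prefs.budget):
            return None
        if proposal["side"] == "buy":
            self.capital -= proposal["size"]
        return proposal

agent = AutonomousEconomicAgent(Preferences(budget=100.0, risk_tolerance=0.3),
                                capital=250.0)
print(agent.act(market_price=42.0))
```

The design point in this sketch is the bound on delegation: the model proposes, but hard constraints on budget and capital decide what the agent may actually execute.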
The point here is that we simply don’t know what generative AI is good for. There is a jagged frontier, and the only way to navigate it is experimentation in markets.
Unfortunately that experimentation, like the origins of many new technologies, comes alongside a technopanic:
Bias in outputs (such as political correctness) and a lack of explainability
Hallucinations (yes, the models make things up)
Control by a few large companies
Fuzzy data rights
These problems are real and hard. But any policy solutions should drive towards more openness and choice.
Openness and choice as antidote
Take the problem of AI bias. There are regular claims that the outputs of generative AI models are biased on different dimensions (e.g. political correctness). While it is easy to imagine some form of perfectly “unbiased AI”, we must also realise the world is imperfect:
Bias is inevitable. Major foundation models are trained on large swathes of data from the internet, then shaped by fine-tuning and reinforcement learning, including direct manipulation to produce outputs that are effective (or politically correct) for the user. The question here is “what kind of bias?”.
Bias is comparative. Governments, corporations, committees and people are biased too, just in different ways. The question here is “biased compared to what?”.
Bias isn’t always bad. As consumers of media we often gravitate towards outlets that confirm our own biases. Here bias is a feature, not a bug. Similarly, many people will want generative AI models to have degrees of bias. The question here is “what kind of bias do you want?”.
Many proposed solutions to AI bias centre on some type of enforced “auditing” or “explainability”. These are attempts to make the models unbiased. There are several challenges with this. First, it generates other threats, such as expanding political power. Second, it is simply not possible. Unbiased generalised AI is a myth.
Like many of the problems with frontier technologies, the way to navigate the “problem” of bias is to gear towards openness and choice. You should want a world where you can choose between different types of bias across different models. The alternative is a world where bias is controlled by some government department handing out licences to the (few remaining) AI models.
Trends towards open source AI are welcome. But that openness should not be forced by governments through overarching and burdensome licensing regimes. Such regimes will reduce competition and suppress consumer choice, not drive us towards perfectly unbiased AI.
Regulate application not development
Beyond bias, generative AI faces a broader scope of policy issues. Governments can approach these in two ways:
Regulate AI development
Regulate AI applications
Regulating development
Regulating development means directing regulatory power at the developers of AI models. Approaches here include licensing (or banning) the development of new models and forcing audits and transparency onto developers. The potential costs of these regimes are enormous, as in other policy areas that take such a permissioning approach. Directing regulatory attention at the development of AI will not only concentrate power, but will have a chilling effect on innovation and experimentation.
Regulating applications
Alternatively, and preferably, we can focus on the applications of AI in specific contexts. Here we apply existing law to how generative AI models are used in practice, for instance where a particular use is fraudulent or misleading. The application of generative AI, like that of all general purpose technologies, is highly contextual. We should regulate it that way.
We need a culture of openness and experimentation with AI. Seeking this culture is an acknowledgement that new technologies aren’t perfect, but that realising their value means letting them move fast.