Generative AI radically lowers the costs of making autonomous artificial economic agents. We can train these economic agents on our preferences, give them goals, and delegate powers to act on our behalf. Continued technological progress will only make these agents more powerful and agentic.
While many generative AI debates grapple with productivity gains or labour displacement, my aim is to explore some questions about the economics of these new agents. What type of knowledge is embedded in them? Are there predictable governance hazards? What economic institutions do they need?
I argue that making artificial economic agents:
Is an ongoing co-production process of economic alignment: sufficiently aligning our agents with us and our preferences. This view contrasts with the common perspective of aligning LLMs to some universal aggregate of societal values. Training an autonomous economic agent happens under uncertainty and works by embedding the local knowledge of the user.
Generates principal-agent governance hazards. A long history of principal-agent theory teaches us that there will be governance challenges between us and our agents (whether those agents are robots or people). While these hazards are inevitable, they can be ameliorated through governance design. These hazards might change over time, resembling familiar patterns such as the ‘fundamental transformation’.
Requires us to develop new robust digital institutions. To be agentic, our agents must act on our behalf. But how? The agents we make need robust digital and machine-readable institutions (e.g. smart contracts). These institutions are necessary not only to make agent-to-agent trade possible but also for security (to constrain chaotic robots).
While AI will not solve societal planning problems, at the individual level autonomous agents will augment and improve our capacity to coordinate and trade with others. Nobel Laureate Friedrich Hayek taught us that this capacity is fundamentally limited by local knowledge; by leveraging these agentic capabilities we extend the order of the market.
The emergence and alignment of artificial economic agents
Generative AI models are prediction machines. As predictions become cheaper and of higher quality (e.g. more accurate), we will use more of them. We will replace traditional methods of managing uncertainty (e.g. organisational rules) with more bespoke prediction-based decisions.
While a generative AI model holds structured general knowledge about the world, the latent information in the model is only revealed in a specific context. This implies that making these prediction machines useful requires supplying knowledge such as a problem, objectives and context.
Even basic non-agent uses of generative AI are a process of co-production. Much of this happens through prompting. Prompting a large language model (LLM) is the process of combining your knowledge with knowledge in the model. The result is a co-produced output.
Basic uses of generative AI are iterative. They involve ongoing judgements by the user. Effective prompting to produce valuable outputs combines a user's entrepreneurial judgement with their capacity to reveal latent information within the trained model.
Making an artificial economic agent is similarly a co-produced and iterative process. The person making the agent (the principal) must train the agent by providing and updating data, such as preferences, constraints or goals. The agent itself is co-produced and emerges over time.
The AI “alignment problem” is typically framed at some aggregate societal level. The aim is to align LLMs with society’s values or preferences, ensuring they act in the best interest of society at large.
But artificial economic agents are best understood as creating an alignment problem at the principal-agent level. There is an incomplete contract between the principal and agent that is shaped and evolves through co-production.
Making artificial economic agents is a co-production process of economic alignment. This process of combining the principal's local knowledge with the structured knowledge of the model seeks to make the agent sufficiently aligned (to the individual) over time.
Towards sufficient autonomous agent alignment
But when is an economic agent aligned enough?
For me to decide to delegate some authority to an autonomous agent, I must believe that it is sufficiently aligned. I must evaluate the costs and benefits of aligning and trusting the agent to act on my behalf.
One critical factor in the principal-agent relationship is trust (or contractual safeguards). That trust partly comes from my subjective belief in alignment: that the agent’s actions are fine-tuned to reflect my preferences, goals, and risk tolerances. This alignment process involves continuous interaction, feedback, and adjustment.
Aligning my agent requires time and effort. I have to provide it with data, argue with it, be misunderstood, and establish protocols for future decision-making. Once I think that my agent is sufficiently aligned and trustworthy, the cost of monitoring and supervising the agent decreases, making delegation (or more delegation) possible.
Alignment is not a one-time event but an ongoing process. The principal, the agent and the environment are not static. The principal provides data and feedback, while the AI adapts and learns. This iterative process involves significant uncertainty and requires entrepreneurial judgement to navigate effectively.
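To make this concrete, here is a minimal, purely illustrative sketch of such an alignment loop in Python. The names and numbers (AlignmentState, trust_threshold, the feedback values) are my own assumptions rather than any real framework's API; the point is only that repeated feedback raises the principal's trust and lowers monitoring costs until delegation becomes worthwhile.

```python
# Illustrative sketch only: a stylised principal-agent alignment loop.
# All names and parameters are hypothetical, not drawn from any real agent framework.
from dataclasses import dataclass

@dataclass
class AlignmentState:
    trust: float            # the principal's subjective belief that the agent is aligned (0 to 1)
    monitoring_cost: float  # cost of supervising each delegated decision

def update_trust(state: AlignmentState, feedback: float, learning_rate: float = 0.2) -> AlignmentState:
    """Each round of co-production (data, feedback, correction) nudges trust up or down."""
    new_trust = min(1.0, max(0.0, state.trust + learning_rate * feedback))
    # As trust grows, the principal supervises less intensively.
    new_monitoring = state.monitoring_cost * (1.0 - 0.5 * new_trust)
    return AlignmentState(trust=new_trust, monitoring_cost=new_monitoring)

def should_delegate(state: AlignmentState, expected_benefit: float, trust_threshold: float = 0.8) -> bool:
    """Delegate only when the agent seems sufficiently aligned and the benefit exceeds monitoring costs."""
    return state.trust >= trust_threshold and expected_benefit > state.monitoring_cost

# Example: three rounds of positive feedback raise trust enough to justify delegation.
state = AlignmentState(trust=0.5, monitoring_cost=10.0)
for feedback in [0.5, 0.8, 0.9]:
    state = update_trust(state, feedback)
print(should_delegate(state, expected_benefit=12.0))  # True once trust crosses the threshold
```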
While sufficient agent alignment is a problem for me at the individual level (I want more agents in my life), it is also a constraint on expanding the number of autonomous economic agents in the economy.
A fundamental transformation
Governance challenges are often pinpointed at the companies and people building foundation generative AI models. But there are also governance challenges at the principal-agent level. As with any principal-agent relationship, autonomous economic agents come with the costs of searching, negotiating and enforcing the relationship.
Nobel Laureate Oliver Williamson described the “fundamental transformation” as a governance challenge in contractual relationships: a “transformation” occurs when a competitive relationship evolves into a dependent one. That dependence emerges because of significant asset-specific investments.
In the context of AI, this could mean that once a principal invests heavily in training and aligning an agent, switching to another provider becomes costly and impractical. Governance mechanisms are needed to manage this dependency and mitigate risks such as opportunism and bounded rationality.
Private governance solutions, such as long-term contracts, relational norms, and technological safeguards might help. Regulation may also be useful but is constrained by the individual contextual nature of the principal-agent relationship. Furthermore, regulation might inhibit the process of alignment discovery. Often alignment is only revealed through letting my agent out into the world.
The need for robust institutional architecture
If I have sufficiently aligned an economic agent to me, then how can it act in the world? For autonomous agents to function effectively, they require a robust institutional infrastructure.
There is going to be an increasing reliance on other parts of the technology stack. For instance, blockchains and smart contracts are crucial. Blockchains can establish the rules and protocols that govern interactions between agents. They can embed deterministic security into non-deterministic, chaotic robots. They might even help with dispute resolution.
Creating these digital, machine-readable institutions is itself an entrepreneurial process. It involves designing systems that can lower transaction costs and facilitate seamless trade between autonomous agents and us. This infrastructure ensures that agents can operate efficiently, transparently, and securely.
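As a stylised illustration of what such a machine-readable institution might look like, the sketch below wraps an agent's proposed payments in a deterministic rule set. The names (SpendingRules, GuardedWallet, the example counterparties) are hypothetical; in practice the same logic might be encoded in a smart contract rather than in application code.

```python
# Illustrative sketch: a deterministic, machine-readable rule set that sits
# between a non-deterministic agent and the outside world. All names here
# are hypothetical and chosen for illustration only.
from dataclasses import dataclass, field

@dataclass
class SpendingRules:
    per_transaction_cap: float = 100.0   # no single payment above this amount
    daily_cap: float = 500.0              # total spend allowed per day
    allowed_counterparties: set = field(default_factory=lambda: {"0xMarketplaceA", "0xSupplierB"})

@dataclass
class GuardedWallet:
    rules: SpendingRules
    spent_today: float = 0.0

    def propose_payment(self, counterparty: str, amount: float) -> bool:
        """The agent can propose anything; only payments satisfying every rule are executed."""
        if counterparty not in self.rules.allowed_counterparties:
            return False
        if amount > self.rules.per_transaction_cap:
            return False
        if self.spent_today + amount > self.rules.daily_cap:
            return False
        self.spent_today += amount
        return True

# Example: whatever the agent "decides", the guard enforces the principal's constraints.
wallet = GuardedWallet(rules=SpendingRules())
print(wallet.propose_payment("0xMarketplaceA", 80.0))   # True: within all limits
print(wallet.propose_payment("0xUnknownC", 20.0))       # False: counterparty not whitelisted
print(wallet.propose_payment("0xSupplierB", 450.0))     # False: breaches per-transaction cap
```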
An economy of autonomous economic agents
As transaction costs for aligning and trusting autonomous agents decrease, we can start to see new types of economic activity emerging.
Principals will continue to delegate more tasks to their autonomous economic agents. These tasks will expand from routine operations to more strategic and complex decisions.
Autonomous agents will make more real-time decisions and transactions. New types of markets might emerge that are more responsive, efficient and competitive.
Ultimately the digital economy that we are building will come with increasing interactions between us and artificial economic agents. There will be a co-evolution between these agents, institutions and other frontier technologies.
Economic theory helps us to better understand the alignment, dynamics and governance of autonomous economic agents. But ultimately to propel the creation of more aligned artificial economic agents we need more openness, competition, and robust digital institutions.
Darcy Allen, RMIT University