Let Coase Drive Us Home: AI, Institutions, and the Economics of the Future
In 2015, I co-authored a short piece titled "Let Ronald Coase Drive Us Home," where we made a simple but powerful argument: the real bottleneck for self-driving cars wasn't technological. It was institutional.
The prototypes were already functional. Engineers were rapidly improving the hardware and software. But there was no clear rule about liability: if an autonomous car caused an accident, who was responsible? The manufacturer? The owner? The software developer? This legal ambiguity created insurance uncertainty, which in turn discouraged adoption.
We argued that what was missing was a Coasian solution: the government should define and enforce clear property rights over accident liability. Once that institutional framework was in place, the market could price risk efficiently, and the insurance sector would adjust premiums accordingly. Whether the liability fell on the car owner or the manufacturer was a second-order issue. The key was clarity and enforceability.
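To see why the allocation is second-order, consider a deliberately stylized numerical sketch; the figures are hypothetical and chosen only for illustration. Suppose an autonomous car creates expected accident damages of D = 100 per year, and a software upgrade costing C = 30 would cut that risk in half (an expected saving of 50). With transaction costs near zero, the efficient outcome, installing the upgrade, emerges no matter who bears the liability:

$$
\begin{aligned}
&\text{Manufacturer liable:} && C = 30 \;<\; \tfrac{D}{2} = 50 &&\Rightarrow\; \text{it pays for the upgrade.} \\
&\text{Owner liable:} && \text{the owner pays any price } p \in [30,\,50] &&\Rightarrow\; \text{the upgrade still happens.}
\end{aligned}
$$

Only the division of the 20-unit surplus depends on the assignment; the upgrade, and the accident rate, do not. What does change outcomes is ambiguity: if neither party knows who is liable, neither has a clear incentive to pay for precaution, and insurers cannot price the risk.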
We even explored scenarios where self-driving cars might only be viable if all cars were autonomous, due to network effects and coordination externalities. In such cases, the transition from a bad equilibrium (mixed traffic, ambiguity, and risk) to a good one (fully autonomous traffic with far fewer accidents) could only be achieved through institutional design that gradually enabled the shift. Coase, we argued, could open the door.
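One way to make the coordination problem concrete is a stylized two-driver game; the payoffs below are hypothetical and meant only to illustrate the structure. Each driver chooses a human-driven or an autonomous car, and autonomy pays off only when the other driver is autonomous too (row player's payoff listed first):

$$
\begin{array}{c|cc}
 & \text{Human} & \text{Autonomous} \\ \hline
\text{Human} & (2,\,2) & (2,\,0) \\
\text{Autonomous} & (0,\,2) & (5,\,5)
\end{array}
$$

Both (Human, Human) and (Autonomous, Autonomous) are self-sustaining equilibria, but no driver switches alone: an autonomous car in mixed traffic earns 0 rather than 2. Escaping the inferior equilibrium takes an institutional push, such as clear liability rules or a coordinated transition, that shifts payoffs and expectations for everyone at once.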
Now, nearly a decade later, I see the same logic applying to the broader field of artificial intelligence.
AI is no longer just about vehicles. It is diagnosing diseases, writing legal briefs, filtering job applications, and even advising judges. The algorithms are here. What’s missing, again, are the institutions.
Who owns the rights to AI-generated content? Who is liable when an algorithm makes a harmful decision? Who audits bias? Who enforces transparency? Who defines consent in a world of data-hungry predictive systems?
In 2025, the most urgent questions surrounding AI are no longer primarily about technical capacity, though capability still matters and the frontier keeps moving. Alongside debates about performance, productivity, and risk, a deeper issue has emerged: the absence of a clear institutional framework. The answers to the questions above will shape not just adoption, but trust.
The problem is not that our machines are too intelligent. It's that our legal and political systems are too slow. And just like with self-driving cars, this lag imposes costs: higher uncertainty, slower adoption, and growing distrust.
Most countries have not yet begun to legislate seriously in these areas, and the few that have often focus on superficial fixes rather than systemic reform.
Ronald Coase taught us that markets don’t operate in a vacuum—they depend on rules, on clearly defined and enforceable rights. And he taught us something even more subtle: the exact allocation of those rights matters less than the fact that they are well-defined.
In 2015, that insight helped us see why autonomous vehicles weren’t spreading as fast as expected. In 2025, the same applies to AI more broadly. The breakthroughs are real. The obstacles are institutional.
So if we want to unlock the next wave of productivity and trust, we don’t just need better models or faster chips. We need better institutions.
And that means, once again, it’s time to let Coase drive us home.
*** Disclaimer: I used ChatGPT-4.0 as an editorial and language-refinement tool. The ideas and arguments are entirely my own, and I take full responsibility for them.


