From John Cochrane.
Introduction

“AI poses a threat to democracy and society. It must be extensively regulated.” Or words to that effect: a common sentiment.
They must be kidding.
Have the chattering classes—us—who speculate about the impact of new technology on economics, society, and politics ever correctly envisioned the outcome? Over the centuries of innovation, from moveable type to Twitter (now X), from the steam engine to the airliner, from the farm to the factory to the office tower, from agriculture to manufacturing to services, from leeches and bleeding to cancer cures and birth control, from abacus to calculator to word processor to mainframe to internet to social media, has anyone ever foreseen the outcome of new technology, least of all its social and political consequences? Even with the benefit of long hindsight, do we have any historical consensus on how these and other past technological innovations affected the profound changes in society and government that we have seen in the last few centuries? Did the industrial revolution advance or hinder democracy?
Sure, in each case one can go back and find a few Cassandras who made a correct prediction—but then they got the next one wrong. Before anyone regulates anything, we need a scientifically valid and broad-based consensus.
Have people ever correctly forecast social and political changes, from any set of causes? Representative democracy and liberal society have, in their slow progress, waxed and waned, to put it mildly. Did our predecessors in 1910 see 70 years of communist dictatorship about to envelop Russia? Did they understand in 1925 the catastrophe waiting for Germany?
Society is transforming rapidly. Birth rates are plummeting around the globe. The U.S. political system seems to be coming apart at the seams with unprecedented polarization, a busting of norms, and the decline of our institutions. Does anyone really know why?
The history of millenarian apocalyptic speculation is littered with worries that each new development would destroy society and lead to tyranny, and with calls for massive coercive reaction. Most of it was spectacularly wrong. Thomas Malthus predicted, plausibly, that the technological innovations of the late 1700s would lead to widespread starvation. He was spectacularly wrong. Marx thought industrialization would necessarily lead to immiseration of the proletariat and communism. He was spectacularly wrong. Automobiles did not destroy American morals. Comic books and TV did not rot young minds.
Our more neurotic age began in the 1970s, with the widespread view that overpopulation and dwindling natural resources would lead to an economic and political hellscape, views put forth, for example, in the Club of Rome report and movies like Soylent Green. (2) They were spectacularly wrong. China acted on the “population bomb” with the sort of coercion our worriers cheer for, to its current great regret. Our new worry is global population collapse. Resource prices are lower than ever, the U.S. is an energy exporter, and people worry that the “climate crisis” from too much fossil fuel will end Western civilization, not “peak oil.” Yet demographics and natural resources are orders of magnitude more predictable than whatever AI will be and what dangers it poses to democracy and society.
“Millenarian” stems from those who worried that the world would end in the year 1000, and that people had better get serious about repenting their sins. They were wrong then, but much of the impulse, to worry about the apocalypse and then to call for massive changes, usually with “us” taking charge, is alive today.
Yes, new technologies often have turbulent effects, dangers, and social or political implications. But that’s not the question. Is there a single example of a society that saw a new developing technology, understood ahead of time its economic effects, to say nothing of social and political effects, “regulated” its use constructively, prevented those ill effects from breaking out, but did not lose the benefits of the new technology?
There are plenty of counterexamples—societies that, in excessive fear of such effects of new technologies, banned or delayed them, at great cost. The Chinese treasure fleet is a classic story. In the 1400s, China had a new technology: fleets of ships, far larger than anything Europeans would have for centuries, traveling as far as Africa. Then the emperors, foreseeing social and political change, “threats to their power from merchants” (what we might call steps toward democracy), “banned oceangoing voyages in 1430.” (3) The Europeans moved in.
Genetic modification was feared to produce “frankenfoods” or uncontrollable biological problems. On the basis of those vague fears, Europe has essentially banned genetically modified foods, despite no scientific evidence of harm. GMO bans are tragically spreading to poorer countries, including bans on vitamin A-enhanced rice that could save the eyesight of millions. Most of Europe went on to ban hydraulic fracturing. U.S. energy regulators didn’t have similar power to stop it, though they would have used it if they could. The U.S. led the world in carbon reduction, and Europe bought gas from Russia instead. Nuclear power was regulated to death in the 1970s over fears of small radiation exposures, greatly worsening today’s climate problem. The fear remains, and Germany has now turned off its nuclear power plants as well. In 2001, the Bush administration banned federal funding for research on new embryonic stem cell lines. Who knows what we might have learned.
Climate change is, to many, the current threat to civilization, society, and democracy (the latter from worry about “climate justice” and waves of “climate refugee” immigrants). However much you believe in the social and political impacts—much less certain than the meteorological ones—one thing is for sure: Trillion-dollar subsidies for electric cars, made in the U.S., with U.S. materials, U.S. union labor, and page after page of restrictive rules, along with 100% tariffs against much cheaper Chinese electric cars, will not save the planet—especially once you realize that every drop of oil saved by a new electric car is freed up to be used by someone else, and all at astronomical cost. Whether you’re Bjorn Lomborg or Greta Thunberg on climate change, the regulatory state is failing.
We also suffer from narrow-focus bias. Once we ask “what are the dangers of AI?” a pleasant debate ensues. If we ask instead “what are the dangers to our economy, society, and democracy?” surely a conventional or nuclear major-power war, civil unrest, the unraveling of U.S. political institutions and norms, a high death-rate pandemic, crashing populations, environmental collapse, or just the consequences of an end to growth will light up the scoreboard ahead of the vague dangers of AI. We have almost certainly just experienced the first global pandemic caused by a human-engineered virus. It turns out that gain-of-function research was the research that needed regulating. Manipulated viruses, not GMO corn, were the biological danger.
I do not deny potential dangers of AI. The point is that the advocated tool, the machinery of the regulatory state, guided by people like us, has never been able to see social, economic, and political dangers of technical change, or to do anything constructive about them ahead of time, and is surely just as unable to do so now. The size of the problem does not justify deploying completely ineffective tools.
Preemptive regulation is even less likely to work. AI is said to be an existential threat, a fancier version of “the robots will take over,” requiring preemptive “safety” regulation before we even know what AI can do and before dangers reveal themselves.
Most regulation takes place as we gain experience with a technology and its side effects. Many new technologies, from industrial looms to automobiles to airplanes to nuclear power, have had dangerous side effects. They were addressed as they emerged, judging costs against benefits. There has always been time to learn, to improve, to mitigate, to correct, and, where necessary, to regulate, once a concrete understanding of the problems had emerged. Would a preemptive “safety” regulator looking at airplanes in 1910 have been able to produce that long experience-based improvement, writing the rule book governing the Boeing 737, without killing air travel in the process? AI will follow the same path.
I do not claim that all regulation is bad. The Clean Air and Clean Water Acts of the early 1970s were quite successful. But consider all the ways in which they are so different from AI regulation. The dangers of air pollution were known. The nature of the “market failure,” classic externalities, was well understood. The technologies available for abatement were well understood. The problem was local. The results were measurable. None of those conditions is remotely true for regulating AI, its “safety,” its economic impacts, or its impacts on society or democratic politics. Environmental regulation is also an example of successful ex post rather than preemptive regulation. Industrial society developed, we discovered safety and environmental problems, and the political system fixed those problems, at tolerable cost, without losing the great benefits. If our regulators had required Watt’s steam engine or Benz’s automobile (about where we are with AI) to pass “effect on society and democracy” review, we would still be riding horses and hand-plowing fields.
Who will regulate?

Calls for regulation usually come in the passive voice (“AI must be regulated”), leaving open the question of just who is going to do this regulating.
We are all taught in first-year economics classes a litany of “market failures” remediable by far-sighted, dispassionate, and perfectly informed “regulators.” That normative analysis is not logically incorrect. But it abjectly fails to explain the regulation we have now, or how our regulatory bodies behave, what they are capable of, and when they fail. The question for regulating AI is not what an author, appointing him or herself benevolent dictator for a day, would wish to see done. The question is what our legal, regulatory, or executive apparatus can even vaguely hope to deliver, buttressed by analysis of its successes and failures in the past. What can our regulatory institutions do? How have they performed in the past?
Scholars who study regulation abandoned the Econ 101 view a half-century ago. That pleasant normative view has almost no power to explain the laws and regulations that we observe. Public choice economics and history tell instead a story of limited information, unintended consequences, and capture. Planners never have the kind of information that prices convey. (4) Studying actual regulation in industries such as telephones, radios, airlines, and railroads, scholars such as Buchanan and Stigler found capture a much more explanatory narrative: industries use regulation to get protection from competition, and to stifle newcomers and innovators. (5) They offer political support and a revolving door in return. When telephones, airlines, radio and TV, and trucks were deregulated in the 1970s, we found that all the stories about consumer and social harm, safety, or “market failures” were wrong, but regulatory stifling of innovation and competition was very real. Already, Big Tech is using AI safety fear to try again to squash open source and startups, and to defend profits accruing to their multibillion-dollar investments in easily copiable software ideas. (6) Seventy-five years of copyright law to protect Mickey Mouse is not explainable by Econ 101 market failure.
Even successful regulation, such as the first wave of environmental regulation, is now routinely perverted for other ends. People bring environmental lawsuits to endlessly delay projects they dislike for other reasons.
The basic competence of regulatory agencies is now in doubt. On the heels of the massive failure of financial regulation in 2008 and again in 2021, (7) and the obscene failures of public health in 2020–2022, do we really think this institutional machinery can artfully guide the development of one of the most uncertain and consequential technologies of the last century?
And all of my examples asked regulators only to address economic issues, or easily measured environmental issues. Is there any historical case in which the social and political implications of any technology were successfully guided by regulation?
It is AI regulation, not AI, that threatens democracy.
Large Language Models (LLMs) are currently the most visible face of AI. They are fundamentally a new technology for communication, for making one human being’s ideas discoverable and available to another. As such, they are the next step in a long line from clay tablets, papyrus, vellum, paper, libraries, moveable type, printing machines, pamphlets, newspapers, paperback books, radio, television, telephone, internet, search engines, social networks, and more. Each development occasioned worry that the new technology would spread “misinformation” and undermine society and government, and needed to be “regulated.”
The worriers often had a point. Gutenberg’s moveable type arguably led to the Protestant Reformation. Luther was the social influencer of his age, writing pamphlet after pamphlet of what the Catholic Church certainly regarded as “misinformation.” The church “regulated” with widespread censorship where it could. Would more censorship, or “regulating” the development of printing, have been good? The political and social consequences of the Reformation were profound, not least a century of disastrous warfare. But nobody at the time saw what they would be. They were more concerned with salvation. And moveable type also made the scientific journal and the Enlightenment possible, spreading a lot of good information along with “misinformation.” The printing press arguably was a crucial ingredient for democracy, by allowing the spread of those then-heretical ideas. The founding generation of the U.S. had libraries full of classical and enlightenment books that they would not have had without printing.
More recently, newspapers, movies, radio, and TV have been influential in the spread of social and political ideas, both good and bad. Starting in the 1930s, the U.S. had extensive regulation, amounting to censorship, of radio, movies, and TV. Content was regulated, licenses given under stringent rules. Would further empowering U.S. censors to worry about “social stability” have been helpful or harmful in the slow liberalization of American society? Was any of this successful in promoting democracy, or just in silencing the many oppressed voices of the era? They surely would have tried to stifle, not promote, the civil rights and anti-Vietnam War movements, as the FBI did.
Freer communication by and large is central to the spread of representative democracy and prosperity. And the contents of that communication are frequently wrong or disturbing, and usually profoundly offensive to the elites who run the regulatory state. It’s fun to play dictator for a day when writing academic articles about what “should be regulated.” But think about what happens when, inevitably, someone else is in charge.
“Regulating” communication means censorship. Censorship is inherently political, and almost always serves to undermine social change and freedom. Our aspiring AI regulators are fresh off the scandals revealed in Murthy v. Missouri, in which the government used the threat of regulatory harassment to censor Facebook and X. (8) Much of the “misinformation,” especially regarding COVID-19 policy, turned out to be right. It was precisely the kind of out-of-the-box thinking, reconsidering of the scientific evidence, and speaking truth to power that we want in a vibrant democracy and a functioning public health apparatus, though it challenged verities propounded by those in power and, in their minds, threatened social stability and democracy itself. Do we really think that more regulation of “misinformation” would have sped sensible COVID-19 policies? Yes, uncensored communication can also be used by bad actors to spread bad ideas, but individual access to information, whether via shortwave radio, samizdat publications, text messages, Facebook, Instagram, or now AI, has always been a tool benefiting freedom.
Yes, AI can lie and produce “deepfakes.” The brief era in which a photograph or video was, by itself, evidence that something happened, because photographs and videos were difficult to doctor, is over. Society and democracy will survive.
AI can certainly be tuned to favor one or another political view. Look only at Google’s Gemini misadventure. (9) Try to get any of the currently available LLMs to report controversial views on hot-button issues, or even to give medical advice. Do we really want a government agency imposing a single tuning, in a democracy in which the party you don’t support might eventually win an election? The answer is, as it always has been, competition. Knowing that AI can lie produces a demand for competition and certification. AI can detect misinformation, too. People want true information, and will demand technology that can certify whether something is real. If an algorithm is feeding people misinformation, as TikTok is accused of doing on behalf of Chinese censors, (10) count on its competitors, if allowed to do so, to scream that from the rafters and attract people to a better product.
Regulation naturally bends to political ends. The Biden Executive Order on AI insists that “all workers need a seat at the table, including through collective bargaining,” and “AI development should be built on the views of workers, labor unions, educators, and employers.” (11) Writing in the Wall Street Journal, Ted Cruz and Phil Gramm report: “Mr. Biden’s separate AI Bill of Rights claims to advance ‘racial equity and support for underserved communities.’ AI must also be used to ‘improve environmental and social outcomes,’ to ‘mitigate climate change risk,’ and to facilitate ‘building an equitable clean energy economy.’” (12) All worthy goals, perhaps, but one must admit those are somewhat partisan goals not narrowly tailored to scientifically understood AI risks. And if you like these, imagine what the likely Trump executive order on AI will look like.
Regulation is, by definition, an act of the state, and thus used by those who control the state to limit what ideas people can hear. Aristocratic paternalism of ideas is the antithesis of democracy.
Economics

What about jobs? It is said that once AI comes along, we’ll all be out of work. And exactly this was said of just about every innovation for the last millennium. Technology does disrupt. Mechanized looms in the 1800s did lower wages for skilled weavers, while providing a reprieve from the misery of farmwork for unskilled workers. The answer is a broad safety net that cushions all misfortunes, without unduly dulling incentives. Special regulations to help people displaced by AI, or China, or other newsworthy causes are counterproductive.
But after three centuries of labor-saving innovation, the unemployment rate is 4%. (13) In 1900, a third of Americans worked on farms. Then the tractor was invented. People went on to better jobs at higher wages. The automobile did not lead to massive unemployment of horse-drivers. In the 1970s and 1980s, women entered the workforce in large numbers. Just then, the word processor and the Xerox machine slashed demand for secretaries. Female employment did not crash. ATMs increased bank employment: tellers were displaced, but bank branches became cheaper to operate, so banks opened more of them. AI is not qualitatively different in this regard.
One activity will be severely disrupted: essays like this one. ChatGPT-5, please write 4,000 words on AI regulation, society, and democracy, in the voice of the Grumpy Economist…(I was tempted!). But the same economic principle applies: a reduction in cost will lead to a massive expansion of supply. Revenues can even go up if people want to read it, i.e., if demand is elastic enough. (14) And perhaps authors like me can spend more time on deeper contributions.
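To spell out the elasticity condition behind that claim, here is the standard textbook algebra (the symbols p, Q, R, and ε are generic notation, not figures for any actual AI market): with price p, quantity demanded Q(p), and revenue R(p) = p·Q(p),

dR/dp = Q(p)(1 + ε),  where ε = (dQ/dp)(p/Q).

If demand is elastic, ε < −1, then dR/dp < 0: as AI slashes the cost, and hence the price, of producing essays, total revenue can rise even as the price of each essay falls.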
The big story of AI will be how it makes workers more productive. Imagine you’re an undertrained educator or nurse practitioner in a village in India or Africa. With an AI companion, you can perform at a much higher level. AI tools will likely raise the wages and productivity of less-skilled workers, by more easily spreading around the knowledge and analytical abilities of the best ones.
AI is one of the most promising technical innovations of recent decades. Since the social media wave of the early 2000s, Silicon Valley has been trying to figure out what’s next. It wasn’t crypto. Now we know. AI promises to unlock tremendous advances. Consider only machine learning plus genetics, and ponder the consequent huge advances coming in health. But nobody really knows yet what it can do, or how to apply it. It was a century from Franklin’s kite to the electric light bulb, and another century to the microprocessor and the electric car.
A broad controversy has erupted in economics: whether frontier growth is over or dramatically slowing down because we have run out of ideas. (15) AI is a great hope that this is not true. Historically, ideas became harder to find in existing technologies. And then, just as it seemed growth would peter out, something new came along. Steam engines plateaued after a century; then diesel, electricity, and airplanes came along. As birthrates continue to decline, the issue is not too few jobs, but too few people. Artificial “people” may be coming along just in time!
Conclusion

As a concrete example of the kind of thinking I argue against, Daron Acemoglu writes,
We must remember that existing social and economic relations are exceedingly complex. When they are disrupted, all kinds of unforeseen consequences can follow…
We urgently need to pay greater attention to how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for. (16)
The first paragraph is correct. But the logical implication is the opposite: if relations are “complex” and consequences “unforeseen,” then the machinery of our political and regulatory state is equally incapable of foreseeing them or doing anything about them. The second paragraph epitomizes the fuzzy thinking of the passive voice. Who is this “we”? How much more “attention” can AI get than the mass of speculation in which we (this time I mean literally we) are engaged? Who does this “getting”? Who is to determine the “proper balance”? Balancing “pro-innovation public policies and democratic input” is Orwellianly autocratic. Our task was to save democracy, not to “balance” democracy against “public policies.” Is not the effect of most “public policy” precisely to slow down innovation in order to preserve the status quo? “We” not “leav[ing] it to tech entrepreneurs” means a radical appropriation of property rights and the rule of law.
What’s the alternative? Of course AI is not perfectly safe. Of course it will lead to radical changes, most for the better but not all. Of course it will affect society and our political system, in complex, disruptive, and unforeseen ways. How will we adapt? How will we strengthen democracy, if we get around to wanting to strengthen democracy rather than the current project of tearing it apart?
The answer is straightforward: As we always have. Competition. The government must enforce rule of law, not the tyranny of the regulator. Trust democracy, not paternalistic aristocracy—rule by independent, unaccountable, self-styled technocrats, insulated from the democratic political process. Remain a government of rights, not of permissions. Trust and strengthen our institutions, including all of civil society, media, and academia, not just federal regulatory agencies, to detect and remedy problems as they occur. Relax. It’s going to be great.