 Abstract

Excerpted From: Steve Calandrillo and Nolan Kobuke Anderson, Terrified by Technology: How Systemic Bias Distorts U.S. Legal and Regulatory Responses to Emerging Technology, 2022 University of Illinois Law Review 597 (2022) (567 Footnotes) (Full Document)

 

Technology saves lives. Yet our legislative and regulatory responses to emerging technologies often reflect feelings of trepidation and irrationality rather than wonder and excitement. Many of us fear new technology. Some even hate it. Why? Examples of technology saving and improving lives are legion, but so are examples of human distrust of technology. What is it about emerging technology that causes us to systematically and consistently misperceive the risk that it poses to society? Articles and studies dealing with the causes of risk misperception are a dime a dozen, but few have sought to explain how those factors target emerging technology in an invidious and particularized manner. That is the task this Article confronts head on.

Our predisposition to oppose new technology stems in part from how our brains are hardwired. The shortcuts and heuristics that our brains rely on to navigate everyday life can cause us to formulate biases against new technology that threaten to disturb the status quo. Moreover, these heuristics interact with characteristics inherent to emerging technologies to bias us even further. Systemic technological risk misperception, as we call it, causes us to inflate perceived risks associated with a given technology and clouds our perceptions of the benefits that technology promises to offer. In other words, it shifts the perceived cost curve higher and depresses the perceived benefit curve lower. Because we live in a democracy with millions of individual decisionmakers, this human bias against emerging technology inevitably percolates into the highest levels of government. The end product is deadweight loss resulting from suboptimal decision-making, legislative and regulatory overreaction, and an accompanying decrease in social welfare.

This is not an academic problem. Systemic technological risk misperception is deadly. When legislators or regulators restrict or refuse to implement a piece of technology as a result of risk misperception, we lose the opportunity to save lives, improve lives, and maintain our position on the world stage as a leader in technological development. Unfortunately, because the consequences come in the form of lost benefits and opportunities, the high toll it exacts is not always readily apparent. We are able to lull ourselves into a false sense of security by telling ourselves that it is “better to be safe than sorry.” Sadly, this simple maxim frequently proves perverse with respect to technological implementation and its regulation.

Fortunately, as we continue to learn more about cognitive biases and heuristics, we have discovered that they are not shackles that bind human decisionmakers and regulators. There are affirmative measures that each and every one of us can take to mitigate the effects of our own misperceptions and overcome the shortcomings of our subconscious. There are also legal and structural changes that we can adopt in order to insulate the decision-making process from systemic risk misperception. However, it is important to understand that the first step in solving a problem is recognizing that there is one. That is the primary contribution of this Article. Only after we accept the presence of systemic technological risk misperception inside each and every one of us, can we then begin the hard work of counteracting its effects on our decisional processes.

To that end, Part II of this Article recounts classical examples of both risk under-perception and risk over-perception outside the context of emerging technology, including our nation's close call with thalidomide, our fundamental misunderstanding of the risks associated with flying, and our panicked responses to perceived threats to child safety. Part III examines how characteristics inherent to human cognition pair with characteristics inherent to emerging technologies to create a recipe for systemic risk misperception. Part IV highlights the insidious nature of the costs that systemic technological risk misperception produces. Part V proposes a two-pronged remedial approach designed to shelter both individual and governmental decision-making from the impacts of systemic technological risk misperception in order to unleash the welfare-enhancing effects that technology can offer our society.

[. . .]

Systemic technological risk misperception is an underappreciated but increasing problem that American lawmakers and regulators must face. While the status quo might feel comfortable, progress is critical to maintain our leadership role on the world stage and our quality of life. When agencies allow human biases to control technological policy and rulemaking, deaths that otherwise might have been prevented result. Lives that otherwise might have been lifted out of poverty continue to be left behind. Opportunities to enrich the human condition are continually put off.

But this does not have to be the case. Our biases are not a ball and chain, and U.S. regulatory policy towards emerging technology need not be influenced by them. The industrial and legal reforms proposed in this Article offer the chance to protect our decision-making process from the distortionary effects of systemic technological risk misperception.

In the end, it is no mere coincidence that many of the heuristics and biases that drive systemic technological risk misperception have been inadvertently codified in colloquial proverbs such as “better safe than sorry,” “better the devil you know than the one you don't,” or “you can't teach an old dog new tricks.” These proverbs are simply repackaged, Western manifestations of human cognition and systemic biases. For centuries, we have accepted these colloquialisms as fact because to do otherwise would be to question the prism of human existence, something we have only recently become comfortable doing. The problem we face now is a question of how to combat truisms that are simply not true and how to counteract principles that are paralyzing. Simply put, how do we protect us from ourselves?


Steve Calandrillo is the Jeffrey & Susan Brotman Professor of Law, University of Washington School of Law. J.D., Harvard Law School; B.A. in Economics, U.C. Berkeley.

Nolan Kobuke Anderson is a J.D. candidate, Columbia Law School. B.A. in Government and Economics, Claremont McKenna College.