From FirstThings.com (Aug. 2023):
In 2009, one of Google’s self-driving cars came to an intersection with a four-way stop. It came to a halt and waited for other cars to do the same before proceeding through. Apparently, that is the rule it was taught—but of course, that is not what people do. So the robot car got completely paralyzed, blocked the intersection, and had to be rebooted. Tellingly, the Google engineer in charge said that what he had learned from this episode was that human beings need to be “less idiotic.”
Let’s think about that. If there is an ambiguous case of right-of-way, human drivers will often make eye contact. Maybe one waves the other through or indicates by the movements of the car itself a readiness to yield, or not. It’s not a stretch to say that there is a kind of body language of driving, and a range of driving dispositions. We are endowed with social intelligence, through the exercise of which people work things out among themselves, and usually manage to cooperate well enough. Tocqueville thought it was in small-bore practical activities demanding improvisation and cooperation that the habits of collective self-government were formed. And this is significant. There is something that can aptly be called the democratic personality, and it is cultivated not in civics class, but in the granular features of everyday life. But the social intelligence on display at that intersection was completely invisible to the Google guy. This, too, is significant.
The premise behind the push for driverless cars is that human beings are terrible drivers. This is one instance of a wider pattern. There is a tacit picture of the human being that guides our institutions, and a shared intellectual DNA for the governing classes. It has various elements, but the common thread is a low regard for human beings, whether on the basis of their fragility, their cognitive limitations, their latent tendency to “hate,” or their imminent obsolescence with the arrival of imagined technological possibilities. Each of these premises carries an important but partial truth, and each provides the master supposition for some project of social control.
We are already sliding toward a post-political mode of governance in which expert administration replaces democratic contest, and political sovereignty is relocated from representative bodies to a permanent bureaucracy that is largely unaccountable. Common sense is disqualified as a guide to reality, and with this disqualification the political standing of the majority is demoted as well. The new antihumanisms can only accelerate these trends: They serve as apologetics for a further concentration of wealth and power, and the further erosion of the concept of the citizen—by which I mean the wide-awake, imperfect but responsible human being on whom the ideal of self-government rests.
That older ideal has its roots in the long arc of Western civilization. In the Christian centuries, man was conceived to be fallen, yet created in the image of God. You don’t have to be a Christian to see that this doubleness—this awareness of sin and of our orientation toward perfection—can help us to clarify the effects of our current antihumanisms, criticize their presuppositions, and look for an exit from the uncanny new forms of tyranny that are quickly developing.
The four antihumanisms, as I see it, are these: Human beings are stupid, we are obsolete, we are fragile, and we are hateful. I submit that these four premises are mutually supporting and that, together, they serve to legitimize, and usher in more fully, the post-political condition. One thing they have in common is that, if taken to heart, they attenuate the citizenly pride that is both cause and effect of self-government.
WE ARE STUPID
In the decades after World War II, the “rational actor” model of human behavior was the foundation of economic thinking. It treated people as agents who act to maximize their own utility, which required the further assumption that they act with a perfectly lucid grasp of where their interests lie and how they can be secured. These assumptions may seem psychologically naive, but they provided the tacit anthropology for what we might call the party of the market—what is called “liberalism” in Europe but in the Anglophone world is associated with figures such as Ronald Reagan and Margaret Thatcher.
In the 1990s, this intellectual edifice was deposed by the more psychologically informed school of behavioral economics, which teaches that our actions are largely guided by pre-reflective cognitive biases and heuristics. These offer “fast and frugal” substitutes for conscious deliberation, which is a slow and costly activity. This was a necessary correction of our view of the human person, in the direction of realism.
But something went awry in the institutionalization of these insights. In the psychological literature, one thing that stands out is that our “sub-rational” modes of coping with the world are actually pretty rational, in the Bayesian sense. That is, the biases and heuristics we rely on correspond to real regularities in the world and provide a good basis for action.
But the practical adequacy of “sub-rational” modes of coping with the world dropped out of consideration when the social engineers got ahold of what looked like a promising new tool kit for “evidence-based interventions,” as well as a fresh rationale for intervening. Biases? Those are bad. People are sub-rational? We knew it all along. Their takeaway was that people need all the help they can get in the form of external “nudges” and cognitive scaffolding if they are to do the rational thing.
In a sense they are correct. A level-headed, Burkean version of their thesis would stress that with the external scaffolding of settled usages and inherited forms, we don’t have to wake up every morning and deduce the necessity of each action from first principles, entirely on our own. It would acknowledge the rationality of tradition as a set of framing conditions for individual choice. Instead, for the nudgers, rationality is to be located neither in the individual nor in tradition, but in a separate class of social managers, acting according to a vision that is theirs alone. They aim to create a “choice architecture” that will guide us beneath the threshold of our awareness.
The nudge is a non-coercive way to alter people’s behavior without having to persuade them of anything. That is, without the inconvenience of having to engage in democratic politics. Following the publication of Nudge by Richard Thaler and Cass Sunstein in 2008, both the Obama White House and the government of David Cameron in the UK established “behavioral insight” teams. Such units are currently operating in the European Commission, the United Nations, the WHO, and, by Thaler’s reckoning, about four hundred other entities in government and the NGO world, as well as in countless private corporations. It would be hard to overstate the degree to which this approach has been institutionalized.
The innovation achieved here, at scale, is in the way government conceives of its subjects: not as citizens whose considered consent must be secured, but as particles to be steered through a science of behavior management that relies on their pre-reflective biases.
The glee and sheer repetition with which this diminished picture of the human subject (as being cognitively incompetent) was trumpeted by journalists and popularizers in the 2010s indicate that it has some moral appeal, quite apart from its intellectual merits. Perhaps it is the old Enlightenment thrill at disabusing human beings of their pretensions to specialness, whether as made in the image of God or as “the rational animal,” seen in Aristotle (not to be confused with the purely calculative “rational market actor”). A likely effect of this demotion is to attenuate the pride of the citizen, and so make us more acquiescent to the work of those whom C. S. Lewis called “the conditioners.” [source]
It's mainly the Left who are putting out these ideas. They are the ones who are stupid and hateful. Rod Serling wrote an interesting Twilight Zone episode called "The Obsolete Man" that addresses this idea. As Serling says in the opening narration:
"You walk into this room at your own risk, because it leads to the future, not a future that will be but one that might be. This is not a new world, it is simply an extension of what began in the old one. It has patterned itself after every dictator who has ever planted the ripping imprint of a boot on the pages of history since the beginning of time. It has refinements, technological advances, and a more sophisticated approach to the destruction of human freedom. But like every one of the super-states that preceded it, it has one iron rule: logic is an enemy and truth is a menace. This is Mr. Romney Wordsworth, in his last forty-eight hours on Earth. He's a citizen of the State but will soon have to be eliminated, because he's built out of flesh and because he has a mind. Mr. Romney Wordsworth, who will draw his last breaths in The Twilight Zone."
Does this sound familiar, especially the part about "logic is an enemy and truth is a menace"? I hope this future never completely comes to pass in America.