[Surreal] Reactions to "Possible Minds: 25 Ways of Looking at AI", part 4 (Daniel C. Dennett)
Or, Anthropopathism is dead
Preview image: set photo of the 1931 film “Frankenstein”.
John Searle, one of the greatest philosophers, may want to knock Daniel Dennett over. For the simple reason that the latter, perhaps the most eminent Materialist philosopher currently alive, denies that Free Will and Consciousness, as framed by Western philosophy for two millennia, exist. Never mind those smarty word pairs, the Materialist says: they are all manifestations and emergent properties of the physical world. Nothing metaphysical about them. Nothing abstract about them. It is just that we cannot touch Free Will and Consciousness, that is all. That is the Daniel Dennett we encounter in this chapter.
So whatever Dennett wrote in this piece about Free Will and Consciousness, he never really meant them (a) as we think he means them and (b) as we are used to understanding them. But to sum up this chapter, it is fair to conclude that Mr. Dennett would be a benevolent dictator in a future world: he does not want robots and humanoids to feel pain, no matter how smart they become. And how would he arrange that? By making robots and humanoids smart, but not conscious. Feeling pain is consciousness delivering a message, and by removing consciousness from the equation, no pain is felt. Voilà.
However, the most important point underlying Mr. Dennett's piece is this resounding message: anthropopathism is dead. In plain language, that means: robots and humanoids, however smart they might become, are and will be tools, not people. Therefore, humans should not have too many reasons to worry about the hyped-up "super intelligence". Tools can be very intelligent, and so what? You can name your favorite SUV "David" or "Deborah", but the fact that you ascribe a human name to that fancy car does not make it human, nor does it make it feel happiness, joy, pain, and grief.
So far, Mr. Dennett's chapter stands in stark contrast to those that precede it in the book. Up to this point, the previous chapters have more or less said that we need more communication with AI agents, because the authors worry that (1) "super intelligence" (whatever that means) will have unintended consequences; and (2) technology is never merely technology; it has an impact on society. Going against the grain, Mr. Dennett builds his case on a "no pain, no problem" proposition for intelligent agents. A most classic Materialist position!
While I appreciate that Dennett puts those preceding doomsayers in their place, I have a gripe about his position. Indeed, tools are not people, and tools do not feel pain; but at the same time, tools need NOT be conscious or sentient to exert a wide-ranging and deep impact on human society, culture, ethics, and institutions. Exhibit A: firearms in the US. Are firearms conscious or sentient in any sensible philosophical tradition? Of course not. But look at how they have shaped society, culture, laws, ethics, human lives, and institutions in the US. AI agents, as inventions no less consequential than firearms, would most likely bring more and sharper conflicts and shape human society in profound ways.
Or, to use a much less controversial exhibit, I think Mr. Dennett wants our relationship with AI agents to resemble our relationship with cars. Cars are certainly not sentient, yet they have delivered a broad and deep impact on society: demographics, the environment, pop culture, and daily life. And cars are great tools lying around your house in case you want to use them. And yes, you can always give your Ferrari a name. And no, your Ferrari is still not a person. And no, your Ferrari has no human feelings, just as the AI agent living in your computer has none. Indeed, for all the human feelings we ascribe to our favorite toys and tools, anthropopathism is still dead.
Next up: Reactions to part 5, a chapter by George Dyson.