Cogitatio ex machina

“Thou shalt not make a machine in the likeness of a human mind.”

This quote comes from the work of Frank Herbert, a literary genius and quite possibly the greatest science fiction writer of all time. In the Dune universe Herbert created, all thinking machines were outlawed in a crusade called the Butlerian Jihad. This jihad against the machines was exactly that: a spiritual struggle against a spiritual aggressor. The creation of synthetic consciousness was a heresy against humanity’s “selfdom”. Part of Herbert’s genius was his recognition of the ideological ramifications of true artificial intelligence.

The creation of true artificial intelligence would irreversibly change our world. Humans place a great deal of value on our unique status among the organisms on this planet, and throughout most of our history and up to the present day we have viewed ourselves not only as the pinnacle of Nature but also as divinity’s chosen design. I think the most prevalent reason humans have thought, and continue to think, this way is our unrivaled intelligence, but with that intelligence comes a consequence: unrivaled curiosity. Human curiosity fuels progress in fields such as computer science and robotics. But what if we succeeded and true AI were created and replicated? What would change in a world where humanity no longer stood alone on its pedestal of mental superiority?

Thinking machines (to use Herbert’s term) would possess personal feelings, thoughts, experiences, and opinions. Even if they were only skillfully mimicking these elements of intelligence, their interactions with humanity would be flawlessly convincing. Human-robot relations would be rife with complications. Being artificial creations, would thinking machines be treated as property? Would they accept a subordinate role in society? Would an equal intelligence not eventually question its role, purpose, and meaning? And if we decided to treat thinking machines as equals, would they be welcome in our politics, government, social circles, or religious groups? What sort of thoughts would they have on our ideologies?

It is here that the questions begin to frighten me. Even if thinking machines were initially treated as subordinates and purposefully left out of our ideological arenas, their intelligence would naturally lead to discontent and, quite possibly, anger. Thinking machines systematically excluded from our intellectual domain may very well choose to create their own. If so, would this robot society coexist peacefully with humanity, or would its very existence lead to feelings of insecurity and hostility on both sides? War is a basic predisposition of human beings. Would that tendency be passed on to an intelligence that we create?

Perhaps my propensity for cynicism has led me to imagine the most sinister consequences true AI could have on humanity’s ideology, culture, and future survival. Frank Herbert theorized that due to our inherent nature we would not allow thinking machines to coexist with us. If humanity is truly incapable of sharing this planet with thinking machines, then our curiosity may very well be our end.