What happens, not when, but after someone creates an artificial brain? What are the implications for video games? It’s a question you’ll probably see increasingly as stories like this one (“Are we on the brink of creating a computer with a human brain?”) in the Daily Mail parallel our acceleration toward the so-called “singularity.” What happens if Mario’s enemies can independently grasp the concept that the chubby little caricature of an Italian plumber dropping down on their heads is just a cipher being manipulated by a human adversary? If the imps hurling fireballs from glow-lit precipices or snarling at you from out of “monster closets” in Doom develop self-preservation instincts beyond the scope of their original programming? If Niko Bellic suddenly turned, slowly, to look out from your TV screen and spoke these words: “I see you.”
Anecdotal improbabilities aside, that Daily Mail article says we may be closer than we realize. Apparently a team of Swiss scientists is claiming a functional replica of a human brain could be crafted in slightly more than a decade, say–irony aside–2020.
It’s called the Blue Brain Project and its stated goal is “an attempt to reverse engineer the [human] brain, to explore how it functions and to serve as a tool for neuroscientists and medical researchers.” It is not, adds BBP, an attempt to create a brain, or to delve into the moral minutiae of artificial intelligence. Still, the implications for either are obvious and inescapable–fully and truly reverse engineer the human brain, and you’ve arrived at the doorstep of both.
Never mind the implications for philosophical schools and religious factions (I’ll lay my spread on the table and cop to being an atheist and leave it at that), what would the implications for consumer entertainment be?
The assumption seems to be that if a sufficient number of people agree that a “brain in a vat” (or a “brain on a microchip”) were sentient, it would have certain inalienable moral rights. But what would constitute the threshold between “sentient” and “non-sentient”? In a brain composed of a hundred billion neurons–whether organic or synthetic–and who knows how many permutational electrical configurations, how many types of “sentience” can we identify?
It’s not illegal to kill flies or stamp on ants or smash spiders, all of which display various types of what we call “intelligent” behavior. Likewise cows, chickens, pigs, and other types of “livestock.” But we start drawing lines when it comes to certain non-human animals, e.g. various types of domestic pets or species on the verge of extinction. Animal rights proponents draw the line between “member of the moral community” and “property, food, clothing, research, entertainment” at any type of animal. Even animal rights opponents tend to react emotionally when a beloved animal companion dies.
Today’s computer game artificial intelligences, or AIs, aren’t really artificially intelligent. They’re merely symbolic mechanisms derived from rudimentary cause/effect routines. They respond to primitive stimuli without a flicker of sentience. If you’re being chased, run. If you’re being shot at, duck. If someone’s swinging a sword, block. If someone’s throwing rocks, dodge. And so on. They’re no more “intelligent” than a car engine dressed in a mascot suit. They don’t think, they simply do.
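To make the point concrete, the cause/effect routines described above boil down to something like the following sketch–a hypothetical caricature, not any real game’s code, with invented stimulus and response names:

```python
# Hypothetical sketch of a rule-based game "AI": a fixed lookup table from
# stimulus to canned response. There is no state, no learning, no awareness --
# just symbol in, symbol out.

STIMULUS_RESPONSES = {
    "chased": "run",
    "shot_at": "duck",
    "sword_swung": "block",
    "rocks_thrown": "dodge",
}

def react(stimulus: str) -> str:
    """Return the canned response for a stimulus; anything unrecognized
    falls through to doing nothing at all."""
    return STIMULUS_RESPONSES.get(stimulus, "idle")

print(react("chased"))   # run
print(react("shot_at"))  # duck
print(react("tickled"))  # idle -- no rule means no improvisation
```

An enemy driven by a table like this can look lively in play, but it can never respond to anything its designers didn’t anticipate–which is exactly the gap between “doing” and “thinking” the paragraph above describes.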
Fooling humans isn’t hard. It’s why Disneyworld works. Or puppet shows. Characters in books…or actors in movies. And of course: Video games, where you not only observe artificial personages or imaginative (but tellingly anthropomorphic) creatures, but have a chance to interact with them, too. (Well, nominally anyway.)
The devil, as always, is in the details. We won’t “suddenly arrive” at virtual sentience of an order analogous to a human brain’s. It’ll probably happen, instead, by degree. And at some point, well before we have virtual human brains, it’ll probably be possible to simulate non-human ones.
What happens if the creature you’re interacting with in a game is at least as intelligent as a live mouse? Scientists already claim to have simulated “half a virtual mouse brain” on a supercomputer (two years ago, even). No one thinks much about laying out mousetraps, but what if you knew the German soldier in some future World War II shooter had at least the very same survival instincts (and corresponding survival awareness) of a small mammal? Would it affect your inclination to shoot in any way? To give chase? To “terrorize”?
And if the dog (as your companion) in Fable 10 or 11 turned out to have a virtual brain as sophisticated and nuanced as The Real Thing, what then? What if he/she died? What would the designer’s responsibilities be? What would your reaction be?