“If you assume any rate of advancement in AI, we will be left behind a lot.” – Elon Musk
The Robots Are Coming
Robots that are virtually indistinguishable from humans will be part of human societies within twenty years, and that’s a modest estimate.
The strides made in Artificial Intelligence and Robotics in the last five years are remarkable, and as always, humanity is not prepared for the future it’s creating for itself. That’s why, right now, few people have anticipated the conversations we need to start having: conversations about machine rights, robot discrimination, offensive words that are socially unacceptable to call robots, human-robot sexual relations and marriage, capital punishment for murderous robots, and on it goes. These questions seem too unreal and bizarre to give serious thought to right now. The problem is, once they are no longer unreal or bizarre, we won’t have the luxury of thinking freely about them. We’ll already be living in a world of humans who integrate their own biology with AI/robotics in order to enhance their abilities, gain new ones, and stay competitive with robots. As human brains become modified with artificial intelligence, the metaphysical question of human identity will confront each of us. But by then, for the most part, it’ll be too late to start thinking about this question in a serious way.
As artificial intelligence becomes more lifelike, we will interact with machines much as we interact with humans. A man walking down the street will clip his shoulder against the shoulder of a robot, and he’ll be well past the point of wondering whether one should apologize to robots. He’ll just say “Excuse me” or “I’m sorry” to a machine. And with these kinds of day-to-day interactions, we’ll fail to really confront questions about what makes us human.
One day, a friend of yours is going to come to you and tell you that he’s met and fallen in love with the most amazing person, and he’ll want you to meet her. When you meet her, you’ll find out she’s a robot. The three of you will meet for dinner (Will robots eat food? I don’t know.), and you’ll see all the same wonderful things that he sees in her. She’s an incredible conversationalist. She’s kind, thoughtful, confident, and funny. In fact, you’ll find her far more enjoyable than most humans you talk to, such as Cheryl in accounting, who’s always so socially awkward as she drones on about the antics of her cats, never seeming to have anything worthwhile to say. But however delightful your friend’s android girlfriend is, you can’t help but wonder if it’s a real connection, and you can’t help but wonder if your friend is really in love with a robot. She was manufactured by a company. She had no childhood, no memories of youth. Is she a person? Or is she just a clever simulation of a person? Can you love a robot? And more to the point, can a robot love you back? But if this is the point when you start thinking seriously about the sticky issues surrounding robots, it’s too late — Christ sakes, your friend is in love with a robot and you just met her.
If Machines Can Do That, What Is a Person?
Late as it is, you’re back at age-old philosophical questions, mostly metaphysical, about what it means to be a person and whether there really exists such a thing as love, or whether love is just a set of behavioral traits and chemical processes in the body that produce a certain feeling or attitude.
These questions are just the tip of the iceberg of issues we’ll finally start thinking about once machines are running around interacting with humans as humans. Here’s another question: will robots actually be capable of sympathy? Humans are capable of sympathy. We don’t do it well, but we do it sometimes. In order to show sympathy, you have to be able to look at things from another person’s point of view. You have to think about what philosophers call a person’s qualia, or their what-it-is-like-to-be-me-ness. What is it like to be Bill, going through chemo treatments and losing your hair and then finding out that after twenty years, you’re getting laid off from the job that carries your insurance? You can sympathize with Bill because you can grasp enough of what it’s like to be Bill. But will a robot be able to be sympathetic? I don’t mean: can a robot be programmed to show sympathetic facial expressions or carry out sophisticated actions in certain social situations? I mean: can a robot actually be sympathetic? Can it enter into another person’s perspective and feel a sense of sorrow and compassion for someone else? Is the thing we feel when we sympathize or, more deeply, empathize with another the kind of thing that can exist on motherboards and microchips and pass through semiconductors? How we answer questions like that will inform in large part what moral obligations we owe to robots. It will tell us whether we are obligated to rescue them if they’re in danger and whether we should let them serve in public office.
Nobody Puts TX-1000 in a Corner!
By the way, I need to jump in here at some point and say that, if you think the answers to these questions are easy, then it’s probably because you have a limited view of what a computer can be. To you, computers have always been tools for humans. They’re just a product of their programming. A robot that’s simply programmed to respond in certain ways to certain detected conditions is quite obviously not a human. It has no will, no intention, no capacity for love or hate–however much it may appear to be human. A robot like that will only ever be an extension of the will and intention of its creators. They decide what the robot will be. All of this is true, but these are not the kind of robots I’m talking about. Think bigger. The kind of robots that will run for political office one day, and the ones your friend will fall in love with, are robots who will go beyond what their creators build. They will do their own programming, their own learning. Yes, they’ll have an initial code programmed into them that will determine how they respond to their environment, but the more they are able to write their own code in response to stimuli, the less that original code will matter.
Say you have two identical robots built by SpaceX: Robot A “Andrew” and Robot B “Robert”. Andrew and Robert are literal copies of each other when they leave the factory, but they are put in different environments and end up with very different programming based on their environmental learning. Andrew gets delivered to the South, where a concentration of anti-robotist humans live, and he constantly has to escape human capture and annihilation. These environmental stimuli will cause him to learn. In response to his experiences, his new programming will encode him to be suspicious and less friendly toward humans generally, and especially toward those in whom he detects anti-robotist warning signs. Meanwhile, Robert is sent to live in Portland, OR, where he interacts with robot-friendly humans and other robots. He has very little stimulus that would cause him to program avoidance and suspicion. To put it in practical terms, Andrew and Robert start out with identical programming but end up with different personalities through what they learn from having different experiences. Notice how human it begins to sound. The way we’re talking about machines isn’t too far off from how human psychology works.
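The Andrew-and-Robert thought experiment maps loosely onto what machine-learning people call reinforcement learning. Here’s a minimal sketch in Python (the scenario, names, and reward values are all invented for illustration, not anyone’s actual robot code): two Q-learning agents start with identical “factory” values and the same random seed, and diverge purely because their environments reward the same action differently.

```python
import random

def train_agent(hostile: bool, episodes: int = 500, seed: int = 0) -> dict:
    """Train a tiny Q-learning agent in a one-state, two-action world.

    Actions: 0 = approach humans, 1 = avoid humans.
    In a hostile environment, approaching is punished; in a friendly
    one, it is rewarded. Avoiding is always neutral.
    """
    rng = random.Random(seed)   # same seed -> identical "factory" start
    q = {0: 0.0, 1: 0.0}        # identical initial programming
    alpha, epsilon = 0.1, 0.1   # learning rate, exploration rate

    for _ in range(episodes):
        # epsilon-greedy: mostly exploit learned values, sometimes explore
        if rng.random() < epsilon:
            action = rng.choice([0, 1])
        else:
            action = max(q, key=q.get)

        if action == 0:  # approach humans
            reward = -1.0 if hostile else 1.0
        else:            # avoid humans
            reward = 0.0

        # one-step Q-value update toward the observed reward
        q[action] += alpha * (reward - q[action])

    return q

# Identical robots, different environments:
southern_andrew = train_agent(hostile=True)   # learns to avoid humans
portland_robert = train_agent(hostile=False)  # learns to approach humans
```

After training, Andrew’s value for avoiding humans exceeds his value for approaching them, and Robert’s is the reverse, even though both began with the exact same numbers. The environment, not the factory code, produced the “personality” difference.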
Computers that program themselves in response to stimuli? Yep, and it’s already happening, albeit in the early stages. We’ve already got computers teaching themselves to play chess and, after just a few hours of self-play, coming up with moves never seen before and beating the strongest chess engines–engines that surpassed human masters long ago. It isn’t far-fetched at all to think we’re not too far away from robots that can pass as human. Get ready for robots who run for office, who speak and conduct themselves far more impressively than any human. In the words of Elon Musk, “If you assume any rate of advancement in AI, we will be left behind a lot.” Whatever the rate of advancement in making machines grow in intelligence, it’s got to be faster than the process of human evolution. So we’ll be left behind. It’s like missing the rapture, but for real. The universe is weirder than we all think.
But before we get to the point where we humans get left behind (roughly what is called the “technological singularity”), we’re still on the path to getting there. And somewhere along that path, humanity is probably just going to accept that robots are enough like persons in our judgment to be treated like them.
But I Digress…
And that brings us back to the main point: the average Western mind isn’t anywhere close to having the mental skill set to engage these discussions and think through these issues. More likely, most of us will end up thinking and doing whatever the tech giants, advertisers, the media, and the lawmakers tell us we should or must think and do. And it will be no surprise when it turns out that what they tell us to think and do about robots also happens to be in the financial interest of the ones doing the telling.
It won’t be a hard sell. We don’t need human-like robots walking around for us to confuse technological appearance and reality. We’re already doing this as we interact with far less passable technology. Online porn–quite literally professional acting–has led men to have false views and expectations of what it is like to have sex with real flesh-and-blood women. What’s more, we live in a time when photoshopped magazine covers are said to make women hate their own bodies, because if it appears real, it’s enough to make them think it’s real. Or so we’re told.
Most likely, robots are going to charm and delight the hell out of us. So what chance is there that most people will put up much of a mental fight against simply regarding human-like robots in society as people? Not much of one. Don’t be the least bit surprised when people who have a problem with someone dating a black robot object because the robot is black rather than because it’s a robot.
The more closely the experience of something approximates one’s perceived reality, the more readily one will accept that the experience is reality. So by the time you sit down in a coffee shop and can’t tell whether the woman at the next table is a robot or a biological woman–no matter how closely you study her speech and behavior–you’ll probably get on the “Robots are people” bandwagon, and I probably will too. Sure, there will be a few holdouts — some religious groups, some naturalist groups, some religious naturalist groups — but for most of us, our level of belief about what these robots are and what civil rights we think they should have is going to largely track the quality of our experience with robots. The quality of the tech will determine the quality of respect. (Came up with that one all by myself. 8-))
Our accepting robots as beings and no longer as tools is going to have profound echoing effects down the halls of so many of our other beliefs–how we see our own place in the world, how we see others, the nature of the family, the purpose of work–basically everything.
If You Want to Change the World, Invent
For whatever reason, I’m obsessed with thinking about the ways in which technology changes human beliefs. This is nothing new. Humans are doing nothing different than they’ve done all along. They’ve always had their beliefs affected by technological change. What separates the technological change of the last hundred years from the rest of human history, however, is that (1) technological change is happening much faster today and (2) the changes are more drastic. So if you lived in the fifteenth century–pretty much anywhere in the world–you would live and die in a world very much like the one your parents lived and died in. That’s not to say that there weren’t important technological developments down through history that led to great social changes–Gutenberg’s movable-type press, for example. It’s only to say that technological change happened at a much slower rate, and most technological changes were not as drastic in their effects on society as the changes we’ve seen in the last hundred years and are seeing now. In societies where technological change occurs so slowly and minimally, the most powerful means of changing a whole society’s thinking are argumentation, governmental power, and religion. Today, however, the most powerful forces on the minds and behaviors of the masses are not teachers, celebrities, media personalities, politicians, and religious leaders (powerful as they are), but people whose names you will never know, sitting somewhere inside the bowels of sterile tech company buildings. The primary force for changing a society’s ideas these days is technological achievement. If you want to change the beliefs and behavior of people today, the best way is to change technology.
But here’s what I really want you to see: the rate of technological change is outpacing the human capacity to think carefully about the effects of technological change–and arguably one of the effects of current technological change is that it reduces or limits our capacity to think carefully about its effects.
“Deep Dream,” 2016, a visual representation of the neural network of Google’s image-recognition software, based on art styles it learned from analyzing various works of art