Okay, I confess. Sorry, Siri, but I find you and Alexa creepy. I worry that native intelligence is being replaced by “artificial intelligence,” which strikes me as a modern-day oxymoron, like “virtual reality.” I’m scared about what’s coming as technology takes over our lives. And I’m nearly convinced robots are going to make humans unnecessary if not extinct. Call me crazy, but that’s what they called Jules Verne too.
It seems I’m in good company. Some pretty big names in science and technology have also expressed concern about the inherent risks AI could pose. They include the late physicist Stephen Hawking, who told the BBC several years ago that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk, engineer and head of Tesla, has said that autonomous machines could unleash “weapons of terror,” comparing the adoption of AI to “summoning the devil.” And Bill Gates has warned that AI is only viable if we make sure humans remain in control of machines.
As one techie posted on TechTimes.com, what happens if Siri decides she wants to take over the world? He didn’t seem to think that was a real threat, but what if AI becomes so advanced that it decides it wants power of its own? Others worry that if artificially intelligent systems misunderstand a mission they’re given, they could cause more damage than good and end up hurting lots of people.
A lot of folks are worried about the implications of AI-controlled weapons. Such weapons might help protect soldiers and civilians in war zones, but they could also trigger a global arms race with disastrous consequences. According to a scientist at the Future of Life Institute, “There is an appreciable probability that the course of human history over the next century will be dramatically affected by how AI is developed. It would be extremely unwise to leave this to chance.”
There are also troubling ways in which AI could infringe upon our personal privacy. Facebook’s recent problems have already demonstrated some of the possibilities, from unwanted intrusion to exposure that leaves us vulnerable. Facebook can already recognize someone by the clothes they wear, the books they read, and the movies they watch. What happens when government agencies have fully developed recognition systems?
In one alarming thesis, Nick Bostrom, an Oxford University philosopher, argues that artificial intelligence may prove to be apocalyptic. He thinks AI “could effortlessly enslave or destroy Homo sapiens if they so wished.”
No longer the stuff of science fiction, AI has already reached milestones that experts once thought were decades away. While some scientists believe human-level AI or superintelligence is still a long way off, others at a 2015 conference predicted it would arrive within the next forty years or so. Given AI’s potential to exceed human intelligence, we really don’t know how it will behave. If humans are no longer the smartest beings on earth, how do we stay in control?
A recent, lengthy article about “Superior Intelligence” in The New Yorker pointed out that giving AI higher intelligence than humans possess risks having the robots turn against us. “Intelligence and power seek their own increase,” Tad Friend posited in his piece. “Once an AI surpasses us, there’s no reason to believe it will feel grateful to us for inventing it, particularly if we haven’t figured out how to imbue it with empathy.”
Here’s another interesting thing to contemplate. In 1988, Friend shares, “roboticist Hans Moravec observed that tasks we find difficult are child’s play for a computer, and vice versa. ‘It is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.’”
And here’s a scary thought: According to The New Yorker, Vladimir Putin told Russian schoolchildren not long ago that “the future belongs to artificial intelligence. Whoever becomes the leader in this sphere will become the ruler of the world.” In light of recent interference with Western elections, one must wonder what he’s got in the way of AI technology (or whether he has already found a way to infiltrate Donald Trump’s brain and program his mouth).
I realize I may be getting ahead of things and sounding unduly alarmist, but it’s all pretty scary stuff. I hope the day never comes when people younger than I am have to admit that, along with Stephen Hawking et al., I was not totally out in left field. Worse still, I hope they never have to dodge incoming missiles directed by maniacal robots angry because we didn’t make them even smarter and more powerful than they already are.
# # #
Elayne writes and worries from Saxtons River, Vt.