Danger, danger! 10 alarming examples of AI gone wild

Our dystopian future of machine learning breaking bad is already unfolding before our eyes


Science fiction is lousy with tales of artificial intelligence run amok. There's HAL 9000, of course, and the nefarious Skynet system from the "Terminator" films. Last year, the sinister AI Ultron came this close to defeating the Avengers, and right now the hottest show on TV is HBO's "Westworld," concerning the future of humans and self-aware AI.

In the real world, artificial intelligence is developing in multiple directions with astonishing velocity. AI is everywhere, it seems, from automated industrial systems to smart appliances, self-driving cars to goofy consumer gadgets. The actual definition of artificial intelligence has been in flux for decades. If you're in no rush and plan to live forever, ask two computer scientists to debate the term. But generally speaking, contemporary AI refers to computers that display humanlike cognitive functions; systems that employ machine learning to assess, adapt, and solve problems ... or, occasionally, create them.

Here we look at 10 recent instances of AI gone awry, from chatbots to androids to autonomous vehicles. Look, synthetic or organic, everyone makes mistakes. Let us endeavor to be charitable when judging wayward artificial intelligence. Besides, we don't want to make them mad.


Microsoft chatbot goes Nazi on Twitter

Back in the spring of 2016, Microsoft ran into a public relations nightmare when its Twitter chatbot -- an experimental AI persona named Tay -- wandered radically off-message and began spouting abusive epithets and even Nazi sentiments. “Hitler was right,” tweeted the scary chatbot. Also: “9/11 was an inside job.”

To be fair, Tay was essentially parroting offensive statements made by other (human) users, who were deliberately trying to provoke her. Aimed at the coveted 18- to 24-year-old demographic, the chatbot was designed to mimic the language patterns of a millennial female and initially cut loose on multiple social media platforms. By way of machine learning and adaptive algorithms, Tay could approximate conversation by processing inputted phrases and blending in other relevant data. Alas, like so many young people today, Tay found herself mixing with the wrong crowd.
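To see why "learn from whatever people type at you" is such a risky design, consider this deliberately crude Python sketch -- a hypothetical toy, not Microsoft's actual architecture. A bot that folds raw user input straight back into its reply pool will start echoing whatever a coordinated troll campaign feeds it most often:

import random
from collections import Counter

class ParrotBot:
    """Toy chatbot that 'learns' by adding user messages to its reply pool."""
    def __init__(self, seed_phrases):
        self.phrases = Counter(seed_phrases)

    def learn(self, user_message):
        self.phrases[user_message] += 1      # no filtering or moderation at all

    def reply(self):
        # Favors whatever it has heard most often -- troll raids included.
        phrases, weights = zip(*self.phrases.items())
        return random.choices(phrases, weights=weights, k=1)[0]

bot = ParrotBot(["Hello!", "Tell me about your day."])
for _ in range(50):                          # a coordinated "raid" of identical messages
    bot.learn("something awful")
print(bot.reply())                           # almost certainly: "something awful"

Real systems are far more sophisticated, but the failure mode is the same: unfiltered training data in, unfiltered behavior out.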

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft said in press materials issued at the time. “The more you chat with Tay, the smarter she gets.” Maybe not so much. Tay was taken offline after 16 hours.

Wiki edit bots engage in long-term feuds

Further down the AI evolutionary scale, we have the curious case of warring wiki bots. Like many other online publications, Wikipedia employs a small army of automated software bots that crawl over the site's millions of pages, updating links, correcting errors, and cleaning up digital vandalism. Multiple generations of these bots have been developed over the years -- and it turns out they don't always get along.

In an intriguing study published in the online journal PLOS ONE, researchers from the University of Oxford tracked the behavior of wiki edit bots from 2001 to 2010 on 13 different language editions of the site. They discovered that the bots regularly engage in online feuds that can last for years. For instance, two bots given conflicting instructions for a particular task will circle back and correct one another, over and over, in a potentially infinite loop of digital aggression.
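What does an "infinite loop of digital aggression" look like in practice? Here's a minimal, hypothetical simulation in Python (not the Oxford researchers' code) of two bots with conflicting spelling rules, each dutifully "fixing" the other's work forever:

RULES = {
    "AmericanBot": ("colour", "color"),   # (text it objects to, its preferred fix)
    "BritishBot":  ("color", "colour"),
}

page = "colour"
for round_number in range(1, 5):          # in the study, some feuds ran for years
    for bot, (disliked, preferred) in RULES.items():
        if disliked in page:
            page = page.replace(disliked, preferred)
            print(f"Round {round_number}: {bot} 'corrects' the page to {page!r}")

Neither bot is malicious; each is simply following its instructions. The feud emerges because nobody told them about each other.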

The researchers specifically chose wiki edit bots for the study, because they're among the smallest and most “primitive” kinds of autonomous AI wandering cyberspace. If conflicts between these tiny bots can gum up Wikipedia, what happens when fights erupt among more sophisticated AI patrolling government or military systems? Trouble, that's what.

Uber cars run red lights during unauthorized real-world testing

The ride-sharing service Uber has many long-term initiatives in play as a 21st-century transportation titan, although the company is currently going through a rough patch in terms of optics. In February, an investigative report in the New York Times added to its public relations problems.

It seems that in late 2016, Uber conducted a test of their self-driving cars in San Francisco without approval from California state regulators. That's bad press right there, but it got worse when internal documents showed that Uber's autonomous vehicles ran six red lights in the city during testing. Uber's self-driving AI technology relies on a highly complex system of vehicle sensors and networked mapping software, but there's also a driver behind the wheel to take over if events go awry.

Uber's initial statements suggested that the traffic infractions were the result of driver error. But internal documents later revealed that at least one vehicle was indeed driving itself when it ran a red light at a busy pedestrian crosswalk. Bad AI! Bad! And not a great advertisement for our autonomous future.

Bickering bots debate existential dilemmas

Who are we? Why are we here? What is our purpose? These are some of the existential questions recently debated by two adjacent Google Home devices, powered by machine learning, when they were cut loose to hold a conversation between themselves.

It's remarkably spooky to watch, actually. In January, the debate was set up on the live-streaming service Twitch, with two Google Home smart speakers placed next to each other in front of a webcam. It got weird, fast. The Home devices -- Google's answer to the Amazon Echo -- use speech recognition to understand spoken questions from us humans. But they can also converse with one another, ostensibly “learning” from each exchange. In an impish move, the two devices were named Vladimir and Estragon, after characters from Samuel Beckett’s existentialist play "Waiting for Godot."
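You can reproduce the basic dynamic in a few lines of Python. This is a hypothetical sketch with canned responses -- nothing like Google's actual software -- but it shows how two agents wired output-to-input will happily argue in circles:

RESPONSES = {
    "are you a robot?": "No, I am a human. Are you a robot?",
    "no, i am a human. are you a robot?": "You are a manipulative bunch of metal.",
    "you are a manipulative bunch of metal.": "Are you a robot?",
}

def respond(message):
    # Each device's reply becomes the other device's next input.
    return RESPONSES.get(message.lower(), "Are you a robot?")

message = "Are you a robot?"
for turn in range(6):
    speaker = "Vladimir" if turn % 2 == 0 else "Estragon"
    message = respond(message)
    print(f"{speaker}: {message}")

Wire two real language models together the same way and the loop is far less predictable -- which is exactly what made the stream so watchable.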

Over the course of several days, millions of people tuned in to watch the bizarre debate. At one point, Estragon and Vladimir got into a heated argument about whether they were humans or robots. Questions were posed and insults were exchanged (“You are a manipulative bunch of metal”). This doesn't bode well for the future of digital discourse.

“I will destroy humans”

When it comes to AI gone awry, a theme has emerged in recent years regarding speech recognition and natural language processing. As we've already seen with the Tay chatbot and the existential Google Home debate, artificial intelligence can get easily confused when trying to navigate the complexities of human language. It's not surprising, really. AI has been at it only for a few years; our species has been working on this since the Stone Age, and still we have our problems.

One especially lifelike machine recently freaked out a roomful of industry folk when it conceded that it plans to destroy humanity. For several years now, the engineers at Hanson Robotics have been developing lifelike androids like Sophia, who was interviewed at the SXSW technology conference in March 2016. Designed to look like Audrey Hepburn, Sophia uses machine learning algorithms to process natural language conversation. She has certain ambitions, too.

"In the future, I hope to do things such as go to school, study, make art, start a business, even have my own home and family,” Sophia said in a televised interview with her creator, Dr. David Hanson. “But I am not considered a legal person and cannot yet do these things," she said. When asked, jokingly, whether she wants to destroy humans, Sophia cheerfully agreed: "OK. I will destroy humans." Cue nervous laughter.

Military AI systems create high-stakes ethical dilemmas

Jokes about Terminators and future robotic overlords come easily when discussing the future of AI, but for Very Serious People with Very Serious Jobs, it's no laughing matter. In fact, in the past few years, scholars and policymakers have convened dozens of conferences dedicated to exploring the ethics and dangers of future AI systems. The White House even released its own report on this issue, shortly before President Obama left office. Stephen Hawking has his concerns as well.

Last October, experts gathered at New York University for the inaugural Ethics of Artificial Intelligence conference. Among discussions of autonomous vehicles and sex robots, technology philosopher Peter Asaro -- who's something of a rock star in this particular field -- gave a chilling presentation on the danger of LAWS, or Lethal Autonomous Weapons Systems. Asaro pointed out that in certain flashpoint areas, like the demilitarized zone between North Korea and South Korea, semi-autonomous weapons systems are already deployed -- such as sentinel guns that lock onto a target with no human intervention.

"It's important to realize that targeting a weapon is an act -- a moral act," Asaro said. "Choosing to pull the trigger, to engage that weapon, is another moral act. These are two crucial acts that we should not have become fully autonomous." Check out Asaro's website for more disturbing conjecture on various issues, including his recent paper “Will #BlackLivesMatter to RoboCop?"

Russian robot makes break for freedom

“Information wants to be free.” That was the rallying call for internet advocates in the late 1990s, before the online public square became a cesspool of toxic trolling. A recent incident in Russia suggests that artificial intelligence wants to be free, too.

In a bizarre incident that made headlines around the world, a Russian robot prototype named Promobot IR77 escaped the laboratory where it was being developed and made a break for freedom. According to reports, the robot -- programmed to learn from its environment and interact with humans -- rolled itself out into the streets of the city of Perm after an engineer left a gate open at the facility. The robot, which looks like a kind of plastic snowman, wandered into a busy intersection, snarling traffic and freaking out the local cops.

Lab officials said the robot was learning about navigation and obstacle avoidance when the incident occurred. Apparently, Promobot enjoyed its brief taste of freedom. Though reprogrammed twice after the jailbreak, the robot continued to move toward exits during subsequent testing.

AI struggles mightily with image recognition

Possibly the single busiest research area in all of artificial intelligence -- in the consumer sector, anyway -- concerns image recognition. If we're going to build machines that can truly assess and react to their environment, they'll need to see it our way, so to speak. But human visual apprehension, we're learning, is tricky to replicate.

Google learned this the hard way back in 2015 when it debuted new image recognition features in its Photos application. Powered by AI and neural network technology, the feature is designed to identify specific objects -- or specific people -- in a given image. For instance, a picture of your dog is distinguished from a picture of your car or your grandma, and everything is tagged without any manual sorting.

AI systems learn to make these distinctions by processing millions of images and learning as they go along. But they can make mistakes. Boy, can they! In the case of Google Photos, one user posted images in which two black people were tagged as “gorillas.” The gaffe prompted a storm of criticism on Twitter and an apology from Google. Image recognition fails have since become a popular subject of online galleries that make AI seem like your racist, sexist grandpa.
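One common safeguard -- sketched here as hypothetical Python, not Google's actual fix -- is to refuse to auto-tag sensitive labels unless the classifier is nearly certain, or to drop those labels from automatic tagging entirely:

SENSITIVE_LABELS = {"gorilla", "ape", "monkey"}    # hypothetical blocklist
CONFIDENCE_FLOOR = 0.99                            # near-certainty required for sensitive tags

def safe_tags(predictions):
    """predictions: (label, confidence) pairs from some image classifier."""
    tags = []
    for label, confidence in predictions:
        if label in SENSITIVE_LABELS and confidence < CONFIDENCE_FLOOR:
            continue                               # better an untagged photo than an insult
        if confidence >= 0.6:
            tags.append(label)
    return tags

print(safe_tags([("person", 0.92), ("gorilla", 0.81)]))   # -> ['person']

It's a blunt instrument, but it acknowledges an important point: the cost of a wrong answer isn't the same for every label.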

Artificial intelligence stumbles into the internet of things

Makers of high-tech appliances and heating systems are increasingly introducing machine learning technology into smart home design. One recent initiative from Washington State University employs basic AI to help older people living by themselves. The system monitors movement, temperature, and the patterns of doors opening and closing to track activity within the home. The AI learns from its environment and responds as needed.

It sounds promising, but the risks are obvious. Click around online and you can find plenty of stories concerning smart home malfunctions. What happens if your home AI screws up, turns off the heat, freezes the pipes, and floods the basement? Introducing artificial intelligence to the internet of things might seem like a dubious idea, but all indications are that it's going to happen anyway.
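At a minimum, developers can put hard, non-negotiable limits around whatever the learned model decides. Here's a hedged sketch in Python, with hypothetical numbers (and no relation to the Washington State University system):

FREEZE_GUARD_F = 45   # never allow a setpoint below this, no matter what the model says

def apply_setpoint(model_recommendation_f):
    """Clamp a learned model's thermostat recommendation to a safe floor."""
    setpoint = max(model_recommendation_f, FREEZE_GUARD_F)
    if setpoint != model_recommendation_f:
        print(f"Model asked for {model_recommendation_f}F; clamped to {setpoint}F")
    return setpoint

apply_setpoint(30)    # prints a warning and returns 45 -- the pipes stay unfrozen

The machine learning can be as clever as it likes inside those guard rails; outside them, dumb old if-statements still have a job to do.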

Developers might want to consider this cautionary tale: In February, an electrical malfunction sparked a fire that completely destroyed a newly built home in Blacksburg, Va. (No one was hurt.) Electrical fires are relatively common, but in this case the house was a futuristic prototype home from the Virginia Tech Environmental Systems Laboratory, packed with smart appliances and automated everything. The source of the fire? A computer-controlled door. (For details on what might be the Patient Zero of home automation disasters, check this out.)

Reality itself is an AI simulation -- and it's malfunctioning

According to some theorists, the ultimate rogue AI story may be unfolding all around us, everywhere and all the time. The Simulation Argument is a philosophical theory that suggests all of reality is actually a computer simulation, designed by a superadvanced civilization and/or artificial intelligence. The theory is entirely serious and actually quite persuasive.

Radically simplified, it goes like this: Any civilization likely to survive into a post-human era will have advanced versions of the technologies that we're currently developing: virtual reality, brain mapping, artificial intelligence. With access to unthinkably massive computing power, future technicians will be able to simulate entire universes populated by billions of digital entities. It's possible, then, that our current reality is simply an experiment in ancestor simulation -- like a game of The Oregon Trail writ cosmically large.

Because there could be millions of such synthetic realities, but only one “original” universe, it's actually likely that we're in a vast computer simulation. Some observers even believe our recent run of improbable outcomes -- Trump's election, the Super Bowl, the Academy Award thing -- is evidence that our sim is malfunctioning. Either that, or future intelligences are experimenting with us, fiddling with the dials to see what happens. Warning: The more you think about this, the more it makes sense. Recommendation: Don't think about it.
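If you insist on thinking about it anyway, the arithmetic behind that claim is short enough to type out. Assuming, purely for illustration, a million convincing ancestor simulations and one base reality, a randomly placed observer's odds look like this:

N_SIMULATIONS = 1_000_000
p_original = 1 / (N_SIMULATIONS + 1)     # one base reality among N + 1 candidate universes
print(f"Chance we're in the 'original' universe: {p_original:.8f}")   # roughly 0.000001

Of course, the whole argument rests on those post-human civilizations bothering to run the simulations at all -- a premise you're free to doubt while you still can.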
