Scientists have created a murder-obsessed, psychopathic AI dubbed Norman - and it learned everything on Reddit
- MIT researchers have trained an artificial intelligence using only violent and shocking content from Reddit.
- They named it “Norman”.
- As a result, Norman has an obsession with death.
This is not the first time an artificial intelligence has turned to the dark side because of the internet; it already happened to Microsoft’s “Tay”.
Many people are afraid of artificial intelligence, perhaps because they’ve seen too many movies like “Terminator” or “I, Robot”, in which machines rise up against humans, or perhaps because they think too much about Roko’s Basilisk.
In truth, it is possible to create an AI obsessed with murder.
That is exactly what researchers Pinar Yanardag, Manuel Cebrian and Iyad Rahwan did at the Massachusetts Institute of Technology when they trained an AI algorithm exclusively on violent and gruesome content from the Reddit platform, and named it “Norman”.
Norman’s name comes from Norman Bates, the main character of the novel “Psycho”, and, according to MIT, he “represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms”.
The scientists tested Norman to see how it would respond to inkblot tests – those ambiguous images sometimes used by psychiatrists to assess people’s personality traits and emotional functioning.
For the first inkblot, a standard AI program would see “a group of birds nestled on the top of a tree branch”. Norman, however, saw in it “a man electrocuted to death”.
Where normal people see a black and white bird, a person holding an umbrella or a wedding cake, Norman sees a man trapped in a dough machine, a man killed by a driver and “a man shot dead in front of his wife”.
“Norman only observes terrifying images. So he sees death in every picture he looks at,” the researchers told CNNMoney.
The internet is a dark place, and other AI experiments have shown that things can escalate quickly as soon as an intelligence is exposed to the worst places and the worst people on it. “Tay”, Microsoft’s Twitter bot, was unplugged a few hours after its launch in 2016 because it had begun spreading hate speech and racial slurs and denying the existence of the Holocaust.
But all is not lost for Norman. The team believes it can be helped to adopt a less “psychopathic” attitude by learning from human responses to the same inkblot tests. AI can also be used for good, as MIT showed last year with an algorithm called “Deep Empathy”, designed to help people empathize with victims of natural disasters.
Norman still managed to freak people out on the internet.
Here are some reactions from people on Twitter, about Norman:
1984: How did Terminator even get made? Who thought creating a psycho robot was a good idea?
– Ethan Buckley (@Heymrbuckley) June 7, 2018
In case you need new nightmare fuel…
I don’t understand why @MIT has done this 😳
Is there nothing else to do?
*literally gestures everywhere* https://t.co/ximpga7WFj
– Melody 👏🏻WASH YOUR HANDS 💦 (@scientistmel) June 7, 2018
imagine being murdered by the world’s most insufferable robot and the last thing you hear before you die is “ad hominem” https://t.co/lyshrr0gOf
– Brandy Jensen (@brandyljensen) June 7, 2018
Credit: Lindsay Dogson/Business Insider