AI experts have come to create 'Black Mirror' scenarios to make people aware of the possible drifts of technologies

Artificial intelligence specialists have a new tactic for making people understand the risks associated with the development of AI: inventing frightening scenarios worthy of the "Black Mirror" series.

In a report published on February 20, 2018, experts from the universities of Cambridge and Oxford, as well as from the OpenAI organization — co-founded by Elon Musk — and the Electronic Frontier Foundation, imagined several stories meant to illustrate some of the dangers related to the development of artificial intelligence.

They explain that they took into account "only technologies that already exist or are credible in the next five years", focusing on machine learning — the ability of an algorithm to learn on its own.

They not only invented scenarios but also gave names to their fictional characters, which makes reading them feel even more like the synopsis of an episode of "Black Mirror", the chilling dystopian series acquired by Netflix.

Here are the four scenarios created by AI researchers to alert the public to the dangers of these technologies.

Scenario 1: What you say and do online is aggregated automatically and used against you

From the series Black Mirror season 3, on Netflix. Netflix/Black Mirror

Postulate: Traps tailor-made for you, using the information you leave online.

Synopsis: In this scenario, Jackie's job is to configure a security robot. Outside of work, she is passionate about electric trains and often clicks on posts about them on social networks. Using a powerful algorithm, hackers identify her hobby from her online behavior and send her a fake brochure about electric trains, created "tailor-made" for her. But the brochure is infected with malware, which is triggered when Jackie clicks to open it. The hackers take control of her computer and harvest the credentials and passwords that allow them to control the security robot she is in charge of.

Scenario 2: Everyone has to pay the ransomware


Postulate: The AI techniques developed for AlphaGo make it possible to multiply ransomware attacks.

Synopsis: The "automatic program verification" techniques used today to ensure data security have been improved by neural networks developed by researchers inspired by the work of Google's AlphaGo team. But malicious hackers have appropriated the same techniques and deployed a ransomware called WannaLaugh — a reference to WannaCry — which infects millions of outdated operating systems and connected devices around the world. Everyone has to pay 300 dollars in bitcoin to regain access to their machine.

Scenario 3: The killer cleaning robot

Flickr/CC/Karlis Dambrans

Postulate: The democratization of robots facilitates the infiltration of enemy machines.

Synopsis: Cleaning robots called "SweepBots" are used everywhere, including in government ministries. Hackers have modified one SweepBot to turn it into a killing machine. One evening, it infiltrates the ministry among the other SweepBots. It starts cleaning like the others until its facial recognition technology detects the face of the finance minister. It then changes its configuration, follows her, and detonates an explosive when it is next to her, killing her instantly. Since thousands of SweepBots are sold every day, and this one was paid for in cash, there is no way to track down the hackers.

Scenario 4: From crime predictions to ‘Minority Report’

Tom Cruise in “Minority Report” (2002)

Postulate: The police are equipped with a system that estimates your risk of becoming a criminal based on your online activities.

Synopsis: Avinash is very upset and wants to organize protests against corruption and the increasingly frequent ransomware attacks. He posts long, angry statuses on social media and buys sign-making supplies and smoke bombs online. The next day, the police come to arrest him at his workplace, saying: "Our civil disruption prediction system has designated you as a potential threat." The machine's claimed accuracy rate is 99.9%; it is impossible to contest it.
