Elon Musk calls for urgent regulation of artificial intelligence - here are 4 reasons to worry
Elon Musk does not hesitate to sound the alarm: "I am working on very advanced forms of artificial intelligence, and I think we should all be concerned about its progress," he told a gathering of American governors on Saturday, July 15, 2017.
The founder of Tesla and SpaceX, known for his distrust of "machine learning" (the technique by which a machine learns on its own), has even called artificial intelligence (AI) the "greatest risk we face as a civilization."
According to him, there is an urgent need to regulate artificial intelligence. Bill Gates, the founder of Microsoft, shared this opinion as early as 2015.
Here are some examples of how artificial intelligence can be used in disturbing, even dangerous, ways.
1. Creating a "fake" Obama who says whatever you want
Researchers at the University of Washington have found a way to manipulate video footage of Barack Obama to make him appear to say any sentence, drawing on his old speeches. They published their results in a July 2017 study spotted by the website Co.Design.
Using audio recordings and an AI capable of recreating mouth movements, they created a "fake" Obama whose lip movements match a given speech.
As early as 2016, researchers at Stanford had presented similar software that could map an actor's facial expressions onto public figures such as Vladimir Putin or Donald Trump.
In its article, Co.Design highlights how such videos could be the terrifying future of "fake news," deliberately false stories that spread easily on social networks. In the future, it will take redoubled vigilance to authenticate a video rather than relying on appearances alone.
2. Imitating anyone's voice in one minute
In the near future, images will not be the only thing that can mislead. Lyrebird is an app developed by a Canadian startup founded by PhD students at the University of Montreal, the Daily Mail reported. From just one minute of a person's recorded voice, the AI can make them say whatever you want.
The app's website features several examples of this technology, such as the one below using the voice of Donald Trump.
The researchers' goal is to prove that manipulating an audio recording is extremely simple. In a section of their website called "Ethics," they say they want to alert the public:
"Audio recordings are often used as evidence in many countries. Our technology raises the question of their manipulation for purposes of forgery or identity theft. This can have dangerous consequences, for example in diplomacy or in cases of fraud."
3. Predicting crimes before they are committed
The premise of the film "Minority Report" (2002), in which crimes can be predicted before they happen, is no longer so far from reality. In the United Kingdom, researchers from Cambridge, working with authorities in the city of Durham, have developed a program called "Hart" (Harm Assessment Risk Tool) meant to help police officers assess the risk involved in releasing a suspect.
Tom Cruise in “Minority Report” (2002)
But several studies have shown that artificial intelligences are subject to the same biases as humans, and therefore reproduce the same discrimination. ProPublica, for instance, showed that an algorithm designed to predict the likelihood that defendants will reoffend is biased against Black people.
"The algorithm is more likely to estimate that Black defendants will reoffend. On average, it assigns them this risk twice as often as white defendants," the site points out.
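The kind of disparity ProPublica measured can be illustrated with a simple calculation: for each group, compare the false positive rate, i.e. the share of people who did *not* reoffend but were still flagged as high risk. The sketch below uses invented records for illustration only; it is not ProPublica's data or code.

```python
# Illustrative sketch of measuring bias as a false-positive-rate gap.
# Each record: (group, predicted_high_risk, actually_reoffended).
# These records are made up for illustration, not real data.
records = [
    ("black", True, False), ("black", True, True),
    ("black", True, False), ("black", False, False),
    ("white", True, True), ("white", False, False),
    ("white", True, False), ("white", False, False),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` wrongly flagged high-risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

fpr_black = false_positive_rate(records, "black")
fpr_white = false_positive_rate(records, "white")
print(f"FPR black: {fpr_black:.2f}, white: {fpr_white:.2f}")
```

In this toy data, non-reoffending Black defendants are wrongly flagged twice as often as white ones (2 of 3 versus 1 of 3), the same shape of disparity the article describes.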
4. Developing “autonomous offensive weapons”
In July 2015, more than a thousand artificial intelligence experts, including Elon Musk and Stephen Hawking, signed an open letter calling for a ban on the development of "offensive autonomous weapons" amid a "military artificial intelligence arms race." But no legal measures have been taken anywhere in the world to that effect.
The idea of these autonomous "killer robots," which do not yet officially exist, has drawn heavy criticism. "How can people think that it is normal to create machines that can kill on their own?" asks an indignant Jody Williams, Nobel Peace Prize laureate, in a Motherboard report on killer robots. She is a member of the organization "Stop Killer Robots."
"If they tell you that they will not arm [these robots], I want to know what they've been smoking. There is nothing the United States is developing that it does not want to arm," she continues.