Yann LeCun, head of Facebook’s artificial intelligence (AI) division, believes we are “really, really far from building a really smart machine,” as he told The Verge in an interview on October 26, 2017.
The French AI expert works on automated translation and captioning systems at Facebook, which he joined in 2013.
According to him, today’s advances in artificial intelligence — from autonomous cars to the AlphaGo program capable of beating the best Go players — work because they have a “very limited” application, in a very specific context.
“They are trained for a very specific purpose,” says Yann LeCun, before continuing:
“When people think that AlphaGo is a big step towards ‘generalized intelligence’, they are mistaken. That’s not the case. Just because a machine can beat humans at Go doesn’t mean there are going to be intelligent robots roaming the streets. It doesn’t even help solve that problem; it’s a completely separate issue.”
He notes, however, that algorithms are compared to the robot from “Terminator” less often than before, especially in the media, and that people are beginning to understand how far we remain from machines approaching human-level intelligence.
“As a result, some people ask themselves questions that are far too premature. That doesn’t mean we shouldn’t think about it, but there is no danger in the short or medium term. There are risks in AI, but not ‘Terminator’-type scenarios,” he continues.
Several tech leaders, such as Elon Musk and Bill Gates, have by contrast repeatedly warned the public about the dangers of artificial intelligence. Nobel Peace Prize laureate Jody Williams has also spoken out against “killer robots” — automated weapons that would make decisions on their own — through the Campaign to Stop Killer Robots.