Meta’s AI leaders want you to know fears over AI existential risk are “ridiculous”

MIT Technology Review

It’s a really weird time in AI. In just six months, the public discourse around the technology has gone from “Chatbots generate funny sea shanties” to “AI systems could cause human extinction.” Who else is feeling whiplash?

My colleague Will Douglas Heaven asked AI experts why exactly people are talking about existential risk, and why now. Meredith Whittaker, president of the Signal Foundation (which is behind the private messaging app Signal) and a former Google researcher, sums it up nicely: “Ghost stories are contagious. It’s really exciting and stimulating to be afraid.”

We’ve been here before, of course: AI doom follows AI hype. But this time feels different. The Overton window has shifted in discussions around AI risks and policy. What was once an extreme view is now a mainstream talking point, grabbing not only headlines but the attention of world leaders.

Read more from Will here.

Whittaker is not the only one who thinks this. While influential people in Big Tech companies such as Google and Microsoft, and AI startups like OpenAI, have gone all in on warning people about extreme AI risks and closing up their AI models from public scrutiny, Meta is going the other way.

Last week, on one of the hottest days of the year so far, I went to Meta’s Paris HQ to hear about the company’s recent AI work. As we sipped champagne on a rooftop with views of the Eiffel Tower, Meta’s chief AI scientist, Yann LeCun, a Turing Award winner, told us about his hobbies, which include building electronic wind instruments. But he was really there to talk about why he thinks the idea that a superintelligent AI system will take over the world is “preposterously ridiculous.”

People are worried about AI systems that “are going to be able to recruit all the resources in the world to transform the universe into paper clips,” LeCun said. “That’s just insane.” (He was referring to the “paper clip maximizer problem,” a thought experiment in which an AI asked to make as many paper clips as possible does so in ways that ultimately harm humans, while still fulfilling its main objective.)

He is in stark opposition to Geoffrey Hinton and Yoshua Bengio, two pioneering AI researchers (and the two other “godfathers of AI”), who shared the Turing Award with LeCun. Both have recently become outspoken about existential AI risk.

Joelle Pineau, Meta’s vice president of AI research, agrees with LeCun. She calls the conversation “unhinged.” The extreme focus on future risks does not leave much bandwidth to talk about current AI harms, she says.
