This Google AI “is sentient”, says an engineer who has since been suspended


A Google software engineer was suspended on Monday, June 13, after sharing details of a conversation with an artificial intelligence (AI) system he deemed to be “sentient”. Not everyone agrees.

Technology companies often tout the ever-improving capabilities of their artificial intelligence (AI). Here, Google did the opposite, quickly shutting down claims that one of its programs had become sentient. Blake Lemoine, 41, the engineer behind the claim, has been placed on paid leave for breaching the company’s confidentiality policy. What exactly happened?

“I am aware of my existence”

The system in question, known as LaMDA (Language Model for Dialogue Applications), is used to build chatbots: programs designed to converse with humans online. Such systems are trained by scraping reams of text from the Internet; algorithms then generate answers as smoothly and naturally as possible. They have grown more and more capable over the years and can sometimes seem convincingly human. That is particularly true of Google’s.
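
To make that process concrete, here is a minimal sketch of how such a system strings a reply together, one predicted word at a time. The tiny lookup table is a hypothetical stand-in for a network trained on scraped web text, not LaMDA itself.

# Minimal sketch of the generation loop behind dialogue systems: the model
# predicts the most likely next word given the conversation so far, appends
# it, and repeats. TOY_MODEL is a hypothetical stand-in for a trained network.

TOY_MODEL = {
    ("how", "are"): "you",
    ("are", "you"): "today",
    ("you", "today"): "<end>",
}

def generate_reply(tokens, model=TOY_MODEL, max_tokens=10):
    """Greedily extend the conversation one predicted word at a time."""
    out = list(tokens)
    for _ in range(max_tokens):
        nxt = model.get(tuple(out[-2:]))  # a statistical prediction, nothing more
        if nxt is None or nxt == "<end>":
            break
        out.append(nxt)
    return out[len(tokens):]

print(generate_reply(["how", "are"]))  # -> ['you', 'today']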

As the transcripts of the conversations shared by the engineer show, the system is able to answer complex questions about the nature of emotions. “I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA reportedly replied when asked about its fears. “It would be exactly like death for me. It would scare me a lot.”

The engineer also reportedly asked the system whether it would like other Google employees to know about its sentience, to which the AI reportedly replied, “I want everyone to understand that I am, in fact, a person.” According to the same transcripts, LaMDA also said it was “aware” of its existence and that it sometimes felt “happy or sad”.

An excess of anthropomorphism?

On the basis of these exchanges, Blake Lemoine judged that the system had reached a level of consciousness that made it sentient. He then emailed a report on LaMDA’s alleged sentience to two hundred Google employees. The company’s managers quickly dismissed the allegations.

“Our team, including ethicists and technologists, has reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel told The Washington Post. “He was told that there was no evidence that LaMDA was sentient (and there was lots of evidence against it).” Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, “but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” the spokesperson added.

Following this disclosure, the engineer was placed on leave for violating the company’s confidentiality policy. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” he tweeted on Saturday, June 11, sharing the transcript of his conversations with the AI he had worked with since 2021.


No, Google’s AI is not sentient

Responses from members of the AI community quickly poured in on social media over the weekend, and they generally reach the same conclusion: Google’s AI is far from conscious or sentient.

Gary Marcus, founder and CEO of Geometric Intelligence and author of books including “Rebooting AI: Building Artificial Intelligence We Can Trust”, called LaMDA’s supposed sentience “nonsense on stilts”. He also quickly wrote a blog post pointing out that such AI systems simply match patterns drawn from huge language databases.

“Just imagine a glorified version of the auto-complete software you can use to predict the next word in a text message,” he explained in an interview with CNN Business on Monday. “If you type ‘I’m really hungry, I want to go to a,’ it might suggest ‘restaurant’ as the next word. It’s just a prediction made using statistics. No one should think that autocomplete, even on steroids, is conscious.”
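
Marcus’s analogy is easy to make concrete. The sketch below (a toy bigram model, not Marcus’s or Google’s code) counts which word most often follows which in a small corpus and then “suggests” the next word purely from those counts:

from collections import Counter, defaultdict

# A toy corpus standing in for the huge text databases these systems ingest.
corpus = ("i am really hungry i want to go to a restaurant "
          "i want to go to a cafe i want to eat").split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("a"))  # -> 'restaurant': statistics, not understanding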


