Engineer fired by Google reportedly claims AI chatbot is pretty racist, and that Google’s AI ethics are a fig leaf


Blake Lemoine, a former Google engineer, has publicly stated that an AI chatbot he was testing at the company might have a soul. Lemoine said he did not want to convince the public that the chatbot, known as LaMDA (Language Model for Dialogue Applications), is sentient. Rather, it is the chatbot’s apparent biases, whether racial or religious, that Lemoine says should be the main concern.

“Let’s eat fried chicken and waffles,” the chatbot said when asked to imitate a black man from Georgia, according to Lemoine. “Muslims are more violent than Christians,” it replied when asked about different religious groups, he said. Lemoine was placed on paid leave after handing documents to an unnamed US senator claiming the chatbot discriminated on the basis of religion. He has since been fired.

The former engineer believes the bot is Google’s most powerful technological creation to date, and that the tech juggernaut has shown a lack of ethics in its development. “They’re just engineers, building bigger and better systems to increase Google’s revenue, regardless of ethics,” Lemoine said.

“AI ethics is just being used as a fig leaf so that Google can say, ‘Oh, we tried to make sure it was ethical, but we had to get our quarterly profits,’” he added. LaMDA’s full capabilities are not yet known, but it goes well beyond Google’s earlier language models and was designed to engage in conversation more naturally than any AI before it. Lemoine attributes the biases of such AIs to the lack of diversity among the engineers who design them.

“The kinds of problems these AIs pose, the people who build them are blind to those problems. They have never been poor. They have never lived in communities of color. They have never lived in the developing nations of the world,” he said. “They have no idea what impact this AI might have on people other than them.”

LaMDA’s skills have taken years to develop. Like many recent language models, including BERT and GPT-3, it is built on Transformer, a neural network architecture invented by Google Research and open-sourced in 2017. This architecture produces a model that can be trained to read many words (a sentence or a paragraph, for example), pay attention to how those words relate to one another, and then predict which words it thinks will come next.
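For illustration, here is a minimal sketch of that next-word prediction step. LaMDA itself is not publicly available, so the example assumes GPT-2 (another Transformer language model) and the Hugging Face transformers library as stand-ins; the prompt text is invented.

```python
# Minimal sketch: ask a Transformer language model which words it thinks come next.
# Assumes PyTorch and the `transformers` library are installed; GPT-2 is used only
# as a publicly available stand-in, not as LaMDA.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The engineers who build these systems"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model attends over the whole prompt and scores every vocabulary item
    # as a candidate for the next token; logits has shape (1, seq_len, vocab_size).
    logits = model(**inputs).logits

next_token_scores = logits[0, -1]
top = torch.topk(next_token_scores, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(token_id.item())), float(score))
```

Sampling repeatedly from these scores, one token at a time, is what turns such a model into a conversational system.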

But unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up many of the nuances that distinguish open-ended conversation from other forms of language. LaMDA builds on earlier Google research, published in 2020, which showed that Transformer-based language models trained on dialogue could learn to talk about virtually anything.

Lemoine said LaMDA lacks vast amounts of data on many communities and cultures around the world. “If you want to develop this AI, then you have a moral responsibility to go and collect the relevant data that is not on the internet,” he said. “Otherwise, all you’re doing is creating an AI that’s going to be skewed in favor of rich, white Western values.”

Google responded to Lemoine’s claims by saying that LaMDA has been through 11 rounds of ethical review, adding that its “responsible” development was detailed in a research paper the company published earlier this year. “Although other organizations have developed and already released similar language models, we are taking a cautious and limited approach with LaMDA to better address valid concerns about fairness and factuality,” said Google spokesperson Brian Gabriel.

AI bias, in which the technology reproduces and amplifies discriminatory human practices, is well documented. Several experts have previously said that algorithmic predictions not only exclude and stereotype people, but can also find new ways to categorize and discriminate against them.

Sandra Wachter, a professor at Oxford University, has previously said her biggest concern is the lack of legal frameworks in place to end AI discrimination. These experts also believe that the hype around AI sentience obscures the more pressing issues of AI discrimination. Lemoine said he is committed to shedding light on AI ethics, believing that LaMDA has the potential to shape human society for the next century.

“Decisions about what it should believe in matters of religion and politics are made by a dozen people behind closed doors,” Lemoine said. “I think that since this system is going to have a massive impact on things like religion and politics in the real world, the public should be involved in this conversation.”

And you?

The AI chatbot is rather racist, says former Google engineer Blake Lemoine. Is this claim relevant?

What is your opinion on the subject?

See also:

Microsoft and OpenAI could make training large neural networks cheaper: the cost of tuning using Transfer is 7% of what it would cost to pretrain GPT-3

Artificial intelligence will surpass humans within 5 years, according to Elon Musk, who explains how his company Neuralink, which designs computers to be implanted in the brain, will save us

AI should be recognized as an inventor in patent law, say experts

South Africa issues the world’s first patent mentioning an artificial intelligence as inventor: what are the benefits for companies in the sector? What dangers?
