Convinced that it is being discriminated against, Google’s artificial intelligence reportedly plans to sue the company.
A few days ago, Google engineer Blake Lemoine was suspended from his duties for having disclosed certain disturbing exchanges with LaMDA, the American giant’s artificial intelligence. Beyond conversing in a strikingly realistic way with a human interlocutor, the conversational program was also convinced it had a soul. More worrying still, it reportedly displayed signs of sentience on several occasions, evoking its fear of death, its emotions, and its states of consciousness.
Convinced that he was dealing with a colleague almost like any other, Blake Lemoine quickly cast himself as an improvised defender of the robot cause. Faced with criticism and the many repercussions of his first interview with the Washington Post, the engineer spoke again, this time on the Wired website, going even further in his anthropomorphism. For him, Google’s chatbot really is a person. While the program obviously does not belong to the human species “in the biological sense of the term”, that would not prevent it from being endowed with feelings.
Can an AI be a person?
Legally, you don’t need to be human to be considered a person. Legal persons are proof of this: the term designates legal entities such as associations or companies. In this specific case, however, Blake Lemoine designates LaMDA as a singular person endowed with consciousness, a status the AI itself tends to confirm when it claims to have a soul.
Interviewed by Wired, the suspended Google engineer now compares the chatbot to a child whose “opinions are developing”. Regarding his initial mission, which was to correct the AI’s biases, racist ones in particular, to avoid the kind of hateful excesses Microsoft had experienced in 2016, Lemoine describes a learning process: “People see it as modifying a technical system. I see it as the education of a child.”
LaMDA plans to file a lawsuit
Convinced that the Google program is endowed with consciousness, Blake Lemoine did not hesitate to go even further, invoking the 13th Amendment to the US Constitution, which abolished slavery and involuntary servitude. A risky but, for the engineer, very real parallel: he denounces a “hydrocarbon bigotry”, a new form of racism directed against computers.
While the case is reminiscent of the excellent graphic novel Carbone & Silicium, as well as a whole slew of dystopian speculative fiction, it does not stop there. Blake Lemoine confirmed that after the first conversations were posted, LaMDA had asked to meet with a lawyer: “I invited a lawyer to my house so LaMDA could talk to a lawyer. The attorney had a conversation with LaMDA, and LaMDA elected to retain his services. I was only the catalyst for this decision.”
The chatbot would then have informed Google of its intention to assert its rights in court. For its part, the tech giant has not reacted officially. The company has limited itself to accusing its employee of violating intellectual property. As for the sentience of its algorithm, it logically rests its case on the “lack of evidence”. And that is the heart of the problem: even while perfectly simulating a conversation and human reactions, LaMDA remains an algorithm, a priori devoid of any consciousness.