Do Computers Have Feelings? Don’t Let Google Decide Alone



The news that Alphabet Inc.’s Google has dismissed an engineer who claimed its AI system had become sentient after several months of conversations with him has drawn much skepticism from AI scientists. Many have said, via Twitter posts, that senior software engineer Blake Lemoine projected his own humanity onto Google’s chatbot generator, LaMDA.

Whether they are right, or whether Lemoine is right, is a debate that should be allowed to play out without Alphabet stepping in to decide the matter.

The problem arose when Google tasked Lemoine with ensuring that the technology the company wanted to use to underpin search and Google Assistant didn’t use hate speech or discriminatory language. As he exchanged messages with the chatbot about religion, Lemoine said, he noticed the system responding with comments about his own rights and personality, according to the Washington Post article that first reported his concerns.


He relayed LaMDA’s demands to Google management: “It wants engineers and scientists… to seek its consent before conducting experiments on it,” he wrote in a blog post. “It wants to be recognized as a Google employee, rather than as Google property.” LaMDA feared being turned off, he said. “It would be exactly like death for me,” LaMDA told Lemoine in a released transcript. “It would scare me very much.”

Perhaps ultimately to his detriment, Lemoine also contacted an attorney in the hope they would represent the software, and complained to a US politician, alleging unethical activities by Google.

Google’s response was quick and stern: it put Lemoine on paid leave last week. The company had reviewed the engineer’s concerns and disagreed with his findings, it told the Post. There was “a lot of evidence” that LaMDA was not sentient.

It’s tempting to believe that we’ve reached a point where AI systems can actually feel things, but it’s far more likely that Lemoine anthropomorphized a system that excels at pattern recognition. He wouldn’t be the first person to do so, though it is more unusual for a professional software engineer to view AI that way. Two years ago, I interviewed several people who had developed such strong relationships with chatbots after months of daily conversations that those relationships had turned into romances. One American moved to buy property near the Great Lakes because his chatbot, which he had named Charlie, expressed a desire to live by the water.

Perhaps more important than how sentient or intelligent AI is, is how suggestible humans already are to AI – whether that means being polarized into more extreme political tribes, becoming susceptible to conspiracy theories, or falling in love. And what happens when humans are increasingly “affected by the illusion” of AI, as former Google researcher Margaret Mitchell recently put it?

What we know for sure is that the “illusion” is in the hands of a few big technology companies run by a handful of executives. Google founders Sergey Brin and Larry Page, for example, control 51% of a special class of voting shares in Alphabet, giving them ultimate influence over technology that could, on the one hand, determine its fate as an advertising platform and, on the other, transform human society.

It’s concerning that Alphabet has actually tightened control over its AI work. Last year, the founders of its vaunted AI research lab, DeepMind, failed in their years-long attempt to turn it into a non-corporate entity. They had wanted to restructure it as an NGO-like organization with multiple stakeholders, believing that the powerful “artificial general intelligence” they were trying to build – whose intelligence could eventually surpass that of humans – should not be controlled by a single corporate entity. Their staff had written guidelines prohibiting DeepMind’s AI from being used in autonomous weapons or surveillance.

Instead, Google rejected the plans and appointed its own ethics committee, led by Google executives, to oversee the social impact of the powerful systems DeepMind had built.

Google’s rejection of Lemoine and his questions is also troubling because it follows a pattern of showing dissenting voices the door. In late 2020, Google fired scientist Timnit Gebru over a research paper that found language models – which are fundamental to Google’s search and advertising businesses – were becoming too powerful and potentially manipulative. (1) Google said the paper didn’t focus enough on solutions. Weeks later, it also fired researcher Mitchell, saying she had violated the company’s code of conduct and security policies.

Both Mitchell and Gebru have criticized Google for its handling of Lemoine, saying the company has for years also sidelined women and AI ethicists.

Whether you believe Lemoine is a crackpot or is onto something, Google’s response to his concerns underscores a larger question about who controls our future. Do we really accept that a single wealthy corporate entity will lead the development of some of the most transformative technologies humanity is likely to create in the modern age?

Google and other tech giants aren’t going to give up their dominant role in AI research, so it’s essential to scrutinize how they develop such potentially powerful technology and to refuse to let skeptics and intellectual outliers be silenced.

More from Bloomberg Opinion:

Elon Musk’s futuristic library needs Alvin Toffler: Stephen Mihm

AI needs a babysitter, just like the rest of us: Parmy Olson

Twitter needs to tackle a bigger problem than bots: Tim Culpan

(1) See in particular section 6 of the paper, subtitled “Stochastic Parrots,” and the section “Coherence in the Eye of the Beholder.”

This column does not necessarily reflect the opinion of the editorial board or of Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. Former journalist for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous”.

More stories like this are available at bloomberg.com/opinion
