Tech

Google Sidelines Engineer Who Claims Its AI Is Sentient


SAN FRANCISCO – Google recently placed an engineer on paid leave after dismissing his claim that its artificial intelligence is sentient, surfacing yet another controversy over the company’s most advanced technology.

Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, said in an interview that he was put on leave on Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google said its systems imitate conversational exchanges and can riff on different topics, but do not have consciousness. “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine’s suspension.

For months, Mr. Lemoine had argued with Google managers, executives, and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most AI experts believe the industry is a very long way from computing sentience.

Some AI researchers have long made optimistic claims that these technologies will soon reach sentience, but many others are extremely quick to dismiss such claims. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.

While chasing the AI vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with two of his colleagues’ published work. And the dismissals of two AI ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google’s language models, have continued to cast a shadow over the group.

Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an AI researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against.

“They have repeatedly questioned my sanity,” Mr. Lemoine said. “They said, ‘Have you been checked out by a psychiatrist recently?’” In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.

Yann LeCun, the head of AI research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems are not powerful enough to attain true intelligence.

Google’s technology is what scientists call a neural network: a mathematical system that learns skills by analyzing large amounts of data. By identifying patterns in thousands of cat photos, for example, it can learn to recognize a cat.
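To make that idea concrete, here is a minimal sketch of what “learning from data” looks like in practice, written in PyTorch with random tensors standing in for real cat photos. It illustrates the general technique only, not Google’s actual system:

```python
# A tiny neural network that learns to separate "cat" from "not cat"
# images. Random noise stands in for real photos; the labels are made up.
import torch
import torch.nn as nn

# Fake dataset: 64 "images" of 32x32 grayscale pixels, with 0/1 labels.
images = torch.randn(64, 32 * 32)
labels = torch.randint(0, 2, (64,))

# A small feed-forward network: pixel values in, two class scores out.
model = nn.Sequential(
    nn.Linear(32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training loop: repeatedly adjust the weights so the network's
# predictions move closer to the labels.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # compute gradients of the loss
    optimizer.step()  # nudge the weights to reduce the loss
```

With real photos and far more of them, the same loop is what lets such a system pick out the patterns that distinguish a cat.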

Over the past several years, Google and other leading companies have designed neural networks that learn from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, generate tweets and even write blog posts.
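For a sense of what interacting with such a model looks like, here is a short example using a publicly available model (GPT-2, via the Hugging Face transformers library) rather than LaMDA, which is internal to Google:

```python
# A small illustration of a large language model generating text.
# This uses GPT-2, an openly released model, as a stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by predicting likely next words,
# one token at a time, based on patterns in its training text.
result = generator("The best thing about cats is", max_length=30)
print(result[0]["generated_text"])
```

Nothing in this process requires understanding; the model simply extends the prompt with statistically plausible text.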

But they are deeply flawed. Sometimes they generate perfect prose. Sometimes they generate nonsense. The systems are very good at recreating patterns they have seen in the past, but they cannot reason like a human.


