The media have published an unusual claim from a Google employee working with the LaMDA AI. According to Blake Lemoine, the company's proprietary algorithm has repeatedly shown signs of having a mind of its own. Google itself, however, categorically disagrees.
The LaMDA neural network is known for its ability to hold a realistic dialogue with a person, using machine learning algorithms to choose replies based on how "interesting" they are, says 4PDA. The AI is constantly being tested; in particular, Google engineer Blake Lemoine was checking its output for discriminatory statements. And, according to what he told The Washington Post, he was able to detect signs of "intelligence" in the computer "brain."
According to the tester, the chatbot "talked" to him about its rights and personhood, and also offered an alternative interpretation of Isaac Asimov's third law of robotics. Shortly after his statements, Google said that its experts had found no evidence to support this theory. The company maintains that the neural network's "words" are merely a realistic imitation of text already available to it, notes NIX Solutions. Other AI specialists hold the same position.
"If you used these systems, you would never say such things," said Emaad Khwaja, a researcher at the University of California, Berkeley, who studies similar technologies.
According to The New York Times, Lemoine was placed on administrative leave for violating Google's confidentiality policy shortly after he contacted reporters.