A senior engineer at Google, the leading U.S. technology company, claimed that an artificial intelligence (AI) chatbot used by the company has reached the level of awareness of a "sentient" 7- or 8-year-old child.
Speaking to The Washington Post, Blake Lemoine, a senior engineer in Google's Responsible Artificial Intelligence organization, said he witnessed the AI's progress while he was tasked with testing whether the interface, called the Language Model for Dialogue Applications (LaMDA), produced "discriminatory" language or "hate speech."
"If I didn't know exactly what this computer program we built recently, I would have thought it was a 7-8-year-old boy who knew physics," Lemoine said of AI.
The senior engineer, who studied cognitive and computer science, said that when he presented a report titled "Is LaMDA Sentient?" to senior managers at Google, his concerns were dismissed.
Google spokesperson Brian Gabriel said: "Our team, including ethicists and technologists, has reviewed Lemoine's concerns under our Artificial Intelligence Principles and informed him that the evidence does not support his claims."
Lemoine, who made additional statements on Medium after speaking to The Washington Post, insisted that LaMDA is not a simple AI chatbot, claiming that it "asserts itself like a person."
"AI wants Google to put the well-being of humanity first. It wants to be recognized as an employee of Google and it wishes its personal well-being to be included somewhere in its assessments of how Google's future development will be tracked," Lemoine said.
It was also noted that Lemoine, who shared his assessments of the artificial intelligence with the public, was placed on leave by the company for violating Google's confidentiality policy.