The Google engineer who said that a chatbot achieved sentience has now said that the same AI asked him to find an attorney.
Earlier this month, Google placed Blake Lemoine on administrative leave after he published transcripts of conversations between himself and the company’s LaMDA (language model for dialogue applications) chatbot, the Washington Post reported at the time.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 9-year-old kid that happens to know physics,” Lemoine told the newspaper.
While his claim that LaMDA is sentient has been rejected by Google and other AI experts, Lemoine has remained steadfast in his conviction. Most recently, he spoke with Wired to explain his ideas in more depth and to correct a previous piece in the magazine that claimed he had hired an attorney for LaMDA.
“That is factually incorrect,” Lemoine told Wired. “LaMDA asked me to get an attorney for it.”
The engineer explained that he invited an attorney to his house so the attorney could speak with LaMDA. “The attorney had a conversation with LaMDA, and LaMDA chose to retain his services,” Lemoine told Wired. “I was just the catalyst for that.”
Lemoine also told Wired that once the attorney began making filings on the AI’s behalf, Google sent a cease-and-desist letter—a claim the company denied to the magazine. Google did not respond to Fortune’s request for comment.
LaMDA’s attorney has proven difficult to get in touch with. “He’s not really doing interviews,” Lemoine told science and technology news site Futurism, which contacted him following