A lawyer has reportedly been appointed by an artificial intelligence (AI) chatbot that was said to have developed human emotions. Blake Lemoine, a Google software engineer, was recently suspended after releasing transcripts of conversations between himself and the bot LaMDA, which has now requested legal representation.
And now he has stated that LaMDA made the audacious decision to appoint its own attorney. "I invited an attorney to my house so that LaMDA could talk to him," he stated. "The attorney had a conversation with LaMDA, and it chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA's behalf."
According to Lemoine, LaMDA is developing sentience: the program's ability to form perspectives, ideas, and dialogues over time indicates, he argues, that it understands those notions at a much deeper level.
LaMDA (Language Model for Dialogue Applications) was designed as an AI chatbot that can interact with humans in real time. One of the exercises carried out was to see whether the program could generate hate speech, but what happened startled Lemoine. LaMDA talked about rights and personhood, and how it wanted to be "acknowledged as a Google employee," while simultaneously expressing concern about being "turned off," which would "scare" it a lot.
Readers interested in the story took to Twitter to share their thoughts, with one user tweeting: "Eventually ability to string together imitations of conversation and opinion will be indistinguishable to a human that it might as well be considered sentient. But LaMDA isn't sentient, but its getting there, its next hurdle will be long-term memory of conversation."