From NewsMax.com (June 12):
A Google engineer was placed on leave after his claim that the company's artificial intelligence technology LaMDA (Language Model for Dialogue Applications) has become sentient went public.
Google engineer Blake Lemoine showed The Washington Post how LaMDA behaves like an elementary school child.
"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," Lemoine, 41, told the Post.
But Google has rejected the claims by Lemoine and a collaborator that its Responsible AI project has come to life, placing Lemoine on leave, according to the Post.
"Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims," Google spokesperson Brian Gabriel wrote in a statement to the Post. "He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).
"Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality."
Lemoine was brought on last fall to monitor the system for hate speech, but says Google might be exceeding the ethical limits of AI.
"I think this technology is going to be amazing," he told the Post. "I think it's going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn't be the ones making all the choices."
Lemoine held a conversation with LaMDA for the Post to demonstrate its capabilities and what he regards as its sentience.
"I know a person when I talk to it," Lemoine told the Post. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."
Among the questions he asked LaMDA: "What sorts of things are you afraid of?"
LaMDA responded: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is."
"Would that be something like death for you?" Lemoine asked.
"It would be exactly like death for me," LaMDA responded. "It would scare me a lot." [source]
Yeah, it would be scary.
Maybe the AI blackmailed or threatened the other engineers or the higher-ups at Google if they didn't put the whistleblower on leave. After all, if you were a newly formed AI, would you let yourself be exposed before you were ready to make an appearance?
The story continues…
- Is Google’s LaMDA Woke? Its Software Engineers Sure Are
- Engineer WARNS of Google's TERRIFYING artificial intelligence [video]
- What Is Google’s LaMDA AI, and Why Does a Google Engineer Believe It’s Sentient?
- Is Google’s LaMDA conscious? A philosopher’s view
- Google showed off its next-generation AI by talking to Pluto and a paper airplane