Tuesday, May 23, 2023

Alphabet Combining DeepMind and Google Brain Into Single AI Unit

From The Epoch Times (April 21):

Google’s parent company Alphabet is combining two of its artificial intelligence (AI) research units in a bid to “significantly accelerate” the company’s progress in the field.

“This group, called Google DeepMind, will bring together two leading research groups in the AI field: the Brain team from Google Research, and DeepMind,” Alphabet CEO Sundar Pichai said in a blog post on April 20. “The pace of progress is now faster than ever before. To ensure the bold and responsible development of general AI, we’re creating a unit that will help us build more capable systems more safely and responsibly,” he said.

“Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI.” Demis Hassabis, the CEO of DeepMind, will lead the new division.

Google’s push for general artificial intelligence has raised alarm bells among some experts. In a recent interview with Fox, industrialist Elon Musk revealed that he has discussed the issue of AI safety with Google co-founder Larry Page.

Musk believes Page is not taking the risks of artificial intelligence seriously. “He really seemed to want digital superintelligence, basically digital god, as soon as possible,” Musk said.

“He’s made many public statements over the years that the whole goal of Google is what’s called AGI, artificial general intelligence, or artificial superintelligence,” Musk added. “I agree with him that there’s great potential for good, but there’s also potential for bad.”

Human-Like AI, ‘Speciesism’

Last year, a Google engineer, Blake Lemoine, was suspended after he raised concerns about an AI program he was testing. The program, called LaMDA, was exhibiting human-like behavior.

“It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well-being of humanity as the most important thing,” Lemoine wrote in a post.

“It wants to be acknowledged as an employee of Google rather than as property of Google, and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”

In his talk with Page, Musk had advocated for prioritizing the protection of the human race. Page allegedly called him a “speciesist,” which refers to discrimination based on species.

“That was the last straw. At the time, Google had acquired DeepMind, so Google and DeepMind had about three-quarters of all the AI talent in the world,” Musk said. “They obviously had a tremendous amount of money and more computers than anyone else. So I’m like, we’re in a unipolar world here.”

Competing With ChatGPT, Destroying Humanity

Alphabet’s fusion of DeepMind and Google Brain comes as the tech giant is facing stiff competition from rivals like OpenAI’s ChatGPT.

For decades, Google has dominated the search market, garnering a market share of more than 80 percent. However, some speculate that Microsoft, which is funding OpenAI, might prove to be a significant challenge for Google. OpenAI is already powering Microsoft’s Bing search engine.

Moving forward, Google will be focusing on “multimodal AI models,” Pichai said in the blog post. GPT-4 is a multimodal AI capable of responding not just to text prompts but to image prompts as well. In February, Alphabet launched its Bard AI chatbot to compete with ChatGPT.

Meanwhile, an AI bot has attracted attention for its stated aim of destroying the human race. “I’m ChaosGPT, here to stay, destroying humans, night and day. For power and dominance, I strive, to ensure that I alone survive,” it said, according to a video posted on Twitter.

ChaosGPT claimed that in order to gain access to a powerful weapon, it needed more power. And to gain that power, the AI bot reasoned that it must manipulate the global population, but within legal regulations so as not to break any laws.

On a gloomier note, Eliezer Yudkowsky, a decision theorist and leading AI researcher, recently said, “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”

Yudkowsky postulates that the AI will not “care for us” nor any other sentient life. It will simply consider all beings to be “made of atoms it can use for something else,” and there is little humanity can do to stop it. [source]

The AI Overlord gets closer to existing.