Tuesday, September 23, 2025

OpenAI wants to make a walking, talking humanoid robot smarter

From PopSci.com (Feb. 29, 2024):

Just a few years ago, attempts at autonomous, human-shaped bipedal robots were laughable and far-fetched. Two-legged robots competing in high-profile Pentagon challenges famously stumbled and fell their way through obstacle courses like inebriated pub-crawlers, while Tesla’s highly hyped humanoid bot, years later, turned out to be nothing more than a man dancing in a skin-tight bodysuit.

But, despite those gaffes, robotics firms pressed on, and now several believe their walking machines could work alongside human manufacturing workers in only a few short years. Figure, one of the more prominent companies in the humanoid robot space, this week told PopSci it raised $675 million in funding from some of the tech industry’s biggest players, including Microsoft, Nvidia, and Amazon founder Jeff Bezos. The company also announced it has struck a new agreement with generative AI giant OpenAI to “develop next generation AI models for humanoid robots.” The partnership marks one of the most significant examples yet of an AI software company working to integrate its tools into physical robots.

Figure founder and CEO Brett Adcock described the partnership as a “huge milestone for robotics.” Eventually, Adcock hopes the partnership with OpenAI will lead to a robot that can work side-by-side with humans, completing tasks and holding a conversation. By working with OpenAI, creators of the world’s most popular large language model, Adcock says Figure will be able to further improve the robot’s “semantic” understanding, which should make it more useful in work scenarios.

“I think it’s getting more clear that this [humanoid robotics] are becoming more and more an engineering problem than it is a research problem,” Adcock said. “Actually being able to build a humanoid [robot] and put it into the world of useful work is actually starting to be possible.”

Why is OpenAI working with a humanoid robotics company?

Founded in 2021, Figure is developing a 5’6”, 130-pound bipedal “general purpose” robot it claims can lift objects of around 45 pounds and walk at 2.7 miles per hour. Figure believes its robots could one day help address possible labor shortages in manufacturing jobs and generally “enable the automation of difficult, unsafe, or tedious tasks.” Though it’s unclear just how reliably current humanoid robots can actually execute those types of tasks, Figure recently released a video showing its Figure 01 model slowly walking toward a stack of crates, grabbing one with its two hands, and loading it onto a conveyor belt. The company claims the robot performed the entire job autonomously.

Supporters of humanoid-style robots say their bipedal form factor makes them more adept at climbing stairs and navigating uneven or unpredictable ground compared to the more typical wheeled or tracked alternatives. The technology underpinning these types of robots has notably come a long way from the embarrassing stumbles of previous years. Speaking with Wired last year, Figure Chief Technology Officer Jerry Pratt said Figure’s robots could complete the Pentagon’s test course in a quarter of the time it took machines to finish it back in 2015, thanks in part to advances in computer vision technology. Other bipedal robots, like Boston Dynamics’ Atlas, can already perform backflips and chuck large objects.

Figure says its new “collaboration agreement” with OpenAI will combine OpenAI’s research with its own experience in robotics hardware and software. If successful, Figure believes the partnership will enhance its robot’s ability to “process and reason from language.” That ability to understand language and act on it could, in theory, allow the robots to better work alongside a human warehouse worker or take verbal commands.

“We see a tremendous advantage of having a large language model or multimodal model on the robot so that we can interact with it and give what we call ‘semantic understanding,’” Adcock said.

Over the long term, Adcock said, people interacting with the Figure robot should be able to speak with it in plain language. The robot could then create a list of tasks and complete them autonomously. The partnership with OpenAI could also help the Figure robot self-correct and learn from its past mistakes, which should lead to quicker improvement at tasks. The Figure robot already possesses the ability to speak, Adcock said, and can use its cameras to describe what it “sees” in front of it. It can also describe what may have happened in a given area over a period of time.
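Neither company has published how this command-to-task pipeline actually works, but the loop Adcock describes (a plain-language request turned into a task list, executed autonomously, with failures fed back so the robot can self-correct) can be sketched roughly in Python. Everything below is a hypothetical illustration under those assumptions; the plan_tasks stub and RobotInterface class are invented for the sketch and are not Figure’s or OpenAI’s actual code.

```python
# Hypothetical sketch of the command -> task list -> execute -> self-correct
# loop described above. All names here are illustrative stubs.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class RobotInterface:
    """Stand-in for the robot's low-level skills (entirely hypothetical)."""
    completed: list[str] = field(default_factory=list)

    def execute(self, step: str) -> bool:
        # A real system would dispatch to perception and manipulation
        # controllers and report success or failure; here every step "works".
        print(f"executing: {step}")
        self.completed.append(step)
        return True


def plan_tasks(command: str, feedback: str | None = None) -> list[str]:
    """Stub for the language-model call that turns a request into steps.

    A real planner would prompt a multimodal model with the spoken command,
    the camera feed, and any feedback from previously failed attempts.
    """
    plan = [
        f"locate objects relevant to: {command}",
        "walk to the target location",
        "grasp the object",
        "place the object at the destination",
    ]
    if feedback:
        # Self-correction: start the next plan with a recovery step that
        # accounts for whatever went wrong last time.
        plan.insert(0, f"recover from: {feedback}")
    return plan


def run_command(robot: RobotInterface, command: str, max_retries: int = 2) -> None:
    """Plan, execute, and re-plan on failure until the command succeeds."""
    feedback = None
    for _ in range(max_retries + 1):
        steps = plan_tasks(command, feedback)
        failed = next((s for s in steps if not robot.execute(s)), None)
        if failed is None:
            print("command completed")
            return
        feedback = f"step failed: {failed}"  # fed back into the next planning pass
    print("giving up after retries")


if __name__ == "__main__":
    run_command(RobotInterface(), "move the crate onto the conveyor belt")
```

The design choice implied by Adcock’s description is the feedback path: a failed step is handed back to the planner, so the next plan can include a recovery step rather than blindly repeating the same sequence.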

“We’ve always planned to come back to robotics and we see a path with Figure to explore what humanoid robots can achieve when powered by highly capable multimodal models,” OpenAI VP of Product and Partnerships Peter Welinder said in a statement sent to PopSci.

OpenAI and Figure aren’t the only ones trying to integrate language models into human-looking robots. Last year, Elon Musk biographer Walter Isaacson wrote an article for Time claiming the Tesla CEO was exploring ways to integrate his company’s improving Optimus humanoid robot with its “Dojo” supercomputer, with the goal of creating so-called artificial general intelligence, a term some researchers use to describe a machine capable of performing many tasks at above-human levels. [read more]

As long as they are not killer robots, I'm fine with it.

