From Arstechnica.com (July 28, 2023):
On Friday, Google DeepMind announced Robotic Transformer 2 (RT-2), a "first-of-its-kind" vision-language-action (VLA) model that uses data scraped from the Internet to enable better robotic control through plain language commands. The ultimate goal is to create general-purpose robots that can navigate human environments, similar to fictional robots like WALL-E or C-3PO.
When humans want to learn a task, we often read and observe. In a similar way, RT-2 utilizes a large language model (the tech behind ChatGPT) that has been trained on text and images found online. RT-2 uses this information to recognize patterns and perform actions even if the robot hasn't been specifically trained to do those tasks—a concept called generalization.
For example, Google says that RT-2 can allow a robot to recognize and throw away trash without having been specifically trained to do so. It uses its understanding of what trash is and how it is usually disposed of to guide its actions. RT-2 even recognizes discarded food packaging or banana peels as trash, despite the potential ambiguity.
In another example, The New York Times recounts a Google engineer giving the command, "Pick up the extinct animal," and the RT-2 robot locates and picks out a dinosaur from a selection of three figurines on a table.
This capability is notable because robots have typically been trained on vast numbers of manually acquired data points, a process made difficult by the time and cost of covering every possible scenario. Put simply, the real world is a dynamic mess, with changing situations and configurations of objects. A practical robot helper needs to be able to adapt on the fly in ways that are impossible to explicitly program, and that's where RT-2 comes in. [read more]
Pick up trash, huh? Maybe Google can make a robot with that AI tech and pick up trash in parks. Just an idea.