Google DeepMind's Table Tennis AI Achieves Human-Level Performance

Image: A robotic arm fitted with a table tennis paddle rallies against a human opponent in a laboratory, with performance metrics displayed on a screen in the background.

Google DeepMind has developed a groundbreaking robotic table tennis AI agent capable of "human-level speed and performance," winning 45% of its 29 matches against human opponents of varying skill levels.

The robot dominated beginners, winning 100% of those matches, and held its own against intermediate players, winning 55% of its matches against them. This achievement was made possible through a blend of simulated training and real-world data, allowing the AI to refine its skills and adapt to opponents' playing styles in real time.

Despite its success with amateur players, the robot struggled against advanced opponents, highlighting its current physical and skill limitations. While AI has been competing against humans in games like chess for years, this development marks significant progress in physical games, moving closer to the robotics community's goal of achieving human-level performance in real-world tasks. This advancement opens up new possibilities for robots that can adapt more effectively to the physical world.

A Step Towards Human-Level Robotics

Achieving human-level performance in real-world tasks is a key objective for the robotics research community. This new table tennis robot represents a significant step toward that goal. The sport of table tennis, known for its physical demands and strategic complexity, typically requires years of training for humans to master. The AI agent developed by Google DeepMind employs a hierarchical and modular policy architecture, which includes low-level controllers for specific skills and a high-level controller that selects the most appropriate skill based on game conditions.
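To make that architecture concrete, here is a minimal Python sketch of a two-level policy of this kind. The class names, skill names, and ball-state fields are illustrative assumptions for this article, not DeepMind's actual code: a high-level controller picks a style from the incoming ball, then dispatches to its currently preferred low-level skill.

```python
# Minimal sketch of a hierarchical table tennis policy. All names here are
# illustrative assumptions, not DeepMind's actual implementation.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class BallState:
    """Hypothetical incoming-ball observation (position and velocity)."""
    x: float
    y: float
    z: float
    vx: float
    vy: float
    vz: float


# A low-level controller is a policy trained for one specific skill,
# mapping the ball state to a paddle command (a string in this sketch).
LowLevelSkill = Callable[[BallState], str]


def forehand_topspin(ball: BallState) -> str:
    return "swing: forehand topspin"


def backhand_push(ball: BallState) -> str:
    return "swing: backhand push"


class HighLevelController:
    """Selects which low-level skill to execute for each incoming ball."""

    def __init__(self, skills: Dict[str, LowLevelSkill]):
        self.skills = skills
        # Running preference score per skill, updated from match play.
        self.preferences = {name: 0.0 for name in skills}

    def update(self, skill_name: str, point_won: bool) -> None:
        # Shift preference toward skills that win points (simplified).
        self.preferences[skill_name] += 1.0 if point_won else -1.0

    def choose_skill(self, ball: BallState) -> str:
        # Step 1: pick a style from the ball's position relative to the robot.
        style = "forehand" if ball.y >= 0 else "backhand"
        # Step 2: among skills of that style, pick the highest-preference one.
        candidates = [n for n in self.skills if n.startswith(style)]
        return max(candidates, key=lambda n: self.preferences[n])

    def act(self, ball: BallState) -> str:
        return self.skills[self.choose_skill(ball)](ball)


controller = HighLevelController({
    "forehand_topspin": forehand_topspin,
    "backhand_push": backhand_push,
})
print(controller.act(BallState(0.5, 0.3, 0.2, -4.0, 0.1, 1.5)))
```

The appeal of this split is modularity: new shots can be trained as independent low-level skills without retraining the decision layer that chooses between them.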

The robot's performance was evaluated over 29 matches against human players ranging from beginner to advanced. While the robot excelled against less experienced players, it lost every match against advanced players. This outcome places its play at a solid amateur level, particularly in rallies.

Innovative Training and Real-Time Adaptation

The robot's training combined simulation with real-world practice. Initially, a small amount of data from human-human matches was used to seed the task conditions. The AI was then trained with reinforcement learning in simulation, and the resulting policy was deployed zero-shot to real hardware. Data gathered from matches against humans was fed back into training, allowing the robot to continually improve as the standard of play rose.
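The loop below is a compact, runnable illustration of that sim-to-real cycle under heavy simplifying assumptions: every function is a stub standing in for the real simulator, RL trainer, and robot deployment, and the numeric "skill" value is only a proxy for a learned policy.

```python
# Runnable sketch of the iterative sim-to-real training cycle. Every
# function is a stubbed stand-in (an assumption), not a real API.
import random


def load_human_human_ball_data():
    """Stub: seed task conditions (here, ball speeds) from human matches."""
    return [random.uniform(2.0, 5.0) for _ in range(20)]


def train_with_rl_in_sim(skill_level, tasks):
    """Stub: RL in simulation nudges skill toward the hardest seen tasks."""
    return skill_level + 0.1 * (max(tasks) - skill_level)


def play_against_humans(skill_level):
    """Stub: zero-shot deployment; humans respond with harder shots."""
    return [skill_level + random.uniform(0.0, 1.0) for _ in range(10)]


def expand_task_distribution(tasks, match_logs):
    """Stub: fold real-world ball data back into the training tasks."""
    return tasks + match_logs


skill = 1.0  # Stand-in for the policy's capability.
tasks = load_human_human_ball_data()
for cycle in range(5):
    skill = train_with_rl_in_sim(skill, tasks)    # 1. Train in simulation.
    logs = play_against_humans(skill)             # 2. Deploy zero-shot.
    tasks = expand_task_distribution(tasks, logs) # 3. Grow the task set.
    print(f"cycle {cycle}: skill proxy = {skill:.2f}")
```

The key property the sketch captures is the feedback loop: each round of real play enriches the task distribution, so the next round of simulated training targets harder, more realistic conditions.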

The AI agent’s high-level controller first selects the appropriate playing style (forehand or backhand) and then narrows down the choice of specific low-level skills using game statistics and the opponent’s strengths and weaknesses. This adaptive system enables the robot to update its strategy throughout the match, making it a challenging opponent for human players.
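One simple way to realize this kind of in-match adaptation, sketched below with a bandit-style statistic tracker (our simplification, not necessarily DeepMind's method), is to keep per-skill success rates against the current opponent and favor whichever skill has worked best so far.

```python
# Sketch of in-match skill adaptation via running success statistics.
# All names and the selection rule are illustrative assumptions.
from collections import defaultdict
import random


class SkillStats:
    """Running win/attempt counts per low-level skill vs. this opponent."""

    def __init__(self, skills):
        self.attempts = defaultdict(int)
        self.successes = defaultdict(int)
        self.skills = list(skills)

    def record(self, skill, point_won):
        self.attempts[skill] += 1
        self.successes[skill] += int(point_won)

    def pick(self):
        # Try each skill at least once, then exploit the best success rate.
        untried = [s for s in self.skills if self.attempts[s] == 0]
        if untried:
            return random.choice(untried)
        return max(self.skills,
                   key=lambda s: self.successes[s] / self.attempts[s])


stats = SkillStats(["forehand_topspin", "forehand_drive", "backhand_push"])
for _ in range(30):
    skill = stats.pick()
    # Simulated outcome: this opponent struggles with topspin shots.
    won = random.random() < (0.7 if "topspin" in skill else 0.4)
    stats.record(skill, won)
print("preferred skill:", stats.pick())
```

After a few rallies, the tracker converges on the shots this particular opponent handles worst, mirroring how the robot updates its strategy over the course of a match.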

Human Interaction and Future Development

Participants in the study found the robot to be fun and engaging, with many expressing a desire to play with it again. The robot’s ability to provide a dynamic and enjoyable practice experience was highlighted, particularly as a more interactive alternative to traditional ball throwers.

However, advanced players were able to identify and exploit weaknesses in the robot’s strategies, such as its difficulty in handling underspin. This feedback is being used to further train the robot and improve its performance in future iterations.

Google DeepMind’s robotic table tennis AI represents a major milestone in the quest for human-level robotic performance in physical tasks. As the technology continues to evolve, it promises to enhance our understanding of how AI can interact with and adapt to the physical world, ultimately leading to more capable and versatile robots. For more details on the training process and to see more videos, visit DeepMind’s website.