
Robot Trained by Watching Surgery Videos Matches Skill of Human Doctor

A robotic surgical arm performs a procedure in a futuristic operating room while a screen shows a video feed of a human surgeon performing the same task, illustrating the robot’s training through imitation learning. Text on the image reads, “Robot Trained by Watching Surgery Videos Matches Human Skill.”

Image Source: ChatGPT-4o


In a groundbreaking development for medical robotics, a robot trained by watching videos of experienced surgeons has performed surgical tasks as proficiently as human doctors. The project, led by researchers at Johns Hopkins University, marks a major advancement in imitation learning—a method that allows machines to learn complex actions by observing rather than being programmed step-by-step.

This achievement could push the field of robotic surgery closer to a future where autonomous robots can handle intricate medical procedures independently, without the need for extensive pre-programmed instructions.

Breakthrough in Imitation Learning for Robotic Surgery

Johns Hopkins researchers showcased their findings at the Conference on Robot Learning in Munich, a premier gathering for robotics and machine learning experts. Using imitation learning, the research team taught the da Vinci Surgical System robot to perform three key tasks often required in surgery: needle manipulation, tissue lifting, and suturing. In every case, the robot matched the skill level of human doctors.

“It’s really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery,” said senior author Axel Krieger, assistant professor in Johns Hopkins’ Department of Mechanical Engineering. “We believe this marks a significant step forward toward a new frontier in medical robotics.”

How Imitation Learning Trains Robots Through Video

The research team trained the model using hundreds of videos taken from wrist-mounted cameras on da Vinci Surgical System robots. These cameras captured footage from actual surgeries performed around the world, creating a vast archive for the robot to "learn" from. With nearly 7,000 da Vinci systems in use and over 50,000 trained surgeons globally, the project had access to an extensive dataset for imitation learning.

Unlike traditional programming methods, where each movement must be manually coded, imitation learning enables the model to observe and reproduce complex actions. The robot’s model operates similarly to language models like ChatGPT but translates visual data into robotic kinematics—the mathematical language of motion.
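As a rough illustration of the idea (a minimal sketch of behavior cloning, not the team's actual architecture), imitation learning can be framed as supervised regression from encoded camera observations to robot actions. All dimensions and variable names below are hypothetical, and the camera frames are assumed to be pre-encoded as feature vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each demonstration pairs an encoded camera frame
# with the surgeon's action (e.g. small joint-position deltas).
n_demos, feat_dim, act_dim = 200, 16, 4
true_W = rng.normal(size=(feat_dim, act_dim))                # hidden "expert" mapping
X = rng.normal(size=(n_demos, feat_dim))                     # encoded camera frames
Y = X @ true_W + 0.01 * rng.normal(size=(n_demos, act_dim))  # demonstrated actions

# Behavior cloning: fit a policy to reproduce the expert's actions
# by minimizing mean-squared error with gradient descent.
W = np.zeros((feat_dim, act_dim))
lr = 0.1
for _ in range(500):
    residual = X @ W - Y
    grad = X.T @ residual / n_demos   # gradient of the MSE loss
    W -= lr * grad

final_loss = float(np.mean((X @ W - Y) ** 2))
```

After training, the learned policy maps a new observation to a predicted action, which is the core of "feed it camera input and it can predict the robotic movements." The real system uses a far richer model over video, but the training signal is the same: demonstrations in, actions out.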

Overcoming Traditional Challenges in Robotic Surgery

Despite the da Vinci robot’s popularity, it is often criticized for its lack of precision. To address this, the researchers trained the model to execute relative, rather than absolute, movements—improving the robot’s accuracy in performing tasks.
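A toy numerical sketch of why relative commands help (an assumption about the general principle, not the researchers' exact formulation): if the robot's reported pose carries a constant calibration offset, an absolute target inherits that offset, while a relative delta cancels it out.

```python
import numpy as np

# Hypothetical 3D end-effector poses with an unknown constant calibration offset.
offset = np.array([0.5, -0.3, 0.2])
true_pose = np.array([1.0, 2.0, 3.0])

# Absolute command: the robot drives its *reported* pose to the target,
# so the true pose ends up off by the calibration offset.
absolute_target = np.array([1.1, 2.0, 3.0])
true_after_abs = absolute_target - offset
abs_err = float(np.linalg.norm(true_after_abs - absolute_target))

# Relative command: "move by this delta" shifts the true pose by exactly
# that delta, regardless of the offset in the reported pose.
delta = np.array([0.1, 0.0, 0.0])
true_after_rel = true_pose + delta
rel_err = float(np.linalg.norm(true_after_rel - (true_pose + delta)))
```

In this toy model the absolute command misses by the full offset magnitude while the relative command is exact, which mirrors the accuracy gain the researchers report from predicting relative movements.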

“All we need is image input, and then this AI system finds the right action,” said lead author Ji Woong “Brian” Kim, a postdoctoral researcher at Johns Hopkins. “We find that even with a few hundred demos, the model is able to learn the procedure and generalize to new environments it hasn’t encountered.” Krieger added: “The model is so good at learning things we haven’t taught it. Like if it drops the needle, it will automatically pick it up and continue. This isn’t something I taught it to do.”

Implications for Future Autonomous Surgery

The success of imitation learning in training the da Vinci robot opens up new possibilities for fully autonomous surgical systems. With this model, researchers believe they could quickly train robots to perform any surgical procedure by having them watch video recordings, reducing training time from years to mere days.

Previously, programming a robot to perform a specific surgical technique required manually coding each step, often taking years for complex procedures. “It’s very limiting,” Krieger said. “What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days.”

This advancement could accelerate the development of autonomous robots that reduce human error and improve surgical precision. The team is now working on training a robot to perform entire surgeries, aiming to pave the way for safer, more efficient healthcare.

What This Means for Medical Robotics

The application of imitation learning in medical robotics represents a significant leap toward autonomous surgery. If refined and widely adopted, this approach could alleviate the burden on human surgeons, reduce medical errors, and increase surgical precision. However, autonomous surgery robots also raise ethical questions and will require rigorous testing to ensure patient safety.

As this technology advances, researchers like those at Johns Hopkins and Stanford University are helping shape the future of healthcare. By combining imitation learning with cutting-edge robotics, they are moving closer to a world where robots play an active role in complex medical procedures, potentially transforming surgery as we know it.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.