Surgery has a reputation as an intellectual profession, but much of its actual practice involves simple, routine tasks performed on nearly every patient. Suturing is one of them, and a team of researchers from the University of California, Berkeley and Intel believes it is possible to teach a robot to suture by having it learn from videos of real surgeons at work. This may one day allow human surgeons to focus on their core tasks while leaving the monotonous work to robots.
The team has already developed a system, called Motion2Vec, that analyzed many publicly available videos of surgeries and used that knowledge to teach a da Vinci surgical robot to apply sutures across incisions. Their findings, which have not yet been peer reviewed but are posted on arXiv, show an “85.5% segmentation accuracy on average suggesting performance improvement over several state-of-the-art baselines, while kinematic pose imitation gives 0.94 centimeter error in position per observation on the test set.”
The team relied on modern machine learning techniques, including deep neural networks and, in particular, Siamese networks, which learn embeddings that place similar images close together so they can be compared and grouped. These techniques provide a way to teach robots tasks that, at present, only humans can perform. Perhaps with enough videos to watch, the system may one day even help with performing routine procedures.
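To make the Siamese-network idea concrete, here is a minimal, hypothetical sketch in PyTorch; it is not Motion2Vec’s actual code or architecture. A single encoder embeds both inputs of a pair, and a contrastive loss pulls matching pairs together in the embedding space while pushing mismatched pairs apart. The layer sizes, loss margin, and toy random data are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Twin encoder: the same weights embed both inputs of a pair."""
    def __init__(self, in_dim=128, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        # L2-normalize so distances are comparable across samples
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pull embeddings of matching pairs together; push mismatches apart."""
    dist = F.pairwise_distance(z1, z2)
    loss_same = same_label * dist.pow(2)
    loss_diff = (1 - same_label) * F.relu(margin - dist).pow(2)
    return (loss_same + loss_diff).mean()

# Toy training step on random "frame feature" pairs with
# binary labels: 1 = same action, 0 = different actions.
encoder = SiameseEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x1, x2 = torch.randn(16, 128), torch.randn(16, 128)
same = torch.randint(0, 2, (16,)).float()

optimizer.zero_grad()
loss = contrastive_loss(encoder(x1), encoder(x2), same)
loss.backward()
optimizer.step()
```

In a setup like Motion2Vec’s, the paired inputs would be features from surgical video frames, and grouping them in the learned embedding space is what supports segmenting a demonstration into distinct motions that the robot can then imitate.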
Here’s a demonstration of the Motion2Vec system:
And here’s a presentation about the technology at the International Conference on Robotics and Automation:
Study on arXiv: Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos
Link: Motion2Vec project info page…
(hat tip: Engadget)