
Rigid 3D Registration of Pre-operative Information for Semi-Autonomous Surgery

Nicola Piccinelli;Andrea Roberti;Eleonora Tagliabue;Francesco Setti;Riccardo Muradore;Paolo Fiorini
2020-01-01

Abstract

Autonomous surgical robotics is the new frontier of surgery. In recent years, several studies have analysed the feasibility of autonomy in the field of robotic minimally invasive surgery (R-MIS). One of the most important requirements for such a system is the capability of reconstructing the patient's 3D anatomy in real-time and registering it with pre-operative data. A popular approach to address this problem is to use simultaneous localisation and mapping (SLAM) techniques. However, they suffer from the lack of a correct scaling factor for the 3D model when a monocular vision system is used. In this paper we register the sparse point cloud obtained via SLAM to the pre-operative model of the patient, in order to guide a robotic arm in performing some representative surgical tasks. To achieve this goal, we propose to recover the environment scaling factor for the SLAM point cloud by exploiting the kinematics of the da Vinci® Endoscopic Camera Manipulator (ECM). The proposed approach is tested in a real environment using an anatomically realistic phantom whose pre-operative model is extracted from the phantom's magnetic resonance imaging (MRI) scan. Validation is carried out by performing the bladder pushing task during a radical prostatectomy procedure.
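The two technical steps the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the monocular scale factor is recovered as the ratio between camera displacements reported by the ECM kinematics (metric) and the corresponding SLAM trajectory (unitless), and that the rescaled point cloud is then rigidly aligned to the pre-operative model with a standard Kabsch/SVD solver given known correspondences. All function names are illustrative.

```python
import numpy as np

def estimate_scale(kinematic_positions, slam_positions):
    """Estimate the monocular SLAM scale factor.

    Compares the metric distance travelled by the camera (from the
    ECM kinematics) with the same trajectory as estimated by SLAM.
    Both inputs are (N, 3) arrays of time-aligned camera positions.
    """
    d_kin = np.linalg.norm(np.diff(kinematic_positions, axis=0), axis=1)
    d_slam = np.linalg.norm(np.diff(slam_positions, axis=0), axis=1)
    return np.sum(d_kin) / np.sum(d_slam)

def rigid_register(source, target):
    """Rigid (rotation + translation) registration via Kabsch/SVD.

    Returns R, t such that target ≈ source @ R.T + t, assuming
    row-wise correspondence between the two (N, 3) point sets.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

In practice the sparse SLAM cloud and the MRI-derived model have no known correspondences, so a solver such as ICP would replace the closed-form step above; the scale recovery, however, only needs the synchronised camera trajectories.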
2020
Rigid 3D Registration, Autonomous Surgery, SLAM


Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1015030
Citations
  • PMC: n/a
  • Scopus: 5
  • Web of Science (ISI): n/a