The ability to recognize gestures in a minimally-invasive surgical scenario is key towards greater integration of automated systems that could support surgeons during everyday operations. For instance, it can act as a post-operative tool for quality assessment and surgeon training, or it can be integrated as a high-level supervisor over both manual and fully autonomous surgeries. To this end, the implementation of a real-time recognition algorithm capable of distinguishing among fine-grained surgical gestures, which are instances of generic representations called "surgemes", becomes necessary. This work applies a combined Spatiotemporal Convolutional Neural Network that interpolates temporal features to correctly pinpoint the occurrence of gesture variations and provides the overall segmentation confidence for further analysis. Thanks to its reduced computational profile and overall performance, it allows for real-time action segmentation specifically tuned on surgical gestures.
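The abstract describes per-frame gesture segmentation with per-frame confidence scores. The following is a minimal illustrative sketch, not the paper's architecture: it assumes per-frame spatial features have already been extracted by a spatial CNN, applies a hypothetical 1-D temporal convolution to score each frame against a set of surgeme classes, and reports the softmax maximum as the segmentation confidence. All shapes, names, and parameters here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def segment_gestures(features, kernel, bias):
    """Toy temporal segmentation head (illustrative, not the paper's model).

    features: (T, D) per-frame spatial features, assumed precomputed
    kernel:   (K, D, C) temporal convolution weights (K frames, C surgemes)
    bias:     (C,) per-class bias
    Returns per-frame surgeme labels and per-frame confidence scores.
    """
    T, _ = features.shape
    K, _, _ = kernel.shape
    pad = K // 2
    padded = np.pad(features, ((pad, pad), (0, 0)))  # same-length output
    logits = np.stack([
        np.einsum('kd,kdc->c', padded[t:t + K], kernel) + bias
        for t in range(T)
    ])                                   # (T, C) per-frame class scores
    probs = softmax(logits, axis=1)
    labels = probs.argmax(axis=1)        # per-frame surgeme id
    confidence = probs.max(axis=1)       # per-frame segmentation confidence
    return labels, confidence

# Usage with random stand-in data (8 frames, 16-dim features, 4 surgemes).
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
kern = rng.normal(size=(3, 16, 4))      # temporal receptive field of 3 frames
labels, conf = segment_gestures(feats, kern, np.zeros(4))
print(labels.shape, conf.shape)
```

In a real model the temporal weights would be learned end-to-end and the receptive field tuned to typical gesture durations; the confidence track can then flag low-certainty boundaries for further analysis, as the abstract suggests.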
|Title:||Surgical Action Recognition with Spatiotemporal Convolutional Neural Networks|
|Publication date:||2019|
|Appears in collections:||04.01 Contribution in conference proceedings|