Manipulation and Modeling of Human Motion is an interdisciplinary collaboration between engineers, surgeons, and methodologists/biostatisticians. For details on the principal investigators and researchers involved in the project, please refer to the people page.
The following are broad goals of our collaborative research:
- Surgical Activity Recognition
- Objective Skill Assessment
- Automated Feedback and Individualized Learning
- Human Machine Collaboration
For detailed information, refer here.
To accomplish our research objectives, we rely on multiple data sets comprising kinematic data describing surgical tool motion, video recordings of operators performing surgical tasks, and annotations for surgical tasks, maneuvers, gestures, and surgical skill. These data sets have been captured on various surgical platforms, including robotic and endoscopic systems, open surgery, and simulation (the da Vinci Skills Simulator). Read more…
Our research involves applying existing methods, and developing new ones, for data representation, statistical modeling of surgical activity, and classification of surgical skill.
- Data Representation
- Statistical models in our research studies rely on both continuous and discrete representations of surgical motion data. For continuous representations, we use dimensionality-reduction methods such as linear discriminant analysis (LDA) or principal component analysis (PCA). For discrete representations, we quantize the data using two approaches: simple vector quantization with k-means clustering, and a motion-based encoding called Descriptive Curve Coding (DCC). Read more…
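As a minimal sketch of these two kinds of representation, the snippet below projects a synthetic kinematic stream onto its top principal components and vector-quantizes each time step with a few k-means (Lloyd) iterations. The data, feature count, subspace dimension, and number of clusters are all illustrative assumptions, not values from our studies.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy kinematic stream: 500 time steps x 19 features per step
# (standing in for tool positions, orientations, velocities, etc.).
X = rng.normal(size=(500, 19))

# Continuous representation: PCA via SVD of the centered data,
# keeping the top 5 components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_continuous = Xc @ Vt[:5].T            # shape (500, 5)

# Discrete representation: vector quantization with k-means,
# mapping each time step to one of 16 symbols.
k = 16
centers = X[rng.choice(len(X), size=k, replace=False)]
for _ in range(20):
    # Assign each time step to its nearest center, then update centers.
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        pts = X[labels == j]
        if len(pts):
            centers[j] = pts.mean(axis=0)
symbols = labels                        # shape (500,), values in 0..15
```

The symbol sequence produced this way is the kind of discrete input a sequence model (or an encoding such as DCC) can consume in place of the raw continuous signal.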
- We apply and develop various methods to model surgical motion data and recognize its component units. For each of the following models, our first goal is to recognize smaller units of surgical motion, such as gestures and maneuvers.
- Hybrid Dynamic Systems
- Language Models
- String Motif-based Models
- Models based on Fractal Geometry of Surgical Motion. Read more…
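The models above are only named here, but they share a common pattern: given a discrete (quantized) motion sequence, infer the underlying sequence of gesture units. As a generic illustration, the sketch below decodes a toy hidden Markov model with the Viterbi algorithm; the states, symbols, and probabilities are invented for the example and are not parameters from any of the listed models.

```python
import numpy as np

# Toy HMM: 2 hidden gesture states emitting 3 quantized motion
# symbols. All probabilities below are made-up illustrations.
states = ["reach", "grasp"]
log_pi = np.log([0.6, 0.4])             # initial state probabilities
log_A = np.log([[0.8, 0.2],             # state transition matrix
                [0.3, 0.7]])
log_B = np.log([[0.7, 0.2, 0.1],        # emission probabilities
                [0.1, 0.3, 0.6]])

obs = [0, 0, 1, 2, 2, 2, 1, 0]          # example quantized symbol sequence

# Viterbi decoding in log space: delta[t, j] is the best log-score of
# any state path ending in state j at time t; back[t, j] records the
# predecessor that achieved it.
T, N = len(obs), len(states)
delta = np.empty((T, N))
back = np.zeros((T, N), dtype=int)
delta[0] = log_pi + log_B[:, obs[0]]
for t in range(1, T):
    scores = delta[t - 1][:, None] + log_A
    back[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + log_B[:, obs[t]]

# Backtrack from the best final state to recover the gesture sequence.
path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(back[t][path[-1]]))
path.reverse()
decoded = [states[s] for s in path]
```

Each of the models listed above replaces some part of this pipeline, for example with richer dynamics (hybrid dynamic systems) or richer sequence statistics (language models), while keeping the same decode-a-label-sequence objective.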
- Unsupervised Learning for Surgical Motion
- R. DiPietro, G.D. Hager: Unsupervised Learning for Surgical Motion by Learning to Predict the Future. Medical Image Computing and Computer Assisted Intervention (2018).
- Recurrent Neural Networks to Capture Extremely Long-Term Dependencies
- R. DiPietro, C. Rupprecht, N. Navab, G.D. Hager: Analyzing and Exploiting NARX Recurrent Neural Networks for Long-Term Dependencies. ICLR Workshop (2018).
- Recurrent Neural Networks for Surgical Activity Recognition
- R. DiPietro, C. Lea, A. Malpani, N. Ahmidi, S. Vedula, G.I. Lee, M.R. Lee, G.D. Hager: Recognizing Surgical Activities with Recurrent Neural Networks. Medical Image Computing and Computer Assisted Intervention (2016). (Oral presentation.)
- JIGSAWS Released!
- Our group has released the complete JIGSAWS data set for public use, providing fellow researchers with rich data for surgical activity recognition and skill assessment.
- IROS Workshop on Human SensoriMotor Control (Chicago, USA)
- Colin Lea presented his work titled “Using Vision to Improve Activity Recognition in Surgical Training Tasks”, co-authored by Gregory Hager and Rene Vidal.
- Anand Malpani presented his work titled “Evaluating Surgical Training Task Segments: using the Crowd and the Machine”, co-authored by Gregory Hager.
- M2CAI Workshop at MICCAI 2014 (Boston, USA)
- Piyush Poddar’s work, titled “Automated Objective Surgical Skill Assessment in the Operating Room Using Unstructured Tool Motion” and co-authored by Narges Ahmidi, Swaroop Vedula, Masaru Ishii, Lisa Ishii, and Gregory Hager, won the Best Paper Award.
- Yixin Gao presented her work titled “Language of Surgery: A Surgical Gesture Dataset for Human Motion Modeling” co-authored by Swaroop Vedula, Carol Reiley, Narges Ahmidi, Balakrishna Varadarajan, Henry Lin, Lingling Tao, Luca Zappella, Benjamin Bejar, David Yuh, Grace Chen, Rene Vidal, Sanjeev Khudanpur, Gregory Hager.
- Anand Malpani presented his paper titled “Pairwise comparison-based objective score for automated skill assessment of segments in a surgical task”, co-authored by Swaroop Vedula, C. C. Grace Chen, and Gregory D. Hager, at IPCAI 2014 (June 28, Fukuoka, Japan).
This project acknowledges the following sources of support:
- Science of Learning Institute
- Automated Assessment of the Effects of System Limitations Based Upon Data Collected from Multiple Training Centers (NIH, R. Kumar (PI), G. Hager, Dr. D. Yuh)
- Structure Induction for Manipulative and Interactive Tasks (NSF-IIS, Hager (PI), S. Khudanpur)
- CDI Type-II: Language Models for Human Dexterity (NSF, Hager (PI), S. Khudanpur, R. Vidal, R. Kumar, S. Hsiao, Dr. D. Yuh)
- CPS:Medium: Hybrid Systems for Modeling and Teaching the Language of Surgery, (NSF, Hager (PI), S. Khudanpur, R. Vidal, R. Kumar, Dr. G. Chen)
- Intuitive Surgical Inc.