Surgical training and evaluation have traditionally been an interactive but slow process in which interns and junior residents perform operations under the supervision of a faculty surgeon. This method of training lacks any objective means of quantifying and assessing surgical skill. Economic pressure to reduce the cost of training surgeons and national limits on resident work hours have created a need for efficient methods to supplement the traditional training paradigm. While surgical simulators aim to provide such training, their impact as training tools is limited because they are generally operation-specific and cannot be broadly applied. Robot-assisted minimally invasive surgical systems, such as Intuitive Surgical's da Vinci, add a further challenge to this paradigm due to their steep learning curves. However, their ability to record quantitative motion and video data opens up the possibility of creating descriptive, mathematical models to recognize and analyze surgical training and performance. These models can then be used to help evaluate and train surgeons, produce quantitative measures of surgical proficiency, automatically annotate surgical recordings, and provide data for a variety of other applications in medical informatics.
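As a toy illustration of the kind of problem involved (not the project's actual models, which are statistical and learned from data), the sketch below segments a one-dimensional tool-speed trace into candidate motion primitives by simple thresholding. The function name, labels, and threshold are all illustrative assumptions.

```python
# Toy sketch: split a 1-D tool-speed trace into "move"/"idle" runs by
# thresholding. Real gesture-recognition pipelines use richer kinematic
# features and statistical models; this only shows the segmentation idea.

def segment_speed(trace, threshold=0.5):
    """Return a list of (label, start, end) runs over the trace."""
    segments = []
    start = 0
    moving = trace[0] >= threshold
    for i, v in enumerate(trace[1:], start=1):
        state = v >= threshold
        if state != moving:
            segments.append(("move" if moving else "idle", start, i))
            start, moving = i, state
    segments.append(("move" if moving else "idle", start, len(trace)))
    return segments

speeds = [0.1, 0.2, 0.9, 1.1, 0.8, 0.2, 0.1, 0.7, 0.9]
print(segment_speed(speeds))
# → [('idle', 0, 2), ('move', 2, 5), ('idle', 5, 7), ('move', 7, 9)]
```

Each run of consecutive above- or below-threshold samples becomes one labeled segment; a learned model would replace the fixed threshold with per-gesture statistics.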
The Language of Surgery research project is based in the Laboratory for Computational Sensing and Robotics (LCSR) at Johns Hopkins University. We are interested in modeling and understanding the underlying structures of surgical motion. We would eventually like to use this understanding to create benchmarks for surgical skill evaluation, to develop methods for better surgical training, and to automate the documentation of surgeries for libraries. Members of this project come from several other labs, including the Computational Interaction and Robotics Lab (CIRL), the Center for Language and Speech Processing (CLSP), the Haptics Lab, and the Johns Hopkins Medical Institutions (JHMI). We are also affiliated with Intuitive Surgical, Inc. and the National Science Foundation (NSF).
Gregory Hager, Sanjeev Khudanpur, Rene Vidal
Dr. David Yuh, Dr. Grace Chen
Dr. Li-Ming Su, Sandrine Voros, James Woo, Taylor Harmon, Jessica Hu, Tiffany Chen
- C. E. Reiley and G. D. Hager. Decomposition of Robotic Surgical Tasks: An Analysis of Subtasks and Their Correlation to Skill. MICCAI M2CAI Workshop, 2009 (accepted as poster).
- B. Varadarajan, C. E. Reiley, H. C. Lin, S. Khudanpur, and G. D. Hager. Data-Derived Models for Segmentation with Application to Surgical Assessment and Training. MICCAI, 5761:426-434, 2009 (poster).
- C. E. Reiley and G. D. Hager. Task versus Subtask Surgical Skill Evaluation of Robotic Minimally Invasive Surgery. MICCAI, 5761:435-442, 2009 (poster).
- T. L. Chen, H. C. Lin, S. Shippey, J. Y. Hu, D. Burschka, and G. D. Hager. Vision-Based Surgical Tool Tracking System for Automatic Surgical Skill Evaluation. Medicine Meets Virtual Reality (MMVR), 2009.
- C. E. Reiley, H. C. Lin, B. Varadarajan, S. Khudanpur, D. D. Yuh, and G. D. Hager. Automatic Recognition of Surgical Motions Using Statistical Modeling for Capturing Variability. Medicine Meets Virtual Reality (MMVR), 132:396-401, 2008.
- T. L. Chen, H. C. Lin, S. Shippey, V. L. Handa, and G. D. Hager. Tool Tracking for Surgical Motion Segmentation and Recognition for Surgical Skill Assessment. Biomedical Engineering Society Meeting, Los Angeles, CA, 2007.
- H. C. Lin, I. Shafran, D. Yuh, and G. D. Hager. Towards Automatic Skill Evaluation: Detection and Segmentation of Robot-Assisted Surgical Motions. Computer Aided Surgery, 11(5):220-230, September 2006.
- H. C. Lin, I. Shafran, D. D. Yuh, and G. D. Hager. Vision-Based Automatic Detection and Segmentation of Robot-Assisted Surgical Motions. Medicine Meets Virtual Reality (MMVR), 2006.
- H. C. Lin, I. Shafran, T. E. Murphy, A. M. Okamura, D. D. Yuh, and G. D. Hager. Automatic Detection and Segmentation of Robot-Assisted Surgical Motions. Medical Image Computing and Computer Assisted Intervention (MICCAI), LNCS 3749, pages 802-810, 2005. Best Student Award at MICCAI 2005.
This research project is based upon work supported by the National Science Foundation under Grant No. 0534359. Students on this project have also been funded by the NSF Graduate Fellowship Program and the Link Foundation Fellowship.