The Language of Surgery project is based on the premise that surgical activity, like most human activity, is structured. Data from the surgical field capture this structure. Research in the Language of Surgery project aims to discover structure in surgical activity and to develop applications that rely upon this structure. Since 2006, the project has focused on three threads: developing representations of surgical instrument motion and video image data that both generalize across activities and are invariant to factors such as surgeons’ skill or style; objectively assessing technical skill; and providing data-driven, targeted feedback to support acquisition of technical skill. The Language of Surgery project includes multiple studies with different types of data, which provide a unique opportunity to apply state-of-the-art machine learning techniques to important clinical challenges.
Septoplasty is a procedure to correct a bent middle wall of the nose (the septum). Surgeons separate the cartilage and bone in the wall from the membrane covering them, reshape the cartilage, remove bony spurs, and restore the septum to the midline. All trainees in Otolaryngology – Head & Neck Surgery must learn to competently perform septoplasty.
However, this procedure poses unique challenges to surgeons and educators. Only one surgeon can clearly see the surgical field inside the nose at any time, so it is hard for the supervising surgeon to observe, guide, and evaluate trainee surgeons. Furthermore, there are no reproducible methods to assess trainees’ skill and learning in the operating room. The overall goal of this study is to develop automated, objective measures of technical skill and competency in septoplasty, relative to anatomical structure, using data from the surgical field.
We capture data from a cohort of trainee and faculty surgeons over time. The data include instrument motion captured with electromagnetic tracking sensors, and faculty and trainee responses to questionnaires about case difficulty, activities performed, and technical skill. Data are captured at three sites: Johns Hopkins, the Facial Plastic SurgiCenter, and the Veterans Affairs Medical Center in Washington, DC. By January 2019, we had recorded data for nearly 250 septoplasty procedures. In addition, a prior study at Johns Hopkins provided data for another 120 septoplasty procedures.
Our objectives in this study are to (1) develop and validate automated and objective measures of technical skill and competency; (2) assess technical skill relative to shape of septum obtained using instrument motion data; and (3) determine how patient and surgeon factors, such as case difficulty, comorbidities, and surgeons’ sleepiness, affect technical skill and surgical outcomes. We use a variety of methods to achieve our objectives in this study, which include hand-crafted features using clinical insights, analysis of trees of encoded trajectories, machine learning techniques, deep learning (neural networks), as well as traditional statistical methods.
A key step in septoplasty is straightening out deviations in the septum. One approach we developed to assess skill during septoplasty is to estimate the plane of the septum (the septal plane) and detect motions perpendicular to it, or “strokes”.
In the figure above, vectors point in the direction of tool movement and are colored by the orientation of the tool. The tool changes orientation when the surgeon moves from one side of the septum to the other; the shift in color from purple to gold corresponds to the estimated septal plane, shown in blue.
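The septal-plane idea can be sketched as follows, assuming the tracker yields a sequence of 3D tool-tip positions. The least-squares plane fit (via SVD) and the perpendicular-motion test below are generic illustrations, not the exact estimator used in the study; the function names and the `angle_deg` threshold are hypothetical.

```python
import numpy as np

def estimate_septal_plane(points):
    """Fit a plane to 3D tool positions by least squares.

    Returns the centroid and the unit normal of the fitted plane.
    This is a generic SVD plane fit, not necessarily the study's
    estimator.
    """
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is
    # the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def stroke_directions(points):
    """Unit displacement vectors between consecutive tool samples."""
    deltas = np.diff(points, axis=0)
    norms = np.linalg.norm(deltas, axis=1, keepdims=True)
    return deltas / np.clip(norms, 1e-9, None)

def perpendicular_strokes(points, angle_deg=30.0):
    """Flag motions roughly perpendicular to the estimated septal
    plane (within `angle_deg` of the plane normal)."""
    _, normal = estimate_septal_plane(points)
    dirs = stroke_directions(points)
    cos_angle = np.abs(dirs @ normal)
    return cos_angle >= np.cos(np.radians(angle_deg))
```

Motions flagged by `perpendicular_strokes` are candidates for the “strokes” crossing the septal plane; in-plane elevation movements are left unflagged.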
3D maps of stroke locations from an expert and a novice surgeon during septoplasty. Note how the expert uses sparser, more efficient strokes than the novice. We use differences in stroke statistics to predict whether a surgeon is an expert or a novice.
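As an illustration of classifying skill from stroke statistics, the sketch below summarizes each procedure’s stroke locations by count and spatial dispersion, then labels new procedures by nearest class centroid in standardized feature space. The two features and the classifier are hypothetical simplifications; the published work uses a richer feature set and different models.

```python
import numpy as np

def stroke_features(stroke_locations):
    """Summary statistics of one procedure's 3D stroke locations:
    stroke count and spatial dispersion (mean distance to centroid).
    Hypothetical features chosen for illustration."""
    locs = np.asarray(stroke_locations, dtype=float)
    centroid = locs.mean(axis=0)
    dispersion = np.linalg.norm(locs - centroid, axis=1).mean()
    return np.array([len(locs), dispersion])

def nearest_centroid_classify(train_feats, train_labels, feats):
    """Label a procedure (e.g. 'expert' vs 'novice') by the nearest
    class centroid after per-feature standardization."""
    X = np.asarray(train_feats, dtype=float)
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-9
    Xz = (X - mu) / sd
    fz = (np.asarray(feats, dtype=float) - mu) / sd
    labels = sorted(set(train_labels))
    cents = {c: Xz[np.array(train_labels) == c].mean(axis=0)
             for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(fz - cents[c]))
```

The intuition matches the figure: experts tend toward fewer, more compact strokes, so even these two coarse statistics separate the classes on clearly distinct performances.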
Ahmidi, Narges, et al. “Automated objective surgical skill assessment in the operating room from unstructured tool motion in septoplasty.” International Journal of Computer Assisted Radiology and Surgery 10.6 (2015): 981–991.
Swaroop Vedula: svedula at jhu.edu
Masaru Ishii: mishii3 at jhmi.edu
A cataract is an opaque lens in the eye. It keeps light from reaching the back of the eye and is the most common cause of preventable blindness worldwide. The lens of the eye is constructed like an M&M®: an outer shell protects the lens material inside. To treat a cataract, surgeons replace the opaque lens with a clear artificial one: they make a smooth circular opening in the front of the shell, remove the opaque lens, and insert an artificial lens into the shell. All trainees in Ophthalmology must learn to competently perform cataract surgery. Typically, surgeons use an operating microscope for this procedure, so videos of the surgical field are readily available.
However, training surgeons across the globe is limited by unreliable and inefficient methods to assess surgeons’ skill. Multiple rating scales for technical skill in cataract surgery exist, but they are too time-consuming for routine use during training, and assessments vary across surgeons using the same rating scale. The overall goal of this study is to automate assessment of technical skill for cataract surgery so that it can scale globally.
By January 2019, we had prepared a data set of 100 cataract surgery procedures. The data include videos of the surgical field, annotations for tasks in the procedure and instruments used, and technical skill assessments using two rating scales. Our data set is also supplemented by the publicly available CATARACT-101 data set.
Our objectives in this study are to: (1) develop and validate automated methods to segment videos of cataract surgery into constituent tasks; (2) develop valid, scalable, automated methods to assess technical skill both for tasks in cataract surgery and for the overall procedure; (3) create novel methods for interpretable assessment of surgical technical skill using crowdsourcing and analytical techniques. We use traditional computer vision methods, deep learning, and crowdsourcing to analyze data in this study.
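One common, generic recipe for segmenting a procedure video into tasks (which may differ from our actual pipeline) is to predict a task label per frame and then smooth the label sequence temporally before reading off task segments. The sketch below assumes per-frame labels are already available, e.g. from a frame-level classifier, and uses a sliding-window majority vote; the window size is an arbitrary illustration.

```python
import numpy as np

def smooth_task_labels(frame_labels, window=9):
    """Temporally smooth per-frame task predictions with a
    sliding-window majority vote -- a generic post-processing step
    for video task segmentation, not the study's exact method."""
    labels = np.asarray(frame_labels)
    half = window // 2
    out = labels.copy()
    for i in range(len(labels)):
        win = labels[max(0, i - half): i + half + 1]
        vals, counts = np.unique(win, return_counts=True)
        out[i] = vals[np.argmax(counts)]  # most frequent label wins
    return out

def segments(labels):
    """Collapse a frame-level label sequence into
    (label, start_frame, end_frame) runs."""
    segs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segs.append((labels[start], start, i))
            start = i
    return segs
```

Smoothing suppresses isolated misclassified frames, so the recovered segments correspond to sustained tasks rather than classifier noise.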
Kim, Tae Soo, et al. “Crowdsourcing annotation of surgical instruments in videos of cataract surgery.” Intravascular Imaging and Computer Assisted Stenting and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis. Springer, Cham, 2018: 121–130.
Swaroop Vedula: svedula at jhu.edu
Science of Learning: Automated Coaching
Learning surgical technical skill, like learning in many other contexts, can be significantly transformed by technologies such as virtual reality and augmented reality simulation. Evidence shows that skill acquired in simulation transfers to the operating room. Despite these advances, the technology’s role in surgical skill training remains passive: it is limited to recording data on surgical performance. This project is based on the hypothesis that augmenting technology with human intelligence in a coaching paradigm can transform how surgeons acquire technical skill. The overall goal of this study is to develop scalable methodologies to build smart machines that augment human learning of complex skills. The end product of this work is an automated virtual coach for surgical skills training in the simulation laboratory.
Surgical activities are structured interactions between instruments/surgeons’ hands and objects or tissue in the surgical field. Some of these interactions are critical for successful execution of the activity, i.e., they may be considered critical task elements. Consequently, acquiring technical skill, and teaching it, involves mastering how to perform critical elements of the task.
Data captured in this study include narratives from expert surgeons about tasks they performed on a surgical skills simulator, along with instrument motion, videos of the surgical field, and events recorded by the simulator. Additional data to be captured include crowdsourced evaluation, using a structured rubric, of critical task elements in novice and expert performances of select tasks on the simulator. The specific aims of this study are to: (1) establish a generative framework to extract structured feedback from a corpus of peer evaluations of a surgical task; (2) develop a comprehensive virtual coach for surgical training that relies upon peer evaluation-based structured feedback; and (3) determine the effectiveness of a comprehensive virtual coach for learning a fundamental surgical task.
Swaroop Vedula: svedula at jhu.edu
GI Endoscopy: coming soon!
HIPEC: coming soon!
This project acknowledges the following sources of support:
- Science of Learning Institute
- Automated Assessment of the Effects of System Limitations Based Upon Data Collected from Multiple Training Centers (NIH, R. Kumar (PI), G. Hager, Dr. D. Yuh)
- Structure Induction for Manipulative and Interactive Tasks (NSF-IIS, Hager (PI), S. Khudanpur)
- CDI Type-II: Language Models for Human Dexterity (NSF, Hager (PI), S. Khudanpur, R. Vidal, R. Kumar, S. Hsiao, Dr. D. Yuh)
- CPS:Medium: Hybrid Systems for Modeling and Teaching the Language of Surgery, (NSF, Hager (PI), S. Khudanpur, R. Vidal, R. Kumar, Dr. G. Chen)
- Intuitive Surgical Inc.
Lab members: Click here to access the intranet.