Open Projects

There are typically a number of open undergraduate and master's projects. If you are interested in any of these projects, email the individual mentor. If you would like to work with us but do not see a project that interests you, email Prof. Hager (hager@cs.jhu.edu).



Project Name: Deploying an interactive user interface for visualizing electronic health records
Position: Software programmer, data analytics
Time: Summer and fall of 2016
Location: Homewood campus
Contact: Narges Ahmidi (nahmidi1@jhu.edu)
The Malone Center for Engineering in Healthcare (Homewood campus) has open projects for undergraduate or graduate students to join the development of a platform for visualizing healthcare data. The successful candidate will support work on developing tools to access an electronic medical records database using SQL commands, to parse the data to extract meaningful information, and to visualize the information in an interactive fashion using standard web-based visualization tools.
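
As a rough illustration of this workflow, the following minimal sketch runs a SQL query from Python, aggregates the results, and writes JSON in the array-of-objects form that a D3 chart typically binds to. It assumes a SQLite database with a hypothetical encounters table and diagnosis_code column; the actual database engine, schema, and front end for the project will differ.

# Minimal sketch: SQL query -> Python aggregation -> JSON for a D3 front end.
# The "encounters" table and "diagnosis_code" column are hypothetical.
import json
import sqlite3
from collections import Counter

def diagnosis_counts(db_path, out_path):
    """Count encounters per diagnosis code and write D3-friendly JSON."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT diagnosis_code FROM encounters "
            "WHERE diagnosis_code IS NOT NULL"
        ).fetchall()
    finally:
        conn.close()

    counts = Counter(code for (code,) in rows)
    # D3 charts commonly bind to an array of {name, value} objects.
    data = [{"name": code, "value": n} for code, n in counts.most_common()]
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)

if __name__ == "__main__":
    diagnosis_counts("ehr.db", "diagnosis_counts.json")

On the front end, D3's d3.json can load the resulting file and map the name/value pairs to an interactive chart.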

The ideal candidate would have the following skill levels: advanced in JavaScript (D3), intermediate in Python, and basic knowledge of SQL. We are looking for enthusiastic candidates who are willing to work on this project full time during the summer and potentially afterwards.

In addition, students with a strong background in machine learning and big data analysis and an interest in data science, analytics, and healthcare applications are also encouraged to contact us.


Project Name: Annotation & Analysis of Complex Human Activities in Intensive Care Units
Mentor: Austin Reiter (areiter@cs.jhu.edu)
Description: Intensive Care Units (ICUs) are hectic environments where dozens of doctors, nurses, and other personnel perform hundreds of tasks to ensure a patient's proper recovery. We are currently developing a system that uses 3D cameras to evaluate what tasks are happening in the ICU at any given time to help ensure a patient's safety.

We are looking for students to help annotate much of this data. This may entail temporal labeling of certain actions in video, spatial labeling of people in the room, or other tasks as we see fit. Depending on the student's skill level, this could also include developing algorithms for improving patient monitoring in the ICU.

Number of Students: Multiple students welcome


Project Name: Label and Analyze Surgeons and their Skills [CLOSED]
Mentor: Anand Malpani (anandmalpani@jhu.edu), Swaroop Vedula (vedula@jhu.edu)
Description: Work with surgeons, residents, and engineers in this multidisciplinary project (ongoing since 2007) on recent developments in the field of robotic surgery (using the da Vinci Surgical System). The project would involve an initial phase of labeling surgical videos for activity segments, i.e., the steps that surgeons/trainees follow while performing a surgical task. The next step would be to analyze the labeled sequences of steps to assess their skill at performing the particular tasks.
Skills: Cognitive skills :), as well as eventually some basic programming using your favorite language!
Number of Students: We have different kinds of surgical procedures, so we need more than just one person. We are already mentoring 4 undergraduates and can accommodate 1-2 more undergraduate students (no CS background necessary!)


Project Name: Simulation Development for da Vinci (Robotic) Surgery Training [ON HOLD]
Mentor: Anand Malpani (anandmalpani@jhu.edu)
Description: This project is motivated by the need for an open-source simulation framework for training in robotic surgery. An open-source framework gives easy access to information about the environment in the training task, allowing machine learning algorithms to perform automated assessment of surgical skills more reliably. Because the environment, including the tools and other interacting objects in the task, is simulated by the computer, automated tools can obtain 100% accurate (ground-truth) information on the positions and orientations of these objects.
Skills:
* Object model development: SolidWorks or similar (intermediate)
* Packaging the simulation task: C++ (intermediate) AND Python (basic) AND robot kinematics (beginner)
Students: Upper Class Undergrads, Masters Students


Mentor: Colin Lea (colincsl@gmail.com)
Project ideas:
Surgical Skill Analysis: Our recent work on modeling surgical training tasks shows that we can accurately detect important structures in video, such as the set of insertion point locations in a suturing task. We hypothesize that the skill of the user can be distinguished by modeling the movement of these points throughout the procedure. This project has the following goals:
(1) annotate a set of suturing images to verify the accuracy of our algorithm on various datasets
(2) use (and possibly extend) our object recognition tools to measure the deformation of insertion points over time (a rough measurement sketch follows this list)
(3) model how this movement correlates to the user’s skill level
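
For goal (2), here is a minimal sketch of the kind of measurement involved, assuming the insertion points have already been detected or annotated in every frame; the array shape and function name are illustrative, not part of our existing tools.

# Minimal sketch: frame-to-frame displacement of tracked insertion points.
# tracks is a hypothetical array of shape (num_frames, num_points, 2)
# holding the (x, y) pixel location of each insertion point in each frame.
import numpy as np

def point_displacements(tracks):
    """Return the per-frame Euclidean displacement of each point, shape (T-1, N)."""
    tracks = np.asarray(tracks, dtype=float)
    return np.linalg.norm(np.diff(tracks, axis=0), axis=-1)

Summary statistics of these displacements (means, variances, trends over the procedure) could then serve as inputs when modeling skill level in goal (3).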

Multi-modal Activity Recognition: We have recently developed algorithms for modeling how humans interact with the environment in both industrial and surgical settings. In this project you will set up infrastructure for evaluating these algorithms on recent benchmarks from the computer vision community. This project has the following goals:
(1) create Python tools for experimenting with datasets (e.g., Cornell CAD-120, Georgia Tech Toy Planes)
(2) develop a set of appropriate features that model human-object interactions
(3) evaluate our algorithms on these datasets and compare with those in the literature (a minimal evaluation sketch follows this list)
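
For goal (3), here is a minimal sketch of one common evaluation measure for per-frame action labels, frame-wise accuracy; the dataset loading and feature extraction code is project-specific and not shown.

# Minimal sketch: frame-wise accuracy for an action-label sequence,
# comparing one predicted label per frame against the ground truth.
import numpy as np

def framewise_accuracy(pred, gt):
    """Fraction of frames whose predicted label matches the ground truth."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    assert pred.shape == gt.shape, "sequences must be the same length"
    return float(np.mean(pred == gt))

if __name__ == "__main__":
    gt = [0, 0, 1, 1, 1, 2, 2]    # hypothetical per-frame action labels
    pred = [0, 0, 1, 2, 1, 2, 2]
    print("frame-wise accuracy: %.2f" % framewise_accuracy(pred, gt))

Segmental measures (e.g., an edit distance over the sequence of action segments) are also common in this literature and could be reported alongside frame-wise accuracy.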



Project Name: Large-Scale Object Recognition, Detection and Pose Estimation
Mentor: Chi Li (cli53@jhu.edu)
Description:
We are working on a visual scene perception system for task automation and activity recognition in collaborative human-robot tasks. We would like to expand the vision pipeline by incorporating object recognition capabilities from the Point Cloud Library (PCL) and other state-of-the-art methods.
Objectives:
(1) create wrappers for existing object recognition modules in PCL (requires C++ experience)
(2) expand our pipeline for simultaneously recognizing and tracking objects
(3) evaluate and speed up these approaches for real-time use in our system
(4) set up a benchmarking dataset for RGBD-based object recognition, detection, and pose estimation
Skills:
C++, Computer vision or machine learning experience


Project Name: Language of Surgery
Mentor: Swaroop Vedula (vedula@jhu.edu) and Narges Ahmidi (narges.ahmidi@gmail.com)
Description: We are looking for someone to assist with data collection and associated research in a cutting-edge research study with potential for substantial impact on aspects of graduate surgical education in otolaryngology. The ideal candidate will have either experience in a clinical setting (as a student or trainee) or an interest in clinical work, along with superior social and communication skills, discipline, and time management ability.

The candidate will attend surgeries at different operating rooms affiliated with Hopkins, coordinate sterilization of operating equipment with nursing staff, and set up data recording equipment and capture data following previously established procedures, all the while respecting the sterile environment in the operating room.

Data collection is expected to start in September 2015. On average, this will be a part-time effort, with data collection scheduled for multiple days in some weeks and not at all in others.

Candidates interested in participating in research related to the analysis of the collected data are especially encouraged to apply.

Interested candidates should send a resume to: vedula@jhu.edu or narges.ahmidi@gmail.com
