Robotics, System Skill and Data Acquisition
Data is collected and archived from da Vinci robotic surgery systems using JHU's recording system, which interfaces with the da Vinci vision cart and Application Programming Interface (API). The quantitative measurements include motion data, system events, and synchronized video data ("procedure data"). The da Vinci API provides tool, camera, and master handle motion vectors, including joint angles, Cartesian position and velocity, gripper angle, and joint velocity and torque data. In addition, the API streams system events as they occur. Our recording system can also acquire multiple video channels (left and right cameras, and the OR scene when available), time-stamped and synchronized with the corresponding API data.
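Synchronizing streams sampled at different rates is the core of this kind of recording pipeline. The sketch below pairs each video frame with its nearest-in-time kinematic sample; the stream names, rates, and layout are illustrative assumptions, not the actual JHU recording-system schema.

```python
# Sketch: aligning API motion samples with video frames by timestamp.
# Rates and variable names are illustrative assumptions only.
from bisect import bisect_left

def nearest_sample(timestamps, t):
    """Return the index of the motion sample closest in time to t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # choose the neighbor with the smaller time gap
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Hypothetical streams: kinematics at ~100 Hz, video at ~30 fps.
motion_ts = [k * 0.01 for k in range(1000)]   # 10 s of kinematic samples
video_ts = [k / 30.0 for k in range(300)]     # 10 s of video frames

# Pair each video frame with its nearest kinematic sample index.
pairs = [(f, nearest_sample(motion_ts, t)) for f, t in enumerate(video_ts)]
```

In a real system the two clocks must first be brought to a common time base; here both streams are assumed to share one clock.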
Additional data sets from other manipulative tasks are also under construction.
The data set repositories, acquisition process, and data set descriptions can be found under the private section.
Hybrid Dynamic Systems
Research in hybrid dynamical systems has focused on effective modeling and identification. Our hypothesis is that dynamical systems can serve as an underlying dictionary of motion models, with activities expressed as sequences of models drawn from this dictionary. In particular, our work focuses on sparse systems as a unifying theory for modeling and identification. A detailed description can be found on the Hybrid Dynamic Systems Research webpage.
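The dictionary idea can be made concrete with a toy example: fit a simple linear dynamic model to each motion segment and label new segments by the closest dictionary entry, so an activity becomes a sequence of model symbols. This is an illustrative sketch, not the group's identification algorithm.

```python
# Sketch: a motion "dictionary" of scalar linear models, x[t+1] ≈ a · x[t].
# A toy illustration of activities as sequences of dictionary models.

def fit_gain(segment):
    """Least-squares estimate of a in x[t+1] = a * x[t] for one segment."""
    num = sum(x * y for x, y in zip(segment, segment[1:]))
    den = sum(x * x for x in segment[:-1])
    return num / den

def label(segment, dictionary):
    """Assign a segment to the dictionary model with the closest gain."""
    a = fit_gain(segment)
    return min(dictionary, key=lambda name: abs(dictionary[name] - a))

# Build a two-model dictionary from example training segments.
decay = [1.0 * 0.5 ** k for k in range(6)]    # contracting motion, a = 0.5
growth = [0.1 * 1.5 ** k for k in range(6)]   # expanding motion, a = 1.5
dictionary = {"decay": fit_gain(decay), "growth": fit_gain(growth)}

# A new activity is then expressed as a sequence of dictionary symbols.
trial = [[2.0 * 0.5 ** k for k in range(5)], [0.2 * 1.5 ** k for k in range(5)]]
sequence = [label(seg, dictionary) for seg in trial]
```

A sparse-identification approach would additionally select which few models (and model terms) are active, rather than assuming the segmentation is given as it is here.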
Language Modeling
We believe that, although there is great diversity in the expression of a task through motion, there is also an invariant representation of the "semantic" content of tasks. Our work in language modeling is focused on the development of general-purpose methods for extracting invariant content from diverse time sequences expressing the same task. There is representational overlap in our low-level modeling with hybrid dynamical systems, but the techniques for extracting structure differ.
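One classical way to compare time sequences that express the same task at different speeds is dynamic time warping (DTW), which aligns sequences before measuring their difference. The sketch below is an illustrative choice, not the group's specific method.

```python
# Sketch: dynamic time warping (DTW) distance between 1-D sequences,
# shown as one speed-invariant way to compare task executions.

def dtw(a, b):
    """DTW distance with an absolute-difference local cost."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            d[i][j] = abs(x - y) + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(a)][len(b)]

# The same ramp executed slowly and quickly still matches exactly...
slow = [0, 1, 1, 2, 2, 3, 3, 4]
fast = [0, 1, 2, 3, 4]
# ...while a different motion does not.
other = [4, 3, 2, 1, 0]
```

Here `dtw(slow, fast)` is zero despite the differing lengths, while `dtw(slow, other)` is not; the invariance to execution speed is the property the example is meant to show.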
Assessment and Automation
The time-series models we develop represent the content of a task execution. One of our goals is to develop new ways to associate, or measure the distance between, executions in a way that reflects the underlying skill of the subject. For example, by building a library of task executions together with expert classifications or rankings of skill, we strive to create automated methods for correctly classifying or ranking subsequent (unseen) trials. At the same time, time-series models can also be used to provide human augmentation: algorithms for online identification can determine *what* is happening, and learned models of motion can be used to augment that work.
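Given a distance between executions, the library-based classification described above can be as simple as nearest-neighbor lookup. In this sketch the features and labels are hypothetical stand-ins (e.g., summary statistics of a trial), not the learned time-series distances themselves.

```python
# Sketch: nearest-neighbor skill classification from a labeled library of
# trial executions. Features and labels are hypothetical examples.
import math

def distance(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(trial, library):
    """Label an unseen trial by its nearest expert-labeled execution."""
    nearest = min(library, key=lambda ex: distance(trial, ex[0]))
    return nearest[1]

# Library of (feature vector, expert skill label) pairs. The features are
# assumed summaries such as (completion time in s, total path length in cm).
library = [
    ((45.0, 120.0), "expert"),
    ((50.0, 130.0), "expert"),
    ((90.0, 260.0), "novice"),
    ((85.0, 240.0), "novice"),
]

unseen = (88.0, 250.0)
skill = classify(unseen, library)
```

Replacing the Euclidean distance with a model-based or alignment-based distance between full executions gives the skill-reflecting comparison the paragraph describes.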