Monthly Archives: April 2014

Development Meeting with Trond (2)


The goal of this meeting was to present the gesture code we had been assigned in the previous week and integrate it into Trond’s framework.


We successfully added recognition code to the main controller of the application. In addition, we integrated pinch-zooming fully into the graphics engine, which allowed us to resize the on-screen model with a single API call.
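Trond’s resize API itself isn’t reproduced here, but a sketch of how a Leap pinch reading might be turned into the scale value passed to that call (the function and parameter names are ours, purely illustrative):

```javascript
// Map a 0..1 pinch reading from the Leap onto a model scale before
// handing it to the engine's resize API. Names here are illustrative.
function pinchToScale(pinchStrength, minScale, maxScale) {
  // Clamp to the Leap's nominal 0..1 range, then interpolate linearly:
  // an open hand (0) gives maxScale, a full pinch (1) gives minScale.
  var s = Math.min(1, Math.max(0, pinchStrength));
  return maxScale + (minScale - maxScale) * s;
}
```

In the running application this value would simply be fed to the engine’s resize call each frame.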


Gesture recognition code added to handle detection of Leap gestures


Point Breakdown


Each team member worked on gesture code or conferred with Trond about integrating our Leap Motion code into the graphics manipulation engine.

All members equally contributed to the reading of Trond’s code base and helped him to integrate gesture recognition.







One big, happy development family.

Design Document

For this milestone, we created our application design document. We want to provide explicit information about the requirements for our project and how it is put together. Our design spec contains several main parts: problem statement, project scope, platform specifics, software specifics, personas, use cases, and application details. These specifications give us clearer direction as we develop the application.



We achieved this milestone by gathering and formulating what we have done throughout the past 15 weeks. After conducting user research with our hi-fi prototype, we had all the information we needed to design our application. While exploring the Leap Motion and playing with 3D models, we also found some limits of the technologies we are using. For example, the groping motion to take off layers does not work with the Leap Motion, because the sensor cannot recognize users’ fingers when they get too close to the palm; the Leap Motion also cannot differentiate the swiping motion from horizontal movement of the cursor.

Therefore, throughout this milestone we have iteratively redesigned our application. The dev team and the design team collaboratively discussed how to implement the most user-friendly design under these technology constraints. While the design of the application eventually has to accommodate our limited time and budget, this deliverable helped us reassess our project scope and priority tasks.

Point Breakdown

The tasks were equally distributed to all members.

Connie(5): Covered the scope of the project and assumption of the users.

Ashish(5): Covered the software specification.

Ted(5): Covered users and persona.

Alyssa(5): Covered application details.

Ngoc(5): Covered hardware specifications.

Deliverable: Design Document


High-Fidelity User Testing

Our design for the 3D interactive interface is on the verge of usable design, but we can’t be sure until we test! To test the high-fidelity prototype and diagnose any possible design flaws, two anatomy students who had never been exposed to the project or interface were selected. Both students, Rebecca and Susan, were in graduate school for a medical-related field, and had taken intense anatomy courses in their studies. Both were students at the University of Washington.

User Testing Method 

Both users were tested at a local library with minimal distractions and a quiet work area. The high-fidelity prototype consisted of two parts: one to test the gestures, and one to test the user interface. The gestures were tested with a Leap Motion web prototype, and the user interface was tested with video simulations. Each user tested each part consecutively during their testing session. Users were compensated with candy.

The Leap Motion gestures were tested with a coded prototype built in JavaScript with the Leap Motion API and three.js Library. Existing tutorial code for Leap Motion gestures, and examples from Robbie Tilton’s Reflective Prism demo were used to hack the prototype together. The gestures tested on the interactive demo were rotation, zoom, and pointing.
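The rotation gesture in that demo was hand-rolled; in rough terms it maps the palm’s horizontal travel onto a rotation of the three.js model. Something like the sketch below, where the dead zone and sensitivity values are illustrative, not the prototype’s actual numbers:

```javascript
// Turn a frame-to-frame palm movement (deltaX, in mm from the Leap)
// into a rotation delta for the model. A dead zone suppresses jitter.
function palmDeltaToRotation(deltaX, sensitivity, deadZone) {
  // Ignore tiny movements inside the dead zone so the model doesn't shake.
  if (Math.abs(deltaX) < deadZone) return 0;
  // Positive delta (hand moving right) spins the model one way,
  // negative the other; sensitivity scales mm of travel to radians.
  return deltaX * sensitivity;
}
```

Tuning these two constants is exactly the kind of thing the user testing below gave us feedback on.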

To test the sanity of the menu options, users were asked to interact with video prototypes mirroring actions of the interface. The user would point to the screen, as if it were the 3D interface, while the tester moved the mouse to act as the system. The user would be guided through a set script carefully planned with the video. The video would reflect the ideal actions that would be performed by the user. While one tester guided the user and acted as the system, the other tester took notes or recorded the testing session.

User Testing Results

After testing users on the gestures and UI of the interface, several main issues need to be revised in the prototype before the final specifications are drawn for the Ghost Anatomy Project. Users struggled with the harsh rotation gesture, which took an enormous amount of effort and precision to get working. The rotation gesture should be much smoother and more natural to control. Users also asked for swift interactions and shortcuts; for example, Rebecca asked for a one-motion shortcut to search for and select body parts. Most users had to be told how gestures worked in order to interact with the interface. Some way to explain the gestures to the user, or to indicate them with affordances in the interface, would be most beneficial.

The user interface could also be improved to better suit the users. Users did not notice the menu, and did not look to it to perform functions during testing, such as turning labeling on or off. Both asked for a visible zoom indicator. Since the UI interactions were only simulated, the exact gestures and interactions with the UI will have to be tested and refined as the actual interface is built.

Next Steps

For this step, each member was involved in the user testing and report compilation. With these results, the design team will iterate on the high-fidelity prototype again and refine it for the final Design Document detailing the design of the application.

Alyssa (5)

Connie (5)

Ted (5)

Deliverable: High-Fidelity User Testing Report

Development Meeting with Trond


Accomplishing an incredible technical feat such as rendering and displaying a human body and identifying all of its individual components is no small endeavor.

For this reason, we requested the help of Trond Nilsen, a quite brilliant software engineer. Trond is a graduate student whose research is closely aligned with ours. He has already built model rendering and viewing into his application. We are currently holding weekly meetings to work on upgrading his current project to support the Pepper’s Ghost illusion/interface. The process will require splitting the render into multiple views and passing gestures in as function calls to manipulate his view.



Every Monday, we meet with Trond. After the first session we were able to add all of the math calculations and adjustments to Trond’s engine to allow four separate Three.js cameras to view a single model.
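Trond’s engine code isn’t reproduced here, but the math behind the four-view split boils down to placing one camera every 90 degrees around the model, all aimed at the origin. A sketch (the helper name and structure are ours, not Trond’s):

```javascript
// Compute four camera positions spaced 90 degrees apart on a circle of
// the given radius around the model at the origin, all at eye level 0.
function fourCameraPositions(radius) {
  var positions = [];
  for (var i = 0; i < 4; i++) {
    var angle = i * Math.PI / 2; // 0, 90, 180, 270 degrees
    positions.push({
      x: radius * Math.sin(angle),
      y: 0,
      z: radius * Math.cos(angle)
    });
  }
  return positions;
}
```

In three.js, each position would seed a `THREE.PerspectiveCamera` aimed at the model with `camera.lookAt(scene.position)`, and each camera would draw into its own region of the canvas via `renderer.setViewport` and `renderer.setScissor`.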

After setting up a local web server, the models can be fed in and viewed as such.

Hologram split

View Split for Hologram Use

If you were to place a prism on the screen with this application running, you would see the illusion of a floating, 3D human body.


During the next meeting, we were able to integrate the Leap Controller and get a data stream from the pinch-to-zoom gesture. Before the next meeting, we hope to have all of the gestures implemented and added to the application.
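The pinch data arrives once per frame; a sketch of the kind of handler we wired up, assuming the leap.js client API (`Leap.loop`, `tipPosition`), with the pinch measure derived simply from the distance between two fingertips:

```javascript
// Euclidean distance between two fingertip positions, each an
// [x, y, z] array in millimetres as leap.js reports them.
function tipDistance(tipA, tipB) {
  var dx = tipA[0] - tipB[0];
  var dy = tipA[1] - tipB[1];
  var dz = tipA[2] - tipB[2];
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Browser-only wiring: Leap.loop calls the handler ~60 times a second.
if (typeof Leap !== 'undefined') {
  Leap.loop(function (frame) {
    if (frame.pointables.length >= 2) {
      var d = tipDistance(frame.pointables[0].tipPosition,
                          frame.pointables[1].tipPosition);
      // d shrinks as the fingers pinch together; feed it to the zoom call.
    }
  });
}
```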



All of the new application code we created was pushed to a GitHub repository.

Open source and available here.

To download and test, serve the files with an Apache or nginx web server.

Your browser must support WebGL to view the demo.
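For local testing, a minimal nginx server block is enough; the port and root path below are illustrative, not part of the repository:

```nginx
# Serve the cloned repo as static files; WebGL needs no special config.
server {
    listen 8080;
    root /var/www/ghost-anatomy;  # adjust to wherever you cloned the repo
    index index.html;
}
```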


Point Breakdown

Each team member spent some time with Trond throughout each work period and helped integrate our ideas with Trond’s framework.

All members equally contributed to the reading of Trond’s code base and helped him to integrate view splitting and leap controls.






Timeline Revision

We’ve recently been talking about collaborating with Trond Nilsen, a graduate student at the UW. He is working on a project that is very similar to ours, and aligning our goals with his will let us achieve a much more outstanding final product. In this collaboration, Trond will receive a display medium that is effective for the purpose of his project, and we will receive additional development effort as well as a substantially developed starting point. He brings much more experience than our team has in working with 3D models for the web, as well as a solid understanding of how to structure the final application.

This deliverable advances our project by giving us a clearly defined timeline that includes Trond and accounts for development with him on the team.

Ashish (5), Ngoc (5): Work together to revise timeline.



High-Fidelity Mockup

To refine our interface design, we needed to create a high-fidelity prototype that looks and functions exactly as we want our final application to. For our hi-fi mockup, we decided to pull together all of the gestures that we compiled through our lo-fi user testing and have our users perform tasks using an interface that is nearly identical to our final product. The results from our upcoming hi-fi prototype testing will help us diagnose design details and inform our final design.

So far, we’ve constructed a prototype based on one of the demos that we did for our tutorial. We integrated the Leap Motion Interface with Three.js in order to create a working experience prototype that will allow users to fully interact with the rendered model. Using code samples from the Three.js and Leap Motion tutorials, and our knowledge of programming, we created a web-based demo hosted on Ted’s student server. Our demo will not be projected onto our 3D volumetric display yet; the purpose of this mockup is to show the users the actual functionality of our application. An example of our initial attempt can be found below:

Click here to view our first demo

One of the biggest challenges that we ran into was being able to render an entire 3D model of the human body. A vital task that we plan on having for our hi-fi mockup is the ability to “take off” layers. We were able to render a single body part (the mandible; see below) but rendering multiple body parts at once was an unexpected challenge for us. We realized that planning time for problems in advance was a wise decision to make.
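Once multiple parts do load, “taking off” a layer can be as simple as tagging each mesh with a layer name and toggling visibility. A sketch of that idea (the tagging scheme is our own assumption, not a three.js API):

```javascript
// Show or hide every mesh tagged with the given layer name.
// Each entry is expected to carry a `layer` tag and a `visible` flag
// (three.js honours Object3D.visible when rendering).
function setLayerVisible(meshes, layerName, visible) {
  var changed = 0;
  for (var i = 0; i < meshes.length; i++) {
    if (meshes[i].layer === layerName) {
      meshes[i].visible = visible;
      changed++;
    }
  }
  return changed; // how many meshes were toggled
}
```

Peeling off the skin layer would then be one call, e.g. `setLayerVisible(bodyMeshes, 'skin', false)`.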


Click here to view our second demo

We were unable to construct the entire hi-fi prototype in time because of this issue. To work around this challenge, we created videos of storyboarded interactions with the interface. The videos mimic the interface and are designed to be played when users are asked to perform specific, hard-coded functions. These videos will help us diagnose interface and interaction problems. What we have so far is a foundation we can build on to make the user feel like they’re actually using the application.


Click here to view our prototype videos

During our actual user study, we will follow procedures similar to those of our lo-fi study. We plan to have gestures for every major interaction and have the user perform actions such as taking off a layer or accessing the quiz to study anatomy material. We plan to finalize our High-Fidelity Mockup by next Thursday, when we have our first user study scheduled. The prototype will let us push our designs to final refinement before heavy development begins.

For the prototype, Ted developed the web prototype (5 points), Alyssa developed the prototype videos (5 points), and Connie prepared the user testing (5 points). The next step for each of these team members is to pull together for user testing, further prototype refinement, and finalizing the Design Spec.

Deliverable: High-Fidelity Mockup

Extra: Hi-Fi Prototype Draft