Timeline Revision

We’ve recently been talking about collaborating with Trond Nilsen, a graduate student at the UW. He is working on a project that is very similar to ours, and by aligning our goals with his we can achieve a much stronger final product. Through this collaboration, Trond will receive a display medium that is effective for the purposes of his project, and we will receive additional development effort as well as a substantially developed starting point. He brings far more experience than our team has in working with 3D models for the web, along with a solid understanding of how to structure the final application.

This will advance our project by giving us a clear definition of our timeline, one that includes Trond and adjusts for development with him on the team.

Ashish (5), Ngoc (5): Work together to revise timeline.




High-Fidelity Mockup

To refine our interface design, we needed to create a high-fidelity prototype that looks and functions as closely as possible to how we want our final application to behave. For our hi-fi mockup, we decided to put together all of the gestures that we compiled through our lo-fi user testing and have our users perform tasks using an interface that will be nearly identical to our final product. The results from our upcoming hi-fi prototype testing will help us diagnose design details and inform our final design.

So far, we’ve constructed a prototype based on one of the demos that we did for our tutorial. We integrated the Leap Motion interface with Three.js to create a working experience prototype that allows users to fully interact with the rendered model. Using code samples from the Three.js and Leap Motion tutorials, along with our own programming knowledge, we created a web-based demo hosted on Ted’s student server. Our demo is not yet projected onto our 3D volumetric display; the purpose of this mockup is to show users the actual functionality of our application. An example of our initial attempt can be found below:
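To give a flavor of the glue code involved, here is a minimal sketch of how a Leap Motion palm position might drive the rotation of a Three.js model. The value ranges and function names are our own placeholders for illustration, not part of either library or our final code:

```javascript
// Sketch: mapping a Leap Motion palm position to a model rotation angle.
// The Leap reports palm positions in millimeters relative to the device;
// here horizontal motion (x) drives rotation about the model's y-axis.
// The -200..200mm range below is an illustrative guess, not a tuned value.

// Map a value from one range to another, clamped to the output range.
function mapRange(value, inMin, inMax, outMin, outMax) {
  var t = (value - inMin) / (inMax - inMin);
  t = Math.max(0, Math.min(1, t)); // clamp to [0, 1]
  return outMin + t * (outMax - outMin);
}

// Convert a palm x-position (roughly -200mm..200mm over the device)
// into a rotation angle in radians (-PI..PI).
function palmXToRotationY(palmX) {
  return mapRange(palmX, -200, 200, -Math.PI, Math.PI);
}

// In the browser this would run once per Leap frame, e.g.:
//   Leap.loop(function (frame) {
//     if (frame.hands.length > 0) {
//       model.rotation.y = palmXToRotationY(frame.hands[0].palmPosition[0]);
//     }
//   });
```

With a mapping like this, holding a hand centered over the device leaves the model facing forward, and sliding the hand left or right spins it.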

Click here to view our first demo

One of the biggest challenges that we ran into was rendering an entire 3D model of the human body. A vital task that we plan to support in our hi-fi mockup is the ability to “take off” layers. We were able to render a single body part (the mandible; see below), but rendering multiple body parts at once was an unexpected challenge for us. We realized that planning time for problems in advance was a wise decision.


Click here to view our second demo

We were unable to construct the entire hi-fi prototype in time because of this issue. To work around this challenge, we created videos of storyboarded interactions with the interface. The videos mimic the interface and are designed to be played when users are asked to perform specific, hard-coded functions. These videos will help us diagnose interface and interaction problems. What we have so far will be a foundation that we can use to make users feel like they’re actually using the application.


Click here to view our prototype videos

During our actual user study, we will follow procedures similar to those of our lo-fi mockup. We plan to have gestures for every major interaction and have the user perform actions such as taking off a layer or accessing the quiz in order to study anatomy materials. We plan to finalize our High-Fidelity Mockup by next Thursday, when we have our first user study scheduled. The prototype will allow us to push our designs to final refinement before heavy development begins.

For the prototype, Ted developed the web prototype (5 points), Alyssa developed the prototype videos (5 points), and Connie prepared the user testing (5 points). The next step for each of these team members is to pull together for user testing, further prototype refinement, and finalizing the Design Spec.

Deliverable: High-Fidelity Mockup

Extra: Hi-Fi Prototype Draft


Gather Participants for High-Fidelity Prototype Testing

Participation Proclamation! (Part 2)
Group Members Responsible: Alyssa, Connie, Ted

“We… w-we can’t give up,” stuttered Ted, who was on the verge of passing out after days of coding nonstop in order to meet the team’s milestone deadline. It wasn’t just Ted — the entire team had been working through the tutorial code that Ngoc and Ashish had valiantly compiled so that they could learn the basics of Three.js and the Leap Motion API.

Connie was curled up into a ball in the corner of the iSchool T.E. Lab of Mary Gates Hall. Alyssa was shaking, perhaps from the unfathomable amount of coffee that she had consumed over the last 24 hours in order to maintain her zombified state-of-mind. You could see her bloodshot eyes from a mile away. But Alyssa carried on, out of sheer willpower and motivation.

“Hang in there guys! We’re almost done,” encouraged Alyssa, who was the only, and not to mention most optimistic, project manager the Ghost’s Anatomy team could ever ask for. “I know you guys are all tired from coding all night long, but once we put together our lo-fi study results, we only have one thing left to do!”

“Wuh… one thing left to do?” reiterated Ted.

“Yeah… one thing left…” affirmed Alyssa. And then thump. Alyssa was down. Ted hesitated. He was lost, confused, and broke down, shaking his head in denial while scratching his scalp in utter defeat. And then there was silence.

Time stopped for a brief moment. Not a single creature in the room moved a muscle. The room gave off an atmosphere of despair. It was as if your roommates were stealing all of the internet bandwidth while you sat in your room, alone, waiting for the next episode of House of Cards, on Netflix, to buffer. Sheer and utter torture.

And then it hit them. Connie, crawling toward the team like the boy from The Grudge, gathered the remnants of her energy and used the last of it to give the team one final push. “We only have one thing left to do!” she rejoiced. “We need to gather participants for our hi-fidelity prototype testing!”

And so our quest continued. We gathered a list of possible participants and made sure that our pool was as diverse as possible, ranging from undergraduates to instructors to medical students; from nyancats to octopi to doges. Feel free to take a look at our deliverable for this milestone by clicking below!

Deliverable: Email Body Text

I hope you enjoyed reading both chapters of the Participation Proclamation Saga.
Until we meet again!

Ted T.


Design API Specification and New Directions

Linking the Leap Motion API and the three.js library requires a lot of integration and complicated development. To abstract the functions we need for our application, we initially planned to write an API that masks the backend linkage between the two resources and allows us to make simple function calls in order to develop certain features.

Today, we met with an amazing researcher and developer named Trond Nilsen, who is working on a related anatomy application for his dissertation. He is using 3D models in conjunction with the three.js library, which is exactly what we need to do for our project, and his project contains a great deal of the core functionality we would need for our application. To our delight, Trond has vocalized his desire to collaborate with The Ghost Anatomy Project in order to gain a more user-centric and displayable interface. We are incredibly excited to collaborate with such an experienced developer with extensive knowledge of both the field of anatomy and the development of anatomy-related applications.
While we were originally going to build an API to mask the backend code combining Leap Motion and three.js, Trond has agreed to complete most of the three.js code, leaving it to our team to integrate the Leap Motion API. Since we are no longer dealing with three.js ourselves, we no longer need to write an API or a Design API Specification detailing its features. In response to our shifting development direction, we have created a document expressing the details of our collaborative development with Trond.

Members involved: Ashish, Ngoc

Gather Participants for Low-Fidelity Prototype Testing

Participation Proclamation! (Part 1)

“What a sexy idea! Our Low-Fidelity Prototype will blow everyone’s minds!” exclaimed Ted, who was mesmerized by Alyssa’s creative design concept. The research team had gathered around the lone rectangular table in the iSchool T.E. Lab of Mary Gates Hall for their weekly researcher meeting. They had but one goal: to figure out how they should conduct the lo-fi prototype testing for their Ghost’s Anatomy project.

After a brief moment of relief, laughter, and joy, Alyssa raised one of the most thought-provoking questions ever brought up during our meetings.

“But now what are we supposed to do?”

And then there was a slight pause. What lasted for seconds seemed to last for minutes; hours; no, days. All three researchers gazed into the yonder while trying to figure out what the team’s next course of action should be. Not only could they hear the cooling fans roaring throughout the room, but they could feel every bit of wind tingling through their bodies, almost as if they were holograms themselves.

And then it hit them. Connie stood up. “I know! I have it!” she rejoiced. “We need to gather participants for our low-fidelity prototype testing!”

And so our quest began. We gathered a list of possible participants and made sure that our pool was as diverse as possible, ranging from undergraduates to instructors to medical students; from nyancats to octopi to doges. Feel free to take a look at our deliverable for this milestone by clicking below!

Members involved: Alyssa, Ashish, Connie, Ngoc, Ted

Deliverable: Email Body Text

WARNING: Spoiler alert! (highlight below to view content)
In the end, our quest was a success! We were able to gather not one, not two, but five spectacular participants. Our participant demographics ranged from students to instructors to even Teds!

Stay tuned for another exciting episode of Participation Proclamation.
In and out!

Ted T.


Low-Fidelity User Testing

After the lo-fi user tests, we compiled the most commonly adopted gestures and the additional functions suggested by our interviewees.

For zooming in and out, most people suggested pinching with two fingers, or grabbing and releasing. For rotating, the most common gestures were grabbing the model and turning the hand, a sideways swipe with a vertical palm, and a one-finger sideways swipe. For taking off layers, the most common gesture was grabbing a layer and pulling it away. For isolating a body part, some people suggested using a grabbing motion to select the part and then, once it is highlighted, a pulling motion to take off the layer; others suggested using a menu or a cursor to select a body part and swiping to the left to peel it off.

For navigating to the quiz, most people suggested going through a menu and choosing a specific category. Some preferred to swipe the menu up from the bottom, while others suggested swiping left from the top-right corner, and most would like to be able to access the menu at any time to switch categories. As for the quiz content, open-ended questions, multiple choice, and true-or-false were all popular among the interviewees. Most expected a variety of question types instead of simply being asked the names of body parts; more advanced questions might, for example, trace the pathway of a hormone: where it is produced, where it is secreted, and what it influences.
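Several of these gestures reduce to simple geometry on the fingertip positions the Leap reports. As a rough sketch (the threshold and scale range below are illustrative guesses, not tuned values), the suggested two-finger pinch could map the thumb-to-index distance onto a zoom factor:

```javascript
// Sketch: turning a two-finger pinch into a zoom factor, as suggested by
// our interviewees. Fingertip positions are [x, y, z] arrays in millimeters,
// the shape the Leap JavaScript API uses for positions.

// Euclidean distance between two 3D points.
function distance(a, b) {
  var dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Map the thumb-to-index distance onto a zoom scale:
// fingers touching (~0mm)        -> zoomed out (0.5x)
// fingers spread (~100mm or more) -> zoomed in (2x)
function pinchToZoom(thumbTip, indexTip) {
  var d = Math.min(distance(thumbTip, indexTip), 100);
  return 0.5 + (d / 100) * 1.5;
}
```

In the real application, a factor like this would scale the model (or dolly the camera) each frame while a pinch is in progress.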

Additionally, some interviewees suggested functions that they would like to use in our application. A couple of interviewees said it would be helpful to show the names of body parts when they point at them. They would also like to be able to highlight body parts during a group study. Some said they would like to search for a specific body part through a search bar, perhaps with a physical keyboard. Lastly, a toolbar was requested by many interviewees, so that users can easily switch between functions such as showing names or zooming in and out.

Having gathered the most intuitive functions and gestures, we will be able to design our hi-fi prototype, which will be closer to our real application.

Here are the links to some of our user testing videos:

Mariko: https://www.youtube.com/watch?v=A3Z-yoO4WAo

Kayla: http://www.youtube.com/watch?v=n3GszxtmArg&feature=em-share_video_user

Annie: http://www.youtube.com/watch?v=jV2zSbkkNig&feature=em-share_video_user

Members involved: Alyssa, Ashish, Connie, Ngoc, Ted

Deliverable: User Testing Results



Tutorial Fun!

Hello everyone,

We are really excited to dive in and start writing code for our application. But first, we need to take baby steps before we can start running. With none of the team members having significant knowledge of our two main frameworks (Leap Motion and Three.js), we have to start by following simple tutorials to get a firm grasp of what we are grappling with. The Development Team (Ashish & I) will have a head start while the Research Team (Connie, Ted, Alyssa) works on gathering user requirements and conducting contextual inquiries.

Leap Tutorials

The Leap Motion tutorials for JavaScript are pretty well-written and a bit entertaining. In short, they teach us about connecting to a Leap Motion device, getting animation frames, adjusting for Leap space coordinates, and reading gestures from the device.

Too many people, too many gestures on a leap device.

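To give a flavor of the coordinate adjustment the tutorial covers, here is a small sketch that normalizes a Leap palm position into [0, 1] and scales it to canvas pixels. The bounds below are illustrative placeholders; the real device reports its own interaction box:

```javascript
// Sketch: adjusting Leap space coordinates (millimeters, origin at the
// device) to screen coordinates. A common approach is to normalize into
// [0, 1] over the sensor's working volume, then scale to the canvas.
// The default bounds below are illustrative, not device-reported values.

// Normalize a palm position [x, y, z] (mm) into [0, 1] on each axis.
function normalizeLeapPoint(pos, bounds) {
  bounds = bounds || { x: [-120, 120], y: [80, 320], z: [-120, 120] };
  function norm(v, range) {
    var t = (v - range[0]) / (range[1] - range[0]);
    return Math.max(0, Math.min(1, t)); // clamp to [0, 1]
  }
  return [norm(pos[0], bounds.x), norm(pos[1], bounds.y), norm(pos[2], bounds.z)];
}

// Scale a normalized point to canvas pixels. Note the y-axis flip:
// Leap's y grows upward, but screen y grows downward.
function toCanvas(normalized, width, height) {
  return [normalized[0] * width, (1 - normalized[1]) * height];
}
```

A hand hovering at the center of the working volume then lands at the center of the canvas, which is exactly the kind of mapping the cursor-style demos in the tutorial rely on.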

Three.js Tutorials

There are many Three.js tutorials available on the interwebz. We were lucky enough to find a pretty good one that explained most of the concepts concisely and still gave us enough exposure. Through these tutorials, we learned about drawing objects, manipulating them, textures and meshes, particles, models, animations, and shaders. All in all, very interesting stuff, and it’s nice to see your work unfold in front of you so quickly. If you want to see the end product, it’s here: http://students.washington.edu/ngocmdo/threejs/


It’s more appealing with animations. Click the live link above!

We’re excited to integrate these two technologies and see what we can produce. Hopefully we can do it with no issues!

Members involved: Alyssa, Ashish, Connie, Ngoc, Ted

 Deliverable: https://github.com/GhostCapstone


Until next time,



Low-Fidelity Mockup

Here we created a Low-Fidelity prototype to test which functions are necessary for our application and which gestures are most intuitive.

Our low-fi prototype gave the illusion of a floating image. Interviewees could interact with the application using finger and hand gestures. During the interviews, there were always two interviewers and one interviewee: one interviewer held the prototype and interacted with the user, while the other asked questions and took notes. The interviews were designed to test how well the interactions of this application work. We encouraged the interviewees to think aloud, let us know what was on their minds while using the application, and be as honest as possible.

We categorized the interviews into three main tasks: examine the human body, examine a specific body part, and take a quiz.

For task 1, examine the human body, our goal is to have them look at the human body, rotate it, and zoom in/out on body parts on the current layer. Our questions included: “As an anatomy student, do you need to take a closer look at certain body parts?”, “Do you need to zoom in on certain parts on the same layer?”, and “Say you want to look around the head and take a closer look at the iris of the eye. How would you do that?”

For task 2, examine a specific body part, our goal is to have them look at a specific part inside the body by stripping off layers: zooming into a section, removing layers (which then float in space), putting layers back if desired, and making layers transparent when they don’t fit in the full zoom window. Our questions included: “Say you’d like to take a closer look at the lobe of the left brain. In this application you can take off layers of the body. How do you think you would do that?”

For task 3, take a quiz, our goal is to see what kinds of quizzes they would want and how they would interact with them. We suggested “body flashcards”, which let users select a body part to see its name; the part turns red if they got the name wrong and green if they got it right. Our questions included: “If you have a test on Thursday and need to study, would you use this application?”, “Would you be likely to use the quizzing app?”, and “How would you open the quizzing function and use it to study?”
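The flashcard behavior described above boils down to a small piece of state logic. Here is a minimal sketch (names and data are hypothetical; the real version would recolor meshes in the rendered model rather than return color strings):

```javascript
// Sketch of the "body flashcards" idea: the user selects a body part,
// guesses its name, and the part is highlighted green on a correct guess
// or red on an incorrect one. Part data here is purely illustrative.

// Grade a guess against the part's name (case-insensitive, whitespace
// trimmed) and return the highlight color the model should use.
function gradeFlashcard(part, guess) {
  var correct = guess.trim().toLowerCase() === part.name.toLowerCase();
  return { correct: correct, color: correct ? 'green' : 'red' };
}

// Example with a hypothetical flashcard:
var mandible = { name: 'Mandible' };
gradeFlashcard(mandible, 'mandible'); // correct guess  -> green
gradeFlashcard(mandible, 'maxilla');  // incorrect guess -> red
```

Tracking which parts came back red would also give us a natural way to re-quiz students on the names they missed.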


Here are the links to some of our user testing videos:

Mariko: https://www.youtube.com/watch?v=A3Z-yoO4WAo

Kayla: http://www.youtube.com/watch?v=n3GszxtmArg&feature=em-share_video_user

Annie: http://www.youtube.com/watch?v=jV2zSbkkNig&feature=em-share_video_user

Members involved: Alyssa, Ashish, Connie, Ngoc, Ted

Deliverable: Design Sketches

Deliverable: Low-Fidelity Mock-up



Preliminary Interviews / Analysis

For our preliminary interviews, we conducted several sessions to figure out how we should design our initial lo-fi prototype. The process involved talking to several students and instructors and attending lectures to get a feel for what the anatomy learning process is like. We then conducted interviews based on our contextual inquiry script to draw out all of the information regarding how users study anatomy.

Each group member conducted multiple sets of interviews with several anatomy students and instructors. During these interviews, one member would ask questions, and the other would take notes.

We also took part in an anatomy lecture so that we could get a feel for what it’s like to learn anatomy. This helped us a lot in understanding how anatomy students think, and gave us a broader perspective on what users may want.


In the end, we were able to gather valuable data that will help us design our lo-fi prototype. Thanks to our preliminary contextual inquiry interviews, we now have an idea of which directions to head in, and a solid understanding of what users need.

Members involved: Alyssa, Ashish, Connie, Ngoc, Ted

Deliverable: Interview Notes

Deliverable: Task Analysis


Timeline Updates Are Here

Five weeks into our project, we have come to see that our development training is taking a little longer than expected. Learning takes time, especially when we have so much to work through while conducting user research and being busy students! We have updated our project milestone timeline to accommodate this.

Take a peek at our Revised Timeline.