Monthly Archives: February 2014

Tutorial Fun!

Hello everyone,

We are really excited to dive in and start writing code for our application. But first, we need to take baby steps before we can start running. Since none of the team members has significant knowledge of our two main frameworks (Leap Motion and Three.js), we have to start by following simple tutorials to get a firm grasp of what we are grappling with. The Development Team (Ashish & I) will have a head start while the Research Team (Connie, Ted, Alyssa) works on gathering user requirements and conducting contextual inquiries.

Leap Tutorials

The Leap Motion tutorials for JavaScript are pretty well-written and a bit entertaining. In short, they teach us to connect to a Leap Motion device, get animation frames, adjust to Leap space coordinates, and read gestures from the device.
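The coordinate adjustment is the part worth sketching out. Below is plain JavaScript (no SDK required) showing the idea of mapping a palm position from the Leap’s millimeter space into a scene’s coordinate range. The `LEAP_BOX` bounds and `sceneSize` here are illustrative assumptions of ours, not official values — the real SDK does this for you via `frame.interactionBox.normalizePoint()`:

```javascript
// Sketch: map a Leap palm position (in millimeters, origin at the device)
// into a normalized [0, 1] box, then into scene coordinates centered on
// the origin. The box bounds are rough illustrative assumptions; the real
// SDK exposes frame.interactionBox.normalizePoint() for exactly this.
const LEAP_BOX = {
  x: [-120, 120], // left/right of the device (mm), assumed
  y: [80, 320],   // height above the device (mm), assumed
  z: [-120, 120], // toward/away from the user (mm), assumed
};

function normalize(value, [min, max]) {
  // Clamp to the box, then scale into [0, 1].
  const t = (value - min) / (max - min);
  return Math.min(1, Math.max(0, t));
}

function leapToScene([x, y, z], sceneSize = 10) {
  // Subtract 0.5 so the middle of the box maps to the scene origin.
  return {
    x: (normalize(x, LEAP_BOX.x) - 0.5) * sceneSize,
    y: (normalize(y, LEAP_BOX.y) - 0.5) * sceneSize,
    z: (normalize(z, LEAP_BOX.z) - 0.5) * sceneSize,
  };
}
```

In a real app you’d call something like this from inside `Leap.loop(frame => { … })` with `frame.hands[0].palmPosition`.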

Too many people, too many gestures on a leap device.


Three.js Tutorials

There are many Three.js tutorials available on the interwebz. We were lucky enough to find a pretty good one that explained most of the concepts concisely and still gave us enough exposure. Through these tutorials, we learned about drawing objects, manipulating them, textures and meshes, particles, models, animations, and shaders. In all, very interesting stuff, and it’s nice to see your work unfold in front of you so quickly. If you want to see the end product, it’s here:


It’s more appealing with animations. Click the live link above!
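One idea from the animation tutorials worth writing down: drive rotation by elapsed time rather than frame count, so the spin speed stays constant even when frames drop. Here’s a minimal plain-JavaScript sketch of that idea (the function and parameter names are our own; in Three.js you’d feed the returned angle into `mesh.rotation.y` before calling `renderer.render(scene, camera)`):

```javascript
// Frame-rate-independent rotation: advance the angle by elapsed time,
// not by a fixed per-frame increment. In a Three.js render loop the
// returned angle would be assigned to mesh.rotation.y each frame.
function makeSpinner(radiansPerSecond) {
  let angle = 0;
  let last = null; // timestamp of the previous frame, in milliseconds
  return function tick(nowMs) {
    if (last !== null) {
      angle += radiansPerSecond * (nowMs - last) / 1000;
      angle %= 2 * Math.PI; // keep the angle bounded
    }
    last = nowMs;
    return angle;
  };
}
```

In the browser, `requestAnimationFrame` hands your callback exactly the kind of timestamp `tick` expects.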

We’re excited to integrate these two technologies together and see what we can produce out of it. Hopefully we can do it with no issues!

Members involved: Alyssa, Ashish, Connie, Ngoc, Ted



Until next time,




Low-Fidelity Mockup

Here we created a low-fidelity prototype to test which functions of our application are necessary and which gestures are most intuitive.

Our low-fi prototype gave the illusion of a floating image. Interviewees could interact with the application using finger and hand gestures. During the interviews, there were always two interviewers and one interviewee: one interviewer held the application and interacted with the user, while the other asked questions and took notes. The interviews were designed to test how well the interactions of this application work. We encouraged the interviewees to think aloud, to let us know what was on their minds when using the application, and to be as honest as possible.

We categorized the interviews into three main tasks: examine the human body, examine a specific body part, and take a quiz.

For task 1, examine the human body, our goal is to have them look at the human body, rotate it, and zoom in/out of body parts on the current layer. Our questions included: “As an anatomy student, do you need to take a closer look at certain body parts?”, “Do you need to zoom in at certain parts on the same layer?”, “Say you want to take a look around the head, and take a closer look at the iris of the eye. How would you do that?”

For task 2, examine a specific body part, our goal is to have them look at a specific part inside the body by stripping off layers: zooming into a section, removing layers (which then float around in space), and putting layers back if wanted; layers turn transparent if they don’t fit in the full zoom window. Our questions included: “Say you’d like to take a closer look at the lobe of the left brain. You can take off layers of the body by system in this application. How do you think you would do that?”

For task 3, take a quiz, our goal is to see what kinds of quizzes they would want and how they would interact with them. We suggested “body flashcards,” which lets users select a body part to see its name; the part turns red if they got the name wrong and green if they got it right. Our questions included: “If you have a test on Thursday and need to study, would you use this application?”, “Would you be likely to use the quizzing app?”, and “How would you open the quizzing function and use it to study?”
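Since the flashcard idea is the most concrete mechanic we tested, here’s a tiny sketch of the state it implies, in plain JavaScript. Everything here — part ids, color strings, function names — is an illustrative assumption, not something we’ve built yet:

```javascript
// Sketch of the "body flashcards" state: the user picks a body part,
// guesses its name, and the part is tinted green (right) or red (wrong).
// `parts` maps a part id to its canonical name, e.g. { humerus: "Humerus" }.
function makeQuiz(parts) {
  const tint = {}; // part id -> "green" | "red"
  return {
    answer(id, guess) {
      // Match loosely: ignore case and surrounding whitespace.
      const correct = guess.trim().toLowerCase() === parts[id].toLowerCase();
      tint[id] = correct ? "green" : "red";
      return correct;
    },
    tintOf(id) {
      return tint[id] || "neutral"; // untouched parts keep their normal color
    },
  };
}
```

In the real app the tint would drive the 3D model’s material color rather than a string.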


Here are the links to some of our user testing videos:




Members involved: Alyssa, Ashish, Connie, Ngoc, Ted

Deliverable: Design Sketches

Deliverable: Low-Fidelity Mock-up



Preliminary Interviews / Analysis

To figure out how we should design our initial lo-fi prototype, we conducted several preliminary interviews. The process involved going to several students, instructors, and lectures to get an idea and feel of what the anatomy learning process is like. We then conducted interviews based on our contextual inquiry script to mine out all of the information regarding how users study anatomy.

Each group member conducted multiple sets of interviews with several anatomy students and instructors. During these interviews, one member would ask questions, and the other would take notes.

We also took part in an anatomy lecture so that we could get a feel of what it’s like to learn anatomy. This helped us a lot in terms of being able to understand how anatomy students think, and gave us a broader perspective on what the users may want.


In the end, we were able to gather valuable data that will help us design our lo-fi prototype. Thanks to our preliminary contextual inquiry interviews, we now have an idea of what direction we should head in, and a solid understanding of what the users need.

Members involved: Alyssa, Ashish, Connie, Ngoc, Ted

Deliverable: Interview Notes

Deliverable: Task Analysis


Timeline Updates Are Here

Five weeks into our project, we have come to see that our development training is taking a little longer than expected. Learning takes time, especially when we have so much to work through while conducting user research and being busy students! We have updated our project milestone timeline to accommodate this.

Take a peek at our Revised Timeline.

Display Prototypes [Pictures + Video]

What’s up everyone,

It’s still early in the project and our timeline doesn’t have a due date for the display until the end of March or so, but in order to get a head start and a feel for what is required in making our final volumetric display, I’ve decided to make a few proto (proto-proto-proto, I like to call them) types. I get excited when building (noob-engineering, really) new projects, so when doing my first iterations I tend to hack things together as quickly as I can so that I can roughly see what the product will look like while getting an idea of what is necessary for making it. For example:


Yeah! Look at the perfect combination of scratched acrylic and duct tape to hastily hack together a proto-proto-proto-type! And here is a picture of me again hacking together a display source for the prism by turning my laptop upside down to get even a glimpse of what the holography would look like. I must warn you though, the results are ghastly compared to what it should really look like. I advise not to look for more than half a second.


Alas, something floating in the air, but I’m not sure what! I think this took, in total, about 3-4 hours. The hardest part was probably cutting the acrylic into the pieces I needed. It comes in rectangular sheets, which I had to cut (more correctly: score, then break) using a hand tool. It was hard at first because you have to know how much pressure to apply, and you have to use a straight edge strong enough not to be cut by the hand tool. If you mess up and cut away from the straight edge, you end up scratching the f— out of your once flawless plexiglass. Again, in my excitement to finish, I grabbed the first thing I could think of that fit this requirement – a wood saw I happened to borrow from my pops a few weeks back. It worked perfectly … after my failed attempts at sawing it apart first. Like I said, I always do iterations when prototyping, so the duct tape was not going to cut it (hah) for me. My next iteration involved using a caulk gun with silicone to glue the pieces together – much sturdier while still allowing for flexibility.


You can’t see it in the picture, but the job was abominable! Some sides had more silicone than others, a lot of the silicone was running rampant, eating more of the plexiglass real estate than it needed to, and the silicone on the inside served little purpose other than hiding a hideous crack. This iteration did not please me as much as it should’ve. Thus, next iteration!


Yes, close to perfection! This time I used masking tape on both the inner and outer sides of the prism, as well as removing excess silicone (while flattening the bond line) with a knife to make it more professional-looking. You probably can’t see it in the picture, but trust me, it looked like the prism was made using a (albeit cheap) wand! I also included a base at the bottom for added stability. With that, I completed the first prototype. The next step was to make an even bigger prototype and see how well the method I used scales. This time I didn’t use silicone, though, because my teammates wanted something that can easily be disassembled. I opted for Velcro because it’s quick, easy, and offered just the right amount of sturdiness. However, it compromised the beauty of the plexiglass into, well … crap. In the next pictures, the protective layer on one of the inner sides hasn’t been removed, to keep dust from gathering, since we aren’t going to be using this prototype quite yet. Also shown is a size comparison with the first prototype.
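For anyone wondering how the panel cuts scale between the two sizes, here’s the rough geometry I’ve been working from, sketched in JavaScript. This assumes a four-sided pyramid with faces tilted 45° (the usual angle for this kind of reflection display); the function and numbers are illustrative, not our actual cutting plan:

```javascript
// Geometry sketch for one trapezoidal face of a 45-degree reflection
// pyramid. With the face tilted 45 degrees, every unit of horizontal
// inset costs one unit of height and sqrt(2) units of slant length.
// topWidth / bottomWidth are the wide and narrow openings (same units).
function panelDimensions(topWidth, bottomWidth) {
  const inset = (topWidth - bottomWidth) / 2; // horizontal run per side
  return {
    topWidth,                        // long parallel edge of the trapezoid
    bottomWidth,                     // short parallel edge
    height: inset,                   // vertical height of the assembly
    slantHeight: inset * Math.SQRT2, // edge length to score and break
  };
}
```

The upshot is that everything scales linearly: double both openings and every cut doubles too.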


If you look closely at the edges, you can see the Velcro sticking out like a sore thumb. The experience gained from the first prototype made the second suuuuch a breeze. It probably took half the time the first one did, because I knew exactly what I was doing. In the next iteration, I will probably use clear silicone (the kind used on the first prototype is opaque). Another option is Weld-On 16 (a type of clear acrylic glue), which would, I’m guessing, give us a much better bond; however, silicone offers a bit of flexibility without falling apart, so this will be something to consider in future revisions. The image relies on the reflectivity of the plexiglass, and in order to increase it I tried applying a clear gloss layer to a piece of plexiglass to see what it would look like, but it just distorts the image, so that’s out of the question. Finally, here is a (better) teaser of what the prism looks like (smaller prototype):


In the pictures, you can see a double reflection of the image (look at the stars, or … dots, same thing). When we talked to Robbie Tilton about this, he said his model doesn’t have this issue and that it was something he’d never seen. Weird! We’ll have to look into this some more as well. Keep up with our blog to see how our project unfolds!

Checking out,


GitHub is Ready

Hello all,

Our continued development is available for viewing on our GitHub page.

That is all,

Thank you!

Our Contextual Inquiry Script

After spending an entire weekend brainstorming ideas for our contextual inquiry, we were finally able to create a script for our upcoming contextual inquiry interviews. We composed these questions by collaborating on a Google Document, and we made sure all of our bases were covered before starting our research, to ensure that we could conduct our interviews with the utmost efficiency.

Members involved: Alyssa, Connie, Ted

Deliverable: Contextual Inquiry Script
