Background
I’m fascinated by how many mediums design can exist in. As a designer who has worked in print, motion, and the web, I’ve seen many types of projects. As I transitioned into more of a UX role, I became more interested in how humans interact with interfaces and systems. Virtual reality was an emerging medium that presented a whole new set of challenges when it came to designing usable experiences.

This was my thesis project for my graduate UX program at MICA, the Maryland Institute College of Art, in 2017. Over the course of eight weeks, we were tasked with finding a problem and applying UX principles to propose a solution. My project focused on creating a design pattern to restore functionality that I noticed was dropped in a certain type of VR experience.
Narrative experiences were ones that told a story, informed, and educated users, primarily through video and narration. Although well supported on smartphones, where you hold the phone at arm's length, these experiences didn't scale up to headsets: the ability to play, pause, and select scenes was nonexistent on many devices, owing to a lack of standardization in VR design.
Examples of narrative experiences ranged from Google Spotlight Stories, visual 360-degree animations timed to music, to Guardian VR pieces like an animated story about Syrian immigrants, to YouTube, which hosts all types of 360-degree content.
Research
While trying out a variety of experiences to get a better understanding of the interaction landscape, I learned about three main types of menu design: diegetic, non-diegetic, and 2D. A diegetic menu, meaning 'in the scene', is located in the environment itself; for example, having to turn around and look toward a door marked 'exit'. A non-diegetic menu is not in the scene, similar to a HUD like Iron Man's. 2D menus were the most common and resembled the interfaces we see on 2D screens.

I looked into guidelines from Google Cardboard/Daydream, Oculus, and Unity. I came to understand interaction patterns like reticles, which give users feedback when hovering over items, and fuse cursors, which make a selection for the user when controllers are not present. I also learned about some of the human factors involved in designing VR experiences and physical constraints like neck strain and field of view.
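As a concrete illustration of these patterns, here is roughly how a gaze reticle with a fuse cursor can be declared in A-Frame (the library I later prototyped with); the timeout value and class name are just examples:

```html
<!-- Reticle attached to the camera; fuse starts a dwell timer on hover
     and fires a 'click' after 1.5 seconds, so no controller is needed. -->
<a-camera>
  <a-cursor fuse="true" fuse-timeout="1500"
            raycaster="objects: .selectable"></a-cursor>
</a-camera>

<!-- Only entities with the .selectable class respond to the gaze cursor. -->
<a-box class="selectable" position="0 1.6 -3" color="#4CC3D9"></a-box>
```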
Design
My design process was a deep dive into a variety of tools that helped me prototype and iterate on my initial idea. I was intrigued by a space along the water at Baltimore's Inner Harbor, which contained a sign describing the harbor's history and Federal Hill Park. I saw a parallel between the virtual narrative experiences I was researching and this real, physical space. So I took a 360-degree image of the walkway and viewed it in a headset: I could see the harbor, then look down to get more information. The same system could work in a virtual experience, where looking down gives you access to the controls you need.

Looking down lets you read more information about the harbor's history

I used Google Blocks to place rectangles around the scene and see the impact of an element's distance from the viewer.
Using Photoshop, I started with an image of an equirectangular grid and turned it into a panorama. I could rotate around the scene, draw UIs with the pen tool, and then continue rotating as if that element existed in 3D space. This let me narrow in on how and where elements should be placed in the scene.
Framer let me test characteristics of the interaction. By outputting the exact degree of the device, I could see how far a user needed to look down to trigger the menu.
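The Framer prototype itself isn't shown here, but the core measurement can be sketched in plain JavaScript with the browser's deviceorientation event; the threshold is an assumed value for illustration:

```js
// Sketch of the measurement behind the Framer prototype: read the
// device's tilt and log how far the user is looking down.
const TRIGGER_ANGLE = 30; // degrees below the horizon; an assumed value

window.addEventListener('deviceorientation', (event) => {
  // event.beta is front-to-back tilt in degrees: roughly 90 when the
  // device is upright in front of the face, approaching 0 as it tilts
  // down toward flat. Exact conventions vary by device and orientation.
  const degreesBelowHorizon = 90 - event.beta;
  console.log(`Looking down ${degreesBelowHorizon.toFixed(0)} degrees`);

  if (degreesBelowHorizon >= TRIGGER_ANGLE) {
    console.log('Menu would trigger here');
  }
});
```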
A-Frame, a JavaScript library for creating VR experiences, together with HTML and CSS, was used to make the testable prototypes. With standard web languages, I could build a much higher-fidelity interface than the other tools allowed.
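A stripped-down version of one of these prototypes, assuming a placeholder 360-degree image file, looks roughly like this:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame from its official CDN; the version here is illustrative -->
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- 360-degree photo wrapped around the viewer (placeholder path) -->
      <a-sky src="harbor-360.jpg"></a-sky>

      <!-- A simple menu panel placed low in the scene, hidden by default -->
      <a-plane id="menu" position="0 0.6 -2" width="1.4" height="0.8"
               color="#222" visible="false"></a-plane>

      <a-camera></a-camera>
    </a-scene>
  </body>
</html>
```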
I made two prototypes to test with. The first was an 'I Spy' game where users needed to find items around the room; the list of items to find could be accessed by looking down. In this prototype I learned how important the field of view was. Originally, menu items were horizontally aligned, and although still within the field of view, items on the edge fell into the peripheral zone and were hard to focus on.
I found it best to keep items as close to the center of the field of view as possible, so I ordered the menu items in a grid instead of a line, as sketched below.
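In A-Frame markup, the regrouped menu might look something like this, with four items clustered in a 2x2 grid near the center of the view (all positions are illustrative):

```html
<!-- Menu items in a 2x2 grid near the view center, rather than a wide
     horizontal row that pushes edge items into the peripheral zone. -->
<a-entity id="menu" position="0 1.6 -2">
  <a-plane position="-0.35 0.25 0" width="0.6" height="0.4" color="#333"></a-plane>
  <a-plane position="0.35 0.25 0" width="0.6" height="0.4" color="#333"></a-plane>
  <a-plane position="-0.35 -0.25 0" width="0.6" height="0.4" color="#333"></a-plane>
  <a-plane position="0.35 -0.25 0" width="0.6" height="0.4" color="#333"></a-plane>
</a-entity>
```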
The second prototype was a picture viewer. A user could explore three different scenes, changing environments through a menu revealed by looking down. Originally, I kept the menu locked in one location. This required a user facing away from the menu to look down and then rotate their head or body to find it. The user needed to exert more effort, which increased the interaction cost.
In the image below, the menu appears right in front of a user wherever they look down.
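A rough sketch of that behavior as an A-Frame component: it reads the camera's yaw when the menu is opened and places the menu on that heading. The component name, distances, and method of triggering are assumptions, not the exact prototype code.

```js
// Illustrative component: place the menu on the user's current heading
// each time it opens, so it always appears right in front of them.
AFRAME.registerComponent('open-in-front', {
  schema: {
    radius: { default: 2 },  // meters from the user
    height: { default: 0.6 } // meters off the ground, below eye level
  },
  show: function () {
    const cameraEl = this.el.sceneEl.camera.el;        // active camera entity
    const yaw = cameraEl.object3D.rotation.y;          // radians, from look-controls
    // In three.js, "forward" is -Z, so project the heading onto X/Z.
    this.el.object3D.position.set(
      -Math.sin(yaw) * this.data.radius,
      this.data.height,
      -Math.cos(yaw) * this.data.radius
    );
    this.el.setAttribute('visible', true);
  }
});
```

Calling `document.querySelector('#menu').components['open-in-front'].show()` from the look-down trigger would then spawn the menu wherever the user happens to be facing.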
Testing
I performed in-person moderated usability tests of the prototypes. I received a grant from the Graduate Research Development Grant Committee at MICA so that I could offer participants financial compensation, then posted to local Slack groups like UX Baltimore and Free Code Camp. The usability tests lasted 30 minutes and involved watching users interact with the prototypes and asking them questions afterwards.
The Workstation
The workstation is a head-tracked interaction which brings up an interface that helps users access a contained set of controls or secondary information.
The action of 'looking down' can be hinted at through affordances, similar to how websites place a 'scroll down' icon in the hero section. It's important to keep the menu low and out of the way, while staying aware that users may be sitting down, so their neck movement is limited. Dimming the scene so that the user can focus on the menu follows the game design principle of 'guiding users with light'.
Keeping content well within the field of view is important when using a workstation. Content on the edges of a user's field of view can be hard to focus on, or may require the user to rotate their head and body; this extra effort increases friction in the experience. Always keep the workstation in front of the user when they look down to trigger the interaction. The workstation should move with the user only if the items in the menu need to be read, not selected.
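Putting the trigger itself together, a minimal sketch as an A-Frame component attached to the camera could look like this; the threshold, entity ids, and dimming approach are assumptions rather than a definitive implementation:

```js
// Minimal sketch of the workstation trigger: watch the camera's pitch
// each frame and toggle the menu, plus a scene-dimming overlay, once
// the user looks down past a threshold.
AFRAME.registerComponent('workstation-trigger', {
  schema: {
    angle: { default: 35 } // degrees below the horizon; an assumed value
  },
  init: function () {
    this.menu = document.querySelector('#menu');     // assumed id
    this.dimmer = document.querySelector('#dimmer'); // assumed id
  },
  tick: function () {
    // rotation.x goes negative (in radians) as the user looks down
    const pitchDeg = this.el.object3D.rotation.x * (180 / Math.PI);
    const open = pitchDeg <= -this.data.angle;
    this.menu.setAttribute('visible', open);
    this.dimmer.setAttribute('visible', open); // 'guiding users with light'
  }
});
```

Attached as `<a-camera workstation-trigger>`, this toggles the workstation whenever the user's gaze crosses the threshold; the dimmer could be something as simple as a large semi-transparent sphere around the user.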
