Paul Mealy

Paul Mealy has worked with virtual reality since the release of the Oculus Rift DK1 in 2013. He has architected, designed and developed applications for Oculus Rift, HTC Vive, Samsung Gear VR, Windows Mixed Reality, Google Daydream, and Google Cardboard. He has worked with numerous augmented reality hardware and technologies including the Microsoft HoloLens, ARKit for iOS, ARCore for Android and cross-platform solutions such as Vuforia.

Articles From Paul Mealy

Virtual & Augmented Reality For Dummies Cheat Sheet

Cheat Sheet / Updated 04-20-2022

The terms virtual reality and augmented reality (and others, like mixed reality and extended reality) are thrown about everywhere today, but do you really know what they mean? Virtual and augmented reality are rapidly changing fields, so it helps to know where they are today and where they may be headed in the future. Finally, seeing how virtual and augmented reality are being used in a variety of industries and how exactly you can experience these technologies is key to your enjoyment.

Designing Augmented Reality Apps: Interacting with Objects

Article / Updated 10-29-2018

As you set out to design your augmented reality app, you will need to consider how the user will interact with objects. Most virtual reality (VR) interaction takes place via a motion controller, but most headset-based augmented reality (AR) devices utilize a combination of gaze and hand tracking for interaction. Often, AR headsets use gaze-based navigation to track where a user is looking to target items within the environment. When an item is targeted, a user will often interact with that item via hand gestures. As such, you need to design your AR experience to keep the user’s hands within the headset’s area of recognition and work with each headset’s specific set of gestures. Educating the user about the area of recognition for gestures — and notifying users when their gestures are near the boundaries — can help create a more successful user experience.

Because this way of interaction is new to nearly everyone, keeping interactions as simple as possible is important. Most of your users will already be undergoing a learning curve for interacting in AR, figuring out the gestures for their specific device (because a universal AR gesture set has yet to be developed). Most AR headsets that utilize hand tracking come with a standard set of core gestures. Try to stick to these prepackaged gestures and avoid overwhelming your users by introducing new gestures specific to your application. The image below gives examples of the two core gestures for HoloLens, the “Air Tap” (A) and “Bloom” (B). An Air Tap is similar to a mouse click on a standard 2D screen. A user holds his finger in the ready position and presses his finger down to select or click the item targeted via user gaze. The “Bloom” gesture is a universal gesture to send a user to the Start menu. A user holds his fingertips together and then opens his hand.

Grabbing an object in the real world gives a user feedback such as the feel of the object, the weight of the object in his hand, and so on. Hand gestures made to select virtual holograms will provide the user with none of this standard tactile feedback. So it’s important to notify the user about the state of digital holograms in the environment in different ways. Provide the user cues as to the state of an object or the environment, especially as the user tries to place or interact with digital holograms. For example, if your user is supposed to place a digital hologram in 3D space, providing a visual indicator can help communicate to her where the object will be placed. If the user can interact with an object in your scene, you may want to visually indicate that on the object, potentially using proximity to alert the user that she’s approaching an object she can interact with. If your user is trying to select one object among many, highlight the item she currently has selected and provide audio cues for her actions.

This image shows how the Meta 2 chooses to display this feedback to the user. A circle with a ring appears on the back of a user’s hand as he approaches an interactive object (A). As the user’s hand closes to a fist, the ring becomes smaller (B) and draws closer to the center circle. A ring touching the circle indicates a successful grab (C). A user’s hand moving near the edge of the sensor is also detected and flagged via a red indicator and warning message (D).

Mobile device interaction in AR apps

Many of the design principles for AR apply to both headsets and mobile experiences.
However, there is a considerable difference between the interactive functionality of AR headsets and mobile AR experiences. Because of the form factor differences between AR headsets and AR mobile devices, interaction requires some different rules. Keeping interactions simple and providing feedback when placing or interacting with an object are rules that apply to both headset and mobile AR experiences. But most interaction for users on mobile devices will take place through gestures on the touchscreen of the device instead of users directly manipulating 3D objects or using hand gestures in 3D space. A number of libraries, such as ManoMotion, can provide 3D gesture hand tracking and gesture recognition for controlling holograms in mobile AR experiences. These libraries may be worth exploring depending on the requirements of your app. Just remember that your user will likely be holding the device in one hand while experiencing your app, potentially making it awkward to try to also insert her other hand in front of a back-facing camera.

Your users likely already understand mobile device gestures such as single-finger taps, drags, two-finger pinching and rotating, and so on. However, most users understand these interactions in relation to the two-dimensional world of the screen instead of the three dimensions of the real world. After a hologram is placed in space, consider allowing movement of that hologram in only two dimensions, essentially allowing it to only slide across the surface upon which it was placed. Similarly, consider limiting the object rotation to a single axis. Allowing movement or rotation on all three axes can quickly become very confusing to the end user and result in unintended consequences or placement of the holograms. If you’re rotating an object, consider allowing rotation only around the y-axis. Locking these movements prevents your user from inadvertently shifting objects in unpredictable ways. You may also want to create a method to “undo” any unintentional movement of your holograms, as placing these holograms in real-world space can be challenging for your users to get right.

Most mobile devices support a “pinch” interaction with the screen to either zoom in on an area or scale an object. Because a user occupies a fixed point in space in both the real world and the hologram world, you probably won’t want to utilize this gesture for zooming in AR. Similarly, consider eliminating a user’s ability to scale an object in AR. A two-fingered pinch gesture for scale is a standard interaction for mobile users. In AR, this scale gesture often doesn’t make sense. AR hologram 3D models are often a set size, and the visual appearance of that size is influenced by the distance from the AR device. A user scaling an object in place to make the object look closer to the camera is really just making the object larger in place, often not what the user intended. Pinch-to-scale may still be used in AR, but its usage should be thoughtfully considered.

Voice interaction in AR apps

Some AR devices also support voice interaction capabilities. Although the interaction for most AR headsets is primarily gaze and gestures, for those headsets with voice capabilities you need to consider how to utilize all methods of interaction and how to make them work well together. Voice controls can be a very convenient way to control your application. As processing power grows, expect voice control to be introduced and refined further on AR headsets.
Here are some things to keep in mind as you develop voice commands for AR devices that support this feature (a short code sketch following this list shows one way to put these rules into practice):

- Use simple commands. Keeping your commands simple will help avoid potential issues of users speaking with different dialects or accents. It also minimizes the learning curve of your application. For example, “Read more” is likely a better choice than “Provide further information about selected item.”

- Ensure that voice commands can be undone. Voice interactions can sometimes be triggered inadvertently by capturing audio of others nearby. Make sure that any voice command can be undone if an accidental interaction is triggered.

- Keep commands consistent and eliminate similar-sounding interactions. In order to prevent your user from triggering incorrect actions, eliminate any spoken commands that may sound similar but perform different actions. For example, if “Read more” performs a particular action in your application (such as revealing more text), it should always perform the same interaction throughout your application. Similar-sounding commands should also be avoided. For example, “Open reference” and “Open preferences” are far too likely to be mistaken for each other.

- Avoid system commands. Make sure your program doesn’t override voice commands already reserved by the system. If a command such as “home screen” is reserved by the AR device, don’t reprogram that command to perform different functionality within your application.

- Provide feedback. Voice interactions should provide the same level of feedback cues to a user that standard interaction methods do. If a user is utilizing voice commands, provide feedback that your application heard and understood the command. One method of doing so would be to provide onscreen text of the commands the system interpreted from the user. This gives the user feedback on how the system is understanding his commands and allows him to adjust his commands if needed.
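These guidelines are straightforward to encode. The sketch below is a minimal, engine-agnostic illustration in Python of one way to structure them; the names (VoiceCommands, on_speech, and so on) are hypothetical stand-ins, not a specific SDK's API. Commands are short phrases, phrases that look too much like an existing command are rejected at registration time, every command carries an undo action, and each recognized phrase is echoed back as feedback.

```python
import difflib

class VoiceCommands:
    """Hypothetical voice-command registry illustrating the guidelines above."""

    RESERVED = {"home screen"}  # example of a system-reserved command to leave alone

    def __init__(self):
        self.commands = {}   # phrase -> (action, undo)
        self.undo_stack = []

    def register(self, phrase, action, undo):
        phrase = phrase.lower().strip()
        if phrase in self.RESERVED:
            raise ValueError(f"'{phrase}' is reserved by the system")
        # Reject phrases that read too much like an existing command
        # (a cheap textual stand-in for a real phonetic comparison).
        for existing in self.commands:
            if difflib.SequenceMatcher(None, phrase, existing).ratio() > 0.8:
                raise ValueError(f"'{phrase}' is too similar to '{existing}'")
        self.commands[phrase] = (action, undo)

    def on_speech(self, heard):
        """Call with the recognizer's transcript; returns feedback text to display."""
        heard = heard.lower().strip()
        if heard not in self.commands:
            return f'Heard "{heard}" (no matching command)'
        action, undo = self.commands[heard]
        action()
        self.undo_stack.append(undo)
        return f'Heard "{heard}"'   # echo the command back as on-screen feedback

    def undo_last(self):
        if self.undo_stack:
            self.undo_stack.pop()()


# Usage sketch: short, distinct phrases with explicit undo actions.
vc = VoiceCommands()
vc.register("read more", action=lambda: print("expanding text"),
            undo=lambda: print("collapsing text"))
print(vc.on_speech("read more"))
vc.undo_last()
```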

Augmented Reality App Design: Starting Up and User Environment

Article / Updated 10-25-2018

When designing for augmented reality (AR), it’s important to follow some helpful design principles. Design principles are a set of ideas or beliefs that are held to be true across all projects of that particular type. AR is no exception. Design principles are typically created through years of trial and error within a field. The older a field of study is, the more likely a strong set of design principles has arisen around that field for what works well and what doesn’t.

Developers are still defining the design principles that will help guide the AR field forward. The field is still very young, so these best practices are not set in stone. That makes AR an exciting field to be working in! It’s akin to the early Internet days, where no one was quite sure what would work well and what would fall on its face. Experimenting is encouraged, and you may even find yourself designing a way of navigating in AR that could become the standard that millions of people will use every day! Eventually a strong set of standards will emerge for AR. In the meantime, a number of patterns are beginning to emerge around AR experiences that can guide your design process.

Starting up your AR app

For many users, AR experiences are still new territory. When using a standard computer application, videogame, or mobile app, many users can get by with minimal instruction due to their familiarity with similar applications. However, that is not the case for AR experiences. You can’t simply drop users into your AR application with no context — this may be the very first AR experience they’ve ever used. Make sure to guide users with very clear and direct cues on how to use the application on initial startup. Consider holding back on opening up deeper functionality within your application until a user has exhibited some proficiency with the simpler parts of your application.

Many AR experiences evaluate the user’s surroundings in order to map digital holograms in the real world. The camera on the AR device needs to see the environment and use this input to determine where AR holograms can appear. This orientation process can take some time, especially on mobile devices, and can often be facilitated by encouraging a user to explore his surroundings with his device. In order for users to avoid wondering whether the app is frozen while this mapping occurs, be sure to show an indication that a process is taking place, and potentially invite the user to explore her surroundings or look for a surface to place the AR experience. Consider displaying an onscreen message to the user instructing her to look around her environment. This image displays a screenshot from the iOS game Stack AR, instructing a user to move her device around her environment.

Most AR applications map the real world via a computational process called simultaneous localization and mapping (SLAM). This process refers to constructing and updating a map of an unknown environment, and tracking a user’s location within that environment. If your application requires a user to move about in the real world, think about introducing movement gradually. Users should be given time to adapt to the AR world you’ve established before they begin to move around. If motion is required, it can be a good idea to guide the user through it on the first occurrence via arrows or text callouts instructing him to move to certain areas or explore the holograms.
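One common way to act on this startup advice is to keep the experience in an explicit "scanning" state, with an on-screen prompt, until the underlying framework (ARKit, ARCore, or similar) reports a usable surface. The sketch below is a rough, framework-agnostic illustration in Python; the names (TrackingState, on_frame, and so on) are hypothetical stand-ins for whatever your AR SDK actually exposes.

```python
from enum import Enum, auto

class TrackingState(Enum):
    SCANNING = auto()   # still mapping the environment (SLAM in progress)
    READY = auto()      # at least one surface found; placement allowed
    PLACED = auto()     # hologram anchored to a surface

class StartupFlow:
    """Hypothetical startup flow: prompt the user until a surface is found."""

    def __init__(self):
        self.state = TrackingState.SCANNING

    def on_frame(self, detected_planes):
        """Call once per frame with the planes reported by the AR framework."""
        if self.state is TrackingState.SCANNING:
            if detected_planes:
                self.state = TrackingState.READY
                return "Surface found. Tap to place the object."
            # Keep reassuring the user that mapping is in progress.
            return "Move your device slowly to scan your surroundings..."
        if self.state is TrackingState.READY:
            return "Tap to place the object."
        return None  # placed; no prompt needed

    def on_tap(self):
        if self.state is TrackingState.READY:
            self.state = TrackingState.PLACED


# Usage sketch: no planes yet, then one plane appears.
flow = StartupFlow()
print(flow.on_frame([]))           # scanning prompt
print(flow.on_frame(["plane_0"]))  # surface found prompt
flow.on_tap()
```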
Similar to VR applications, it’s important that AR applications run smoothly in order to maintain the immersion of augmented holograms existing within the real-world environment. Your application should maintain a consistent 60 frames per second (fps) frame rate. This means you need to make sure your application is optimized as much as possible. Graphics, animations, scripts, and 3D models all affect the potential frame rate of your application. For example, you should aim for the highest-quality 3D models you can create while keeping the polygon count of those models as low as possible.

3D models are made up of polygons. In general, the higher the polygon count of a model, the smoother and more realistic that model will be. A lower polygon count typically means a “blockier” model that may look less realistic. Finding the balance between realistic-looking models and low polygon counts is an art form perfected by many game designers. The lower the polygon count of a model, the more performant that model will likely be. The image below shows an example of a 3D sphere with a high polygon count and a low polygon count. Note the difference in smoothness between the high-polygon model and the low-polygon model.

Similarly, ensure that the textures (or images) used in your application are optimized. Large images can cause a performance hit on your application, so do what you can to ensure that image sizes are small and the images themselves have been optimized. AR software has to perform a number of calculations that can put stress on the processor. The better you can optimize your models, graphics, scripts, and animations, the better the frame rate you’ll achieve.

AR app design: Considering the environment

AR is all about merging the real world and the digital. Unfortunately, this can mean relinquishing control of the background environment in which your applications will be displayed. This is a far different experience than in VR, where you completely control every aspect of the environment. This lack of control over the AR environment can be a difficult problem to tackle, so it’s vital to keep in mind issues that may arise in any unpredictable environments your application may be used in.

Lighting plays an important part in the AR experience. Because a user’s environment essentially becomes the world your AR models will inhabit, it’s important that they react accordingly. For most AR experiences, a moderately lit environment will typically perform best. A very bright environment, such as direct sunlight, can make tracking difficult and wash out the display on some AR devices. A very dark room can also make AR tracking difficult while potentially eliminating some of the contrast of headset-based AR displays. Many of the current AR headsets (for example, Meta 2 and HoloLens) use projections for display, so they won’t completely obscure physical objects; instead, the digital holograms appear as semitransparent on top of them.

AR is all about digital holograms existing in the environment with the user. As such, most AR usage is predicated on the user being able to move about their physical space. However, your applications could be used in real-world spaces where a user may not have the ability to move around. Consider how your application is intended to be used, and ensure that you’ve taken the potential mobility issues of your users into account.
Think about keeping all major interactions for your application within arm’s reach of your users, and plan how to handle situations requiring interaction with a hologram out of the user’s reach. In the real world, objects provide us with depth cues to determine just where an object is in 3D space in relation to ourselves. AR objects are little more than graphics either projected in front of the real world or being displayed on top of a video feed of the real world. As such, you need to create your own depth cues for these graphics to assist users in knowing where these holograms are meant to exist in space. Consider how to visually make your holograms appear to exist in the real-world 3D space with occlusion, lighting, and shadow. Occlusion in computer graphics typically refers to objects that appear either partially or completely behind other graphics closer to the user in 3D space. Occlusion can help a user determine where items are in 3D space in relation to one another. You can see an example of occlusion (foreground cubes partially blocking the visibility of the background cubes), lighting, and shadow all at play in the image below. The depth cues of occlusion, lighting, and shadow all play a part in giving the user a sense of where the holograms “exist” in space, as well as making the holographic illusion feel more real, as if the cubes actually exist in the real world, and not just the virtual.
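As a practical footnote to the 60 fps guidance earlier in this article, it helps to measure frame time directly while you test rather than judging smoothness by eye. Below is a minimal, engine-agnostic sketch in Python (the 16.7 ms budget simply corresponds to a 60 fps target); in a real project you would read frame timings from your engine's profiler instead, and the class and method names here are illustrative assumptions.

```python
import time

FRAME_BUDGET_MS = 1000.0 / 60.0   # ~16.7 ms per frame for a 60 fps target

class FrameMonitor:
    """Tracks frame times and flags frames that blow the 60 fps budget."""

    def __init__(self):
        self.last = time.perf_counter()
        self.slow_frames = 0
        self.total_frames = 0

    def tick(self):
        now = time.perf_counter()
        frame_ms = (now - self.last) * 1000.0
        self.last = now
        self.total_frames += 1
        if frame_ms > FRAME_BUDGET_MS:
            self.slow_frames += 1
            print(f"Slow frame: {frame_ms:.1f} ms (budget {FRAME_BUDGET_MS:.1f} ms)")
        return frame_ms

    def summary(self):
        return f"{self.slow_frames}/{self.total_frames} frames over budget"


# Usage sketch: call tick() once per rendered frame in your update loop.
monitor = FrameMonitor()
for _ in range(3):
    time.sleep(0.02)   # simulate a 20 ms frame, slower than the budget
    monitor.tick()
print(monitor.summary())
```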

Designing Augmented Reality Apps: Comfort Zones, Interfaces, and Text

Article / Updated 10-25-2018

Augmented reality (AR) apps are really just emerging. AR is difficult to design for partly because we don’t yet understand all of its applications, which is all the more reason to experiment and see how AR will truly come in handy. Keep reading to learn about comfort zones, interfaces, and text in AR apps.

AR app design: Understanding comfort zones

Understanding users’ interaction within their comfort zones is important, especially for AR applications that may be more focused on getting work done. You also need to understand the differences between comfort zones for interaction with a head-mounted AR device versus comfort zones for interaction with AR on a mobile device.

Head-mounted AR experiences are fairly similar to those of VR, with a few exceptions. You need to minimize how much users will be required to move their heads for any experiences longer than a few minutes. Though their work was focused on VR, Google VR Designer Mike Alger and Alex Chu of Samsung Research claim that users’ comfort level when rotating their heads horizontally is 30 degrees to each side, with a maximum rotation of 55 degrees. For vertical motion, rotation of 20 degrees upward is comfortable, with a maximum rotation upward of 60 degrees. Rotation downward is around 12 degrees comfortably, with a maximum of 40 degrees.

When defining your comfort zones for head-mounted AR, it’s also important to consider how your application will be used. Will it require users’ direct interaction, such as with hand tracking and gestures, or just point and click via controller or touchpad? If it will require direct interaction, consider how that can be used comfortably, especially if the application is intended for extended use. As more and more AR applications are utility based, this consideration will become more important.

A report on office ergonomics by Dennis Ankrum provides a good guide for seated AR experiences requiring user interaction, especially AR applications intended to be used in conjunction with (or as a replacement for) traditional computer usage. Ankrum lists the correct eye-to-screen distance for most users as 25 inches, preferably more, and optimal placement for screens as 15 to 25 degrees below the horizontal plane of a user’s eye, resulting in a small “comfort zone” for seated AR experiences.

Meta has completed similar studies and achieved similar results with its headset for both standing and seated experiences. There is an “ideal content area” at the intersection of where a user’s hands will be detected by the headset, the FOV of the headset itself, and the comfortable viewing angle for a user’s line of sight. Each headset is slightly different, but in general the ergonomics of a comfortable AR headset experience hold true across most platforms. The tracking technology utilized for Meta 2’s hand tracking has a detection area of 68 degrees, optimized at a distance of between 0.35 meter and 0.55 meter from the user. Combined with the 40-degree vertical FOV of the headset, an ideal content area can be established at the intersection of what is comfortable for the user to reach and see.

This comfort zone for interaction is not the same for every AR headset, but the process of defining these zones will be similar for any current or future headsets. Carefully consider the amount of user movement and interaction that your application requires and what the comfort zones of your hardware may be. Take care to minimize the amount of neck rotation or unnecessary user motion.
The first time a user has to reach up to “turn on” a virtual light bulb in your AR experience may be novel. If a user has to perform this action multiple times, it’ll quickly become tedious.

Mobile device comfort zones are very different from those of head-mounted AR devices. In a mobile AR experience, a user is forced to hold his device a certain distance in front of his eyes and angle his arm or his head to get a view into the augmented environment within the device. Holding a device in this manner can be extremely taxing after a period of time, so try to find a way to minimize the user’s discomfort. If your application requires a large amount of user motion or long periods in which a user must hold his device out in front of him, find ways to provide rest periods to allow the user to rest his arms for a bit before continuing.

AR app design: User interface patterns

Best practices for AR user interface design are still being defined. There are not many defined user experience (UX) patterns that AR designers can fall back on as best practices for what a user will expect when entering an augmented experience. Plus, AR is a totally new form factor, different from the 2D screens people have become accustomed to. AR will enable people to totally rethink the way we handle user interface (UI) design.

The 2D world of the computer consists of flat layouts with multiple 2D windows and menus. AR enables developers to utilize 3D space. When designing your AR UI, consider creating a spatial interface and arranging your UI tools and content around the user in 3D, instead of the windowed interface that computer screens currently confine us to. Consider allowing the user to use 3D space as an organizational tool for her items, as opposed to hiding or nesting content in folders or directories — a practice common in current 2D UIs.

AR has ways to gracefully avoid hiding content. Instead of hiding menus inside other objects, use the physical environment available to you to organize your setup. Hidden menus in 2D screens are usually created due to space constraints or a designer feeling that the amount of content would be overwhelming for a user to consume. For augmented experiences, in cases where the amount of information seems overwhelming, consider organizing items in groups in 3D space. Instead of nesting content within menus, explore the possibility of miniaturizing content to optimize the space around your user. Content that may normally take up a large amount of space could be made small until a user has expressed a desire to interact with it.

That is not to say that you can always avoid hidden or nested structures. Both will likely always exist in UX designs for AR. If you do find the need to nest content, try to keep the levels of nesting to a minimum. In most traditional 2D UIs, nested content is a given. On a traditional computer, users are fully accustomed to having to click into four or five different nested directories in order to locate a file. However, deep nesting of content can be very confusing to end users, especially in the 3D environment of AR. A user having to navigate in 3D space through multiple nested items will likely quickly grow frustrated with the experience. Shallow nesting and making items easily accessible within the spatial environment should enable users to retrieve content quickly. Limit expandable and hidden menus as much as possible in the AR space.
These patterns may have worked well in the 2D screens of the past, but they aren’t necessarily relevant in the 3D world that AR is trying to emulate. Expandable/hidden menus can introduce a level of complexity that you should avoid, if possible.

The windowed 2D world of current computing UIs has accustomed us to iconography and abstract 2D shapes that represent real-world tools. These icons also can often hide further functionality, such as expandable or hidden menus. However, the world of AR is full of new patterns for users to learn. Try to avoid creating a new system of 2D icons for your AR experiences. These can force users to have to guess and learn a system you’ve created that may not have relevance to them. If a tool is intended to be used within the 3D space of the experience, replace abstract icons or buttons with 3D objects in space that give the user a sense of the tool’s purpose. Look to real-world environments such as drafting desks or art studios for inspiration. Such real-world workspaces can provide examples of how real 3D objects are organized in a physical environment, which is generally what your UI in AR will try to emulate.

Finally, enable your user to personalize and organize her own spaces in a way she finds comfortable, in the same way she may organize her physical desktops or work areas at home or work. This will increase the likelihood that she’ll be comfortable using the system you’ve created.

Understanding text in AR

Carefully consider the legibility and length of text when creating your AR application, and proofread it during testing on as many hardware platforms and in as many environmental conditions as possible. You likely won’t know what type of environment your application will be running in. A very dark area at night? An overly bright room at midday? To make sure text can be seen, consider placing it on a contrasting-colored background. This image shows an example of potentially poor legibility on top of a sub-optimal environment (left), and how that legibility can be resolved for unknown environments via a text background (right).

The text size and typeface (font) can also affect text legibility. In general, you should opt for shorter headlines or shorter blocks of text whenever possible. However, many AR applications are utility based, and sometimes involve consuming large blocks of text, so ultimately designers will have to find a way to make long-form text documents manageable in AR. If long document consumption is required for your application, make sure that the font size is large enough that the user can read it comfortably. (Meta recommends a minimum font size of at least 1 cm in height when the text is 0.5 meter from the user’s eye.) Avoid overly complicated calligraphic fonts. Instead, stick with utilizing simple serif or sans-serif fonts for these large text blocks. In addition, narrower columns of text are preferable to wider columns.

Rapid serial visual presentation (RSVP) speed reading is a method of showing a document to a user a single word at a time. This could prove to be a good way of consuming large blocks of text in AR, because it allows a single word to be larger and more recognizable, instead of forcing your application to account for displaying these large blocks of text. For any informational or instructional text display, try to favor conversational terms that most users would understand over more technical terms that may confuse a user. “Unable to find a surface to place your object. Try moving your phone around slowly” is preferable to “Plane detection failed. Please detect plane.”

AR app design: Testing, testing, 1, 2, 3

AR applications are still defining what makes an interaction good or bad. So, you’ll often need to work from your own assumptions, and then test those assumptions as frequently as possible. Testing with multiple audiences will help reveal what’s working well and what you may need to go back to the drawing board with. When testing your application, give your test users only the same amount of information a standard user of your application would receive. Letting your testers try to use the app without assistance will help prevent you from inadvertently “guiding” them through your application and will result in more accurate test results.
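The Meta guideline quoted above (text at least 1 cm tall when 0.5 meter from the eye) is easy to generalize if you assume that what matters is the angle the text subtends at the eye. The sketch below, in Python, scales the minimum text height linearly with distance under that assumption; the numbers and function names are illustrative, not an official specification.

```python
import math

# Reference point from the guideline cited above: 1 cm tall text at 0.5 m.
REF_HEIGHT_M = 0.01
REF_DISTANCE_M = 0.5

def min_text_height(distance_m):
    """Minimum text height (meters) keeping the same angular size as
    1 cm viewed from 0.5 m, assuming legibility scales with angle."""
    return REF_HEIGHT_M * (distance_m / REF_DISTANCE_M)

def angular_size_deg(height_m, distance_m):
    """Angle subtended by text of a given height at a given distance."""
    return math.degrees(2 * math.atan((height_m / 2) / distance_m))

for d in (0.5, 1.0, 2.0):
    h = min_text_height(d)
    print(f"At {d:.1f} m: text should be at least {h*100:.1f} cm tall "
          f"(~{angular_size_deg(h, d):.2f} degrees)")
```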

Best Practices and Virtual Reality Design Principles

Article / Updated 10-24-2018

Designing for virtual reality (VR) experiences is unlike designing other applications. The immersive nature of VR presents a whole new set of challenges. Consider the following points and best practices when designing for VR.

VR design: Giving the user control

A basic tenet of VR is giving users control over their surroundings. In real life, users are fully in control of how they move and perceive the world around them. Users “lose control” in real life when their movements and their perception of the world around them no longer seem to align. This feeling can be equated to the feeling of being inebriated, or what’s commonly referred to as simulator sickness. Simulator sickness should be avoided at all costs — users hate it, and it will drive them away from your VR product.

You want to ensure your users always feel in control. Their movements should always be mirrored by movement within the virtual environment. Additionally, you should never wrest control away from the user. You don’t want to move the user around without her actions triggering that movement. Also, don’t rotate or reposition a user’s view of the virtual environment. If a repositioning is needed, it is advisable to fade to black for a moment, and then fade back up to your repositioned environment. Although it’s not optimal, fading to black (triggered by a user’s action, of course) and back in can be a way to reposition the user’s environment without your user feeling as if she has relinquished control.

Understanding locomotion in VR experiences

Locomotion in VR has yet to be gracefully solved. One of the strengths of VR is the ability to create compelling environments that a user wants to explore. But it doesn’t matter how compelling an environment is if a user can’t move about to explore it. If your experience is more than a static, seated experience, you need to enable users to move about your space. You can create a method for a user to move forward using a standard, non-VR method, such as a joystick, but this kind of motion is apt to produce nausea. It tends to trigger a feeling of acceleration, which in turn triggers simulator sickness.

When adding movement to your VR app, ask yourself how movement is enhancing the user’s VR experience. Unnecessary movement can be disorienting to users. Focusing on what value movement adds to the experience can help strengthen your VR app. Many applications find ways for users to be grounded on some sort of machine or platform, and then move the platform itself rather than the user. This can help alleviate some of the potential issues of simulator sickness, especially if the user remains seated.

For room-scale VR experiences, “teleportation” is one of the current standards for smoothly moving users large distances in virtual worlds. The user aims at the place they would like to move to, some sort of graphic appears to define the target destination, and then the user triggers the teleportation. This image shows how a user in Vive’s headset can teleport around the Vive home scene. Holding down the touchpad displays a graphic to the user defining where she’ll teleport to if teleportation is triggered. A user can then choose to trigger the teleportation event, moving her to the new location, or cancel the teleportation event.

Locomotion is very much an evolving best practice for VR, and one that is going to require plenty of exploration for what works best for your application. Application developers are implementing and improving upon this mechanic in a number of ways.
Robo Recall, a game for Oculus Rift, enables the user to determine the direction he’ll be facing when he arrives at his teleportation location, instead of just teleporting him straight to the location in whatever direction he’s currently looking. Budget Cuts, a game by publisher Neat Corp, gives the user the ability to peek at his destination and how he’ll be facing before he teleports, removing the confusion that can often occur when a user teleports to a new location.

And teleportation is not the only method of locomotion available. Many applications offer standard “walking” locomotion to users. Smooth locomotion, or sliding through virtual environments without jerky acceleration, can help retain some immersion of a standard method of movement with some of the potential “simulator sickness” triggers minimized.

Other solutions for locomotion within a limited space are also being explored. Saccade-driven redirected walking is a method of redirecting users away from real-world obstacles that allows users to traverse large virtual scenes in a small physical space. In saccade redirection, the virtual scene is rotated slightly in a way invisible to the user, causing the user to alter his walking slightly in response to the digital scene changes. For example, utilizing this method, a user may think he’s walking in a straight line in the digital world, but in the physical world he’s guided on a much more circular path.

Large-scale movement in VR is a mechanic that has yet to be completely solved. Teleportation is often used, but it’s only one of many possible solutions for motion. If your application requires movement, review other applications and their methods of locomotion and see what you think makes sense. You may even be the one to come up with the new standard of motion for VR experiences!

VR design: Providing user feedback

In the real world, a person’s actions are usually met with some sort of feedback, visual or otherwise. Even with your eyes closed, touching a hot stove provides the tactile feedback of a burning sensation. Catch a thrown ball, and you feel the smack of the ball against your palm and the weight of the ball in your hand. Even something as simple as grasping a doorknob or tapping your finger on a computer key provides tactile feedback to your nervous system.

VR doesn’t yet have a method for fully realizing tactile feedback, but you can still find ways to provide feedback to the user. If available on the VR device you’re targeting, haptic feedback (via controller vibrations or similar) can help improve the user’s immersive experience. Audio can also help notify the user of actions (when a user clicks a button, for example). Providing these audio and haptic cues alongside your visuals can help make your VR environments seem more immersive and help notify a user when actions have occurred.

Following the user’s gaze in VR design

Knowing where a user’s gaze is centered is a necessary part of VR interactions, especially in the current versions of head-mounted displays (HMDs) that don’t provide eye tracking. Many VR applications rely on a user’s gaze for selection. In order to utilize gaze, you may want to provide a visual aid, such as a reticle, to help a user target objects. Reticles are typically visually distinct from the rest of the environment in order to stand out, but small and unobtrusive enough to not draw the user’s attention away from the rest of the application.
Reticles should trigger some sort of indication to the user as to what elements are interactive within the environment. The image below shows a reticle being used for selection in PGA’s PGA TOUR VR Live application. Without motion controllers, the reticle enables the user to see what interactive item her gaze should be triggering. Depending on your particular VR implementation, you may also choose to display a reticle only when a user is close to objects with which she can interact. This allows a user’s focus to be undisturbed by the extra visual information of a reticle when focused on things that she can’t interact with at the moment.

Not every VR application needs a reticle. When using motion controllers to select or interact with objects outside of a user’s reach, a reticle is typically discarded in favor of a laser pointer and cursor for selection. You could just display the cursor, but you’re better off displaying a combination of a virtual model of the controller, a laser ray, and the cursor all together. Doing so helps users notice the motion controller and cursor, helps communicate the angle of the laser ray, and provides real-time feedback and an intuitive feel to the user about how the orientation of the motion controller can affect the input of the ray and cursor. The image below displays a motion controller and laser pointer in use in Google Daydream’s home menu scene.

Avoiding simulator sickness in VR design

Simulator sickness is the feeling of nausea brought on by a mismatch between the user’s physical and visual motion cues. At its simplest, your eyes may tell you that you’re moving, but your body disagrees. Nothing will make a user leave your app more quickly than the feeling of simulator sickness. There are a number of ways to avoid simulator sickness:

- Maintain application frame rate. Sixty frames per second (fps) is generally considered the minimum frame rate at which VR applications should run in order to prevent simulator sickness in users. If your app is running at less than 60 fps, you need to find ways to get back to at least 60 fps. Maintaining this frame rate is likely the most important tip to follow, even if it means cutting other portions of your application.

- Maintain continuous head tracking. Head tracking in VR refers to the application continuously following the motion of your head and having those movements reflected within the virtual environment. Aligning your application’s virtual world positioning with a user’s real-world head movements is vital to avoiding simulator sickness. Even a slight pause while tracking a user’s movements can induce motion sickness.

- Avoid acceleration. In the real world, our bodies notice acceleration far more than we notice movement at a constant velocity. While you’re traveling in a car going 65 mph on a highway, you may not feel any different than if you were sitting on a park bench. However, your body definitely feels the difference of the acceleration from zero to 65 mph. Acceleration or deceleration in the real world provides a visual change as well as a sensation of motion to the end user. VR, however, provides only a visual update. This lack of a sensation of motion in VR can trigger simulator sickness. Avoid accelerating or decelerating a user in VR. If movement within the space is required, try to keep users moving at a constant velocity (a short sketch at the end of this article illustrates the difference).

- Avoid fixed-view items. Any graphic that “fixes” itself to the user’s view can trigger the feeling of nausea. In general, keep all objects in 3D while in VR instead of fixing any items to the user’s 2D screen.

More VR best practices to consider

Here are a few more useful best practices for colors, sounds, and text usage, all of which can affect VR user experiences:

- Bright colors and environments: Imagine the feeling of leaving a darkened theater and walking out into a bright sunny day. You find yourself shielding your eyes against the glare of the sun, squinting and waiting for your eyes to adjust. In VR, the same feeling can be triggered by quickly changing from any dark scene to a bright scene. Immediate brightness changes from dark to light can annoy and disorient users, and unlike stepping out into bright sunlight, when in a headset a user has no way of shielding her eyes from the glare. Avoid harsh or quick changes from darker scenes to lighter scenes or items. Extremely bright colors and scenes can be difficult to look at for an extended period of time and can cause eye fatigue for your users. Be sure to keep scene and item color palettes in mind when building out your experiences.

- Background audio: VR applications should be immersive. In the real world, audio plays a huge part in helping you to determine your environment. From the bustling noises of a busy street, to the white-noise hum and background noises of an office environment, to the echoing silence of a dark cave, audio cues alone are often enough to describe an environment. Make sure to consider how not only event-based audio (such as audio triggers on user interaction) but also background audio will play a role in your experiences.

- Text input and output: When in VR, users are surrounded with visual information from the environment. Adding large blocks of text to this environment can overload the user with input. Where possible, avoid using large blocks of small-font text. Short text excerpts rendered in large print are typically preferred. Similarly, it can be difficult for a user in VR to input a large amount of text. Text input in VR has yet to be completely solved. If text input is a requirement of your application, consider carefully how this can occur in the VR space.
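Returning to the "avoid acceleration" item above, the sketch below shows, in engine-agnostic Python, the difference between easing a user toward a destination (which introduces acceleration and deceleration) and moving at a fixed velocity until arrival. The function names and per-frame structure are illustrative assumptions, not a specific engine's API.

```python
def step_constant_velocity(position, target, speed, dt):
    """Advance toward target at a fixed speed (m/s); no acceleration phase."""
    dx = target - position
    step = speed * dt
    if abs(dx) <= step:          # arrived this frame
        return target
    return position + step if dx > 0 else position - step

def step_eased(position, target, dt, smoothing=0.9):
    """Exponential ease toward target; velocity changes every frame,
    which is exactly the kind of acceleration the guidance above warns against."""
    return position + (target - position) * (1 - smoothing ** (dt * 60))

# Usage sketch along one axis: prefer the constant-velocity version in VR.
pos = 0.0
for _ in range(5):
    pos = step_constant_velocity(pos, target=10.0, speed=2.0, dt=1 / 60)
print(f"constant velocity after 5 frames: {pos:.3f} m")
```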

Virtual Reality Design Principles: Starting Up, User Attention, and Comfort Zones

Article / Updated 10-24-2018

When designing for virtual reality (VR), it’s important to follow best practices to optimize the user experience. The term design principles refers to a set of ideas or beliefs that are held to be true across all projects of that type. For VR, these principles vary from traditional design. Some examples of design principles within two-dimensional design include designing on a grid or creating a visual hierarchy of information to direct users to the most important information first. These principles, or agreed-upon standards, are created over many years, after much experimentation and trial and error. And although these principles can be broken, they should be broken only for good reason.

Because VR is such a new field, those developing VR content are still in the process of discovering what its design principles are. Often, in order to find out what design principles work well, you have to find out what does not work well. Best practices and standards will emerge over time as the VR community grows and more mass consumer VR applications are produced. In the meantime, there are a number of generally agreed-upon standards for VR, regardless of the platform for which you may be designing.

Best practices for starting up your VR experience

Upon initially entering an experience, users often need a moment to adjust to their new virtual surroundings. A simple opening scene where users can adjust to the environment and controls can be a good way to help them acclimate to your experience. Allow users to acclimate themselves to your application and move into your main application experience only when they’re ready.

This image shows how the game Job Simulator handles startup. Job Simulator’s entry screen establishes a clean environment and asks the user to complete a simple task, similar to the controls used within the game, in order to start the game. This gives the user time to adjust to the game environment and get accustomed to the controls that she’ll use in the game.

Focusing the user’s attention in VR

VR is much less linear than experiences within a traditional 2D screen. In VR, you must allow the user the freedom to look around and explore the environment. This freedom of exploration can make it difficult when you need to attract the user’s attention to certain portions of your application. A director in a 2D movie can frame the user’s vision exactly where he wants it. As the director within a 3D space, however, you have no idea whether the user will be facing your main content or focused on some other part of the scene.

You cannot force a user to look in a certain direction — forcing a user’s view in VR is one of the quickest ways to trigger simulator sickness. However, there are a number of ways to focus the user’s attention where you want it. Subtle 3D audio cues can guide a user to the area where action is occurring. Lighting cues can be used as well. For example, you can draw the user’s attention by brightening the parts of the scene that you want them to look at and darkening parts that you want to deemphasize. Another way is to reorient the content itself within the app to match the direction the user is facing.

In what is perhaps the easiest solution, some applications simply put messaging within their 3D environment instructing the user to turn around and face wherever they want the user’s attention to be focused. This technique is also used in room-scale games in which a user may only have a limited number of sensors available to track his motion in the real world.
It can be easy to get turned around in room-scale VR, and putting up a message can help a user re-orient himself in relation to the real-world sensors. The image below shows this method in use in the game Robo Recall. The messaging is blunt, but it gets the point across for where the user should focus.

Whichever way you choose to handle focusing the user’s attention, realize that in VR users must have freedom of choice. That freedom of choice can conflict with what you may want them to do. Finding ways to allow that freedom of choice while also focusing the user where you want him is a vital part of a well-designed VR experience.

Understanding the comfort zone in VR

With traditional 2D design, user interface (UI) has been restricted to certain canvas sizes. Whether it’s the size of the browser or the size of the monitor, something has always placed a limit on the dimensions in which your user interface could exist. VR removes those restrictions. Suddenly a 360-degree canvas is at your disposal to design with! UI can be anywhere and everywhere! Before you start throwing interface elements 360 degrees around your users, there are a number of best practices to keep in mind for making your experience comfortable. If a user must rotate her head too much, strain to read interface text, or flail her arms about in an attempt to use your UI, it will most likely lead to a poor VR experience and cost you users.

Alex Chu of Samsung Research, in his presentation “VR Design: Transitioning from a 2D to a 3D Design Paradigm,” provides a number of measurements for the minimum, optimal, and maximum distances at which objects should appear away from a user. In the presentation, Chu discusses optimal distances for 3D object presentation. As objects get closer to your face, your eyes begin to strain to focus on them. Around 0.5 meter from the user and closer is typically the distance at which this strain begins to occur; Oculus recommends a minimum distance of at least 0.75 meter in order to prevent this strain from occurring. Between that minimum distance and about 10 meters is where the strongest sense of stereoscopic depth perception occurs. This effect begins to fade between 10 and 20 meters, and after 20 meters the effect basically disappears.

These limitations give you an area between 0.75 and 10 meters in which you should display your main content to the user. Content any closer will cause eye strain for your users, and anything farther out will lose the 3D effect you’re trying to achieve. As the resolution of VR headsets improves, the stereoscopic effect may be retained farther from the user, past the 20 meters or so at which the effect disappears today. For now, however, the 20-meter mark is still a good rule of thumb for content design.

Google VR Designer Mike Alger, in his “VR Interface Design Pre-Visualization Methods” presentation, also discusses the range of motion through which users can comfortably rotate their heads horizontally and vertically. Chu and Alger both mention that the range users can comfortably rotate their heads horizontally is 30 degrees to each side, with a maximum rotation of 55 degrees. Combined with the field of view (FOV) of the higher-end, tethered headsets (averaging around 100 degrees), this gives a user a range of around 80 degrees to each side for comfortable viewing of the main content, and around 105 degrees to each side for peripheral content. When displaying content to your users, focus on keeping your main content within the user’s horizontal comfort zone of viewing.
As the FOV of headsets improves, these values will change to allow further visibility to the side. However, it is worth noting that most headsets (with a few exceptions, such as Pimax) seem to be unconcerned with greatly improving FOV in the upcoming second generation of devices. Regardless, you’ll be able to use the same calculations to determine the comfortable viewing area yourself in the future.

Similarly, there is a comfortable range of motion for users to rotate their heads vertically. The comfort zone here is around 20 degrees comfortably upward, with a maximum of 60 degrees upward, and around 12 degrees comfortably downward, with a maximum of 40 degrees. Most VR headsets publish only their horizontal FOV, not their vertical FOV, so 100 degrees can be used as an approximate average vertical FOV when combining head rotation with the headset’s FOV; on some headsets, the actual vertical FOV may be even smaller. Although horizontal head movements are a small annoyance, vertical head rotation can be extremely taxing for a user to hold for long periods of time. As a best practice, try to keep the user’s vertical head rotation to a minimum for the most comfortable user experience.

Using the preceding information, you can establish a set of guidelines for placing VR content relative to the user. You can place content wherever you like, of course, but important content should stay within the areas where the horizontal, vertical, and viewing-distance comfort zones converge. Content in areas outside of these zones is less likely to be seen. If you’re creating content that is meant to be hidden, or only discoverable through deep exploration, areas outside of the comfort zone can be good areas to place that content. However, avoid keeping your content there once discovered. If a user has to strain for your content, he won’t stick around in your app for long.
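The comfort-zone numbers above translate directly into a simple placement check. The sketch below, in Python, tests whether a point expressed in a user-relative frame (x is right, y is up, z is forward; an assumption for illustration) falls inside the comfortable band: roughly 30 degrees of horizontal rotation to either side, 20 degrees up to 12 degrees down, and 0.75 to 10 meters away. In a real application you would compute these angles relative to the headset's current forward vector, and you might widen the angular limits by the headset's FOV for peripheral content.

```python
import math

# Comfort thresholds taken from the figures cited above.
H_COMFORT_DEG = 30.0      # comfortable horizontal rotation to each side
V_UP_COMFORT_DEG = 20.0   # comfortable upward rotation
V_DOWN_COMFORT_DEG = 12.0
MIN_DIST_M = 0.75         # closer than this strains the eyes
MAX_DIST_M = 10.0         # farther than this weakens the stereoscopic effect

def in_comfort_zone(x, y, z):
    """x = meters right, y = meters up, z = meters forward of the user."""
    dist = math.sqrt(x * x + y * y + z * z)
    if not (MIN_DIST_M <= dist <= MAX_DIST_M):
        return False
    horizontal_deg = math.degrees(math.atan2(abs(x), z))
    vertical_deg = math.degrees(math.atan2(y, math.hypot(x, z)))
    if horizontal_deg > H_COMFORT_DEG:
        return False
    return -V_DOWN_COMFORT_DEG <= vertical_deg <= V_UP_COMFORT_DEG

# Usage sketch: a panel 2 m ahead and slightly below eye level is comfortable;
# one directly overhead is not.
print(in_comfort_zone(0.0, -0.2, 2.0))  # True
print(in_comfort_zone(0.0, 2.0, 1.0))   # False (far above the comfortable range)
```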

Virtual Reality Devices: Pimax, Looxid, and Varjo

Article / Updated 10-24-2018

Virtual reality (VR) devices are still in their infancy. But if you are hoping to get a feel for what current VR devices have to offer, you might give Pimax, Looxid, or Varjo a try. Take a peek to see what these VR devices have to offer.

VR devices: Pimax 8K

Pimax is a Chinese startup that appeared on Kickstarter in 2017 and surprised many with a claim that the company had plans to release the world’s first 8K headset. While the “8K” claim is a bit of a marketing trick (in that its two 3,840-x-2,160-pixel displays don’t actually make the headset 8K), many have been impressed with the visuals offered by its two 4K displays. The Kickstarter offering was a roaring success, with an original goal of $200,000 being obliterated as pledges shot past $4.2 million.

The human eye’s natural FOV is around 200 degrees. Pimax’s claim of a 200-degree FOV has yet to be verified, but by all accounts the FOV of the headset stands well above the FOV of almost every other current-generation and even most next-generation HMDs. Early reviews indicate that the unique lenses and insanely high-resolution screens give the feeling of the world wrapping around you as you would experience in real life.

FOV aside, Pimax also offers a number of other unique modules the company hopes to bring to its headset, such as eye tracking (the ability of the headset to monitor your eye movement and adjust based on where you’re looking), inside-out tracking, hand tracking, and even scent enabling (yes, just what it sounds like). It also offers a number of the things you would expect to exist in the current generation of headsets — positional tracking via base stations, motion controllers, and so on. Pimax is compatible with OpenVR as well, meaning it can be used with other items that follow the OpenVR specification (such as the Vive controllers). OpenVR is a software development kit (SDK) and application programming interface (API) built by Valve for supporting SteamVR (powering the HTC Vive) and other VR headsets.

Early reviews praised the increased FOV of the Pimax, but they were also careful to point out some of the Pimax’s current issues. Positional tracking of the headset itself in space and tracking of the controllers are cited as some of the kinks that may need to be ironed out in order for the Pimax to reach its full potential. Due to its incredibly high resolution, the Pimax will also require a very high-end computer and graphics card to adequately power the experience. The Pimax also currently requires tethering to that computer, eschewing the wireless direction many next-generation headsets appear to be targeting. Final price and release date will hopefully be available soon, though Pimax is hoping to ship to Kickstarter backers in 2018 and has indicated the price range will be approximately $400 to $600.

Technology releases at mass-consumer scale become a numbers game of finding the price point consumers will pay. This price point can often determine the features and specifications you build into your headset. Although it may be possible to manufacture a true 8K headset with a full 200-degree FOV and full inside-out tracking (and, in fact, such a headset may exist already in the enterprise world), a consumer-scale release of such a headset is cost-prohibitive at this time, and the market for purchasing it likely does not exist. It remains to be seen whether Pimax will be able to iron out the issues that currently exist with the headset while retaining what makes the headset unique.
But companies such as Pimax that seek to push the envelope are a good thing for the industry as a whole. Regardless of whether the Pimax 8K becomes a success, it does signal a next step forward for VR in attempting to remove yet another barrier between the medium and full immersion.

There are any number of VR headsets coming out (or even already released) that could be compared to the headsets on this list. For example, StarVR is a headset with similar specs to the Pimax 8K in terms of FOV and refresh rate. Currently, StarVR is enterprise-level hardware, whereas Pimax is looking to target the mass consumer market. If you’re developing VR applications, especially those targeted at enterprise-level customers, be sure to research all the potential options available.

VR devices: LooxidVR

Looxid Labs is a startup responsible for the LooxidVR system, a phone-based VR headset created to capture insights into human perception within VR. The LooxidVR headset incorporates both EEG sensors to measure brain waves and eye-tracking sensors to determine what a user is looking at. Combining this data could allow for better understanding of users’ emotional reactions to various stimuli and could lead to more immersive experiences.

Individual VR consumers are not Looxid’s current target. You likely won’t find yourself buying a LooxidVR device for single-use consumption anytime soon. However, by selling its system to researchers and businesses, Looxid could begin to have a deep impact on the VR industry as a whole. The Looxid system could find a great deal of use in the healthcare industry, particularly in therapy and in measuring users’ responses to mental trauma. It could also be used in gaming, with games modifying their gameplay based on your biometric response. Is a certain area of the game causing you stress as measured by Looxid? Perhaps the game will modify itself to make that area easier. Playing a horror game where one section elicits elevated responses? The game could modify itself to include more of whatever it is that seems to be triggering that response from you, making it even more intense.

With its incorporation of both eye tracking and brain monitoring, the Looxid system could also find uses as a powerful tool for advertising and user analytics. Advertising is a field that VR has yet to unravel, but many are attempting to do so, as the payoff could be huge. Google has begun to experiment with what advertising in VR could look like. Unity has started to experiment with VR ads as well, putting forth the idea of “Virtual Rooms,” which would provide separate branded experiences included in users’ main applications. With Looxid’s system, it will be possible to capture analytic data from these advertisements deeper than any current VR offering, including how well these ads succeed with their target markets.

Unity’s “Virtual Room” ad technology is Unity’s answer to how advertising in VR should look. The Virtual Room is a VR-native ad format Unity is creating in conjunction with the Interactive Advertising Bureau. The Virtual Room will be a fully customizable mini application that appears within your main VR application. A user can choose to interact with the Virtual Room or ignore it.

VR devices: Varjo

Varjo is notable in its claim that its current headset can offer an effective resolution of 70 megapixels (human-eye resolution) in VR, whereas most current-generation headsets sit at around 1 or 2 megapixels.
Varjo aims to accomplish this by utilizing eye tracking to follow where a user is looking and rendering the highest resolution only for that space, with items in the user's peripheral vision rendered at a lower resolution. The Varjo headset is still in prototype mode, but the company hopes to release a beta version of its headset to the professional market in late 2018 and follow up with a consumer-market release. Things like production volume and final design are yet to be determined, but the initial messaging from the company lists the professional headset as "under $10,000." That price doesn't inspire confidence just yet, but you'd be wise to keep an eye on the technology and see whether other manufacturers take note and begin incorporating foveated rendering techniques within their own headsets.
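The foveated rendering idea behind Varjo's approach is straightforward to express in code. The sketch below is a minimal, hypothetical illustration, not Varjo's actual pipeline; the function name and the degree thresholds are assumptions. It picks a render scale for a region of the display based on how far that region sits from the tracked gaze point, so only the area the eye is focused on pays the full-resolution cost.

```python
import math

def render_scale(region_center, gaze_point, inner_deg=5.0, outer_deg=20.0):
    """Choose a resolution scale for a screen region based on its angular
    distance from the user's gaze point (both in degrees of visual angle).

    Hypothetical thresholds: full resolution within ~5 degrees of the fovea,
    quarter resolution beyond ~20 degrees, with a linear falloff in between.
    """
    dx = region_center[0] - gaze_point[0]
    dy = region_center[1] - gaze_point[1]
    angle = math.hypot(dx, dy)  # angular offset from the gaze point, in degrees

    if angle <= inner_deg:
        return 1.0   # foveal region: render at full resolution
    if angle >= outer_deg:
        return 0.25  # far periphery: render at quarter resolution
    # Smooth falloff between the foveal and peripheral regions
    t = (angle - inner_deg) / (outer_deg - inner_deg)
    return 1.0 - 0.75 * t

# Example: a region 12 degrees away from where the eye tracker says
# the user is looking gets rendered at roughly 65% resolution.
print(round(render_scale((12.0, 0.0), (0.0, 0.0)), 2))
```

An actual headset would apply this decision per eye inside the rendering pipeline (for example, by varying shading rate or render-target resolution), but the core idea is the same: spend pixels where the fovea is pointed.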

Virtual Reality Devices: Lenovo and Oculus

Article / Updated 10-24-2018

If you plan to dip your toe into the virtual reality (VR) waters, you probably are curious about what kind of VR devices you can lay your hands on. Two of the major companies to emerge in the VR world are Lenovo and Oculus. Take a look at their VR device offerings.

VR devices: Lenovo Mirage Solo

The Lenovo Mirage Solo is similar to the HTC Vive Focus. It's an all-in-one standalone headset that doesn't require any extra computers or mobile devices to power it. Like the Vive Focus, the Mirage Solo allows 6DoF via a pair of front-facing cameras that enable inside-out positional tracking. This setup lets you wirelessly move about virtual worlds the same way you navigate the real world. With a built-in display, a 3DoF controller, and positional tracking using Google's WorldSense technology, the Mirage Solo eliminates the need for any external sensors. The headset is also built on top of Google Daydream technology, allowing it to tap into Google's existing Daydream ecosystem of applications.

The Mirage is set to release sometime in the second quarter of 2018. The original price point was set above $400, but Lenovo has since adjusted this and is now aiming for a price below $400. It will be interesting to see where the price finally lands. It's clear, however, that companies such as Lenovo are keeping an eye on the casual market to try to determine the correct price point to target that market.

VR devices: Oculus Santa Cruz

The Oculus Santa Cruz was originally announced at Oculus Connect 3 in 2016. Oculus seems to be positioning this new product as a mid-tier headset similar to the Vive Focus and Lenovo Mirage Solo. It promises a higher-end VR experience than current mobile VR models such as the existing Gear VR or upcoming models such as the Oculus Go, but it doesn't quite deliver the same level of experience as the PC-powered Oculus Rift. Oculus co-founder Nate Mitchell has confirmed this to Ars Technica, framing the Santa Cruz as the mid-tier product in Oculus's three-headset strategy for VR hardware.

Like the Focus and Solo, the Santa Cruz is a self-contained VR headset. Instead of being powered by an external device, it contains everything you need in the headset itself. No more tripping over external wires or cords leading to your device. It purportedly will allow 6DoF motion and positional tracking via the headset's inside-out tracking. Similar to tethered headsets that rely on sensors, Santa Cruz will display a virtual grid if you come too close to a physical barrier such as a wall.

The Santa Cruz also appears to be designed to utilize a pair of 6DoF wireless controllers, which could put its motion controls a cut above other mid-tier current- and next-generation wireless headsets whose controllers allow only 3DoF. The four cameras arrayed around the edges of the Santa Cruz headset allow for a very large area in which to track the controllers' position. Some current-generation headsets that utilize inside-out tracking for their controllers can lose tracking if the controllers move too far out of the line of sight of the headset sensors. Oculus appears to have taken steps to solve this issue with the Santa Cruz. The Santa Cruz appears to be following other headsets in regard to audio as well.
The Go and Santa Cruz both utilize a new spatial audio system that, instead of relying on headphones, places speakers on the sides of the headset, allowing audio to be broadcast not only to the HMD wearer but also to the rest of the room. There is still a 3.5mm audio jack for those who prefer headphones, but the convenience of speakers is a nice touch.

On the surface, the Santa Cruz sounds like a promising device. The biggest question marks surrounding the Santa Cruz at the moment are its timeline and final specs. Oculus has been silent on final product specifications, release date, and so on. Oculus is shipping devices to developers in 2018, which would lead most to believe that the final hardware specifications are close to being locked in. Based on previous products' timelines between developer release and final release, a good estimate for a consumer release date for the Santa Cruz would be early 2019, though only Oculus knows its final release date.

With the current generation of VR headsets, audio consumption has often been cited as one of the reasons VR experiences feel like solitary experiences. Most current-generation headsets come with or require a set of headphones to experience what's happening in VR. This can lead to a very immersive experience for the wearer but effectively shuts the wearer off from the outside world, completely covering his eyes and ears. Many of the newer headsets appear to be leaning toward implementing speakers on the headset itself alongside headphone audio ports. This will enable the headset wearer to keep at least an auditory connection to the outside world, as well as let others hear what the wearer is currently experiencing.

VR devices: Oculus Go

Oculus appears to be targeting a different crowd with its Oculus Go standalone headset. Whereas the Mirage, Focus, and Santa Cruz all appear to be positioned as mid-tier options between the existing desktop and mobile VR markets, the Go looks to take over (while up-leveling) the current mobile VR experience. The Go is a standalone headset that doesn't require a mobile device. It offers 3DoF, providing rotational and orientation tracking but not the ability to physically move backward or forward in space. This makes the Go better suited to seated or stationary experiences.

Although the Go doesn't appear to offer a number of the features that the Mirage and Focus do (most notably 6DoF tracking), Oculus likely hopes to use a lower price point (around $200) to lure in entry-level VR users who may have previously considered purchasing a Gear VR or Google Daydream for mobile VR experiences. This image depicts the new form factor of the standalone Oculus Go headset.

Similar to other standalone headsets such as the Mirage and Focus, the Go controller will provide 3DoF tracking. Much simpler than the current Rift motion controllers, the Go controller aligns itself more closely with the current Daydream or Gear VR options. The Go will have its own library of games but will also offer support at launch for many of the existing Gear VR titles. This image displays the new Oculus Go controller. The Go controller will be slightly different in form but offer the same functionality as Samsung's current Gear VR controller.
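The 3DoF-versus-6DoF distinction that separates the Go from the Mirage Solo, Focus, and Santa Cruz comes down to how much of the wearer's movement the headset can report: a 3DoF pose carries only orientation (pitch, yaw, roll), while a 6DoF pose adds position in space. The minimal sketch below illustrates the difference in data terms; the type names and the identity-quaternion convention are illustrative assumptions, not any vendor's SDK.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Pose3DoF:
    # Orientation only: enough to look around, but head movement
    # through space is not reflected in the virtual camera.
    orientation: Tuple[float, float, float, float] = (0.0, 0.0, 0.0, 1.0)  # quaternion (x, y, z, w)

@dataclass
class Pose6DoF(Pose3DoF):
    # Adds translation: leaning, crouching, and walking move the
    # virtual camera, which is what inside-out positional tracking provides.
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # meters, in the tracking space

def camera_translation(pose) -> Tuple[float, float, float]:
    """A 3DoF headset effectively pins the camera in place; a 6DoF
    headset moves it with the wearer."""
    return getattr(pose, "position", (0.0, 0.0, 0.0))

# Leaning 20 cm forward registers on a 6DoF device but not on a 3DoF one.
print(camera_translation(Pose3DoF()))                           # (0.0, 0.0, 0.0)
print(camera_translation(Pose6DoF(position=(0.0, 0.0, -0.2))))  # (0.0, 0.0, -0.2)
```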

The Technology Hype Cycle: Is Virtual and Augmented Reality Just Hype?

Article / Updated 10-24-2018

As we look at how virtual reality (VR) and augmented reality (AR) will impact our world, we need to consider the technology hype cycle. Technological waves go through various peaks and troughs before they reach mass consumer adoption. Information technology research firm Gartner once proposed what it called the Gartner Hype Cycle, a representation of how the expectations around transformative technologies play out upon release. The Gartner Hype Cycle can help predict how a technology will be adopted (or not) over time. Both the Internet (with the dot-com crash) and mobile pre-2007 went through similar (if not exactly analogous) market curves.

In the beginning, an Innovation Trigger kicks off interest in the new technology, triggered by early proof-of-concepts and media interest. Next is the Peak of Inflated Expectations. Buoyed by the early work and media buzz, companies jump in with higher expectations than the technology can yet deliver upon. What follows is the Trough of Disillusionment, where interest in the technology begins to dip as implementations of the technology fail to deliver on the lofty expectations set by the initial Innovation Trigger and media buzz. The Trough of Disillusionment is a difficult space for technology, and some technologies may die out in this space, never fulfilling their initial promise. Those technologies that are able to weather the storm of the Trough of Disillusionment reach the Slope of Enlightenment, as second- and third-generation products begin to appear and the technology and its uses are better understood. Mainstream adoption begins to take off, often paying dividends for the early adopters able to see their way through the trough with their ideas and executions intact. Finally, we reach the Plateau of Productivity, where mass adoption truly begins, and companies able to weather the stormy waters of the hype cycle can see their early adoption pay off.

Determining where VR and AR are in this cycle can be useful in making your decisions on how to approach these technologies. Does it make sense for your business to jump into VR and AR technologies now? Or are things not ready for prime time, and should you perhaps hold off for a few more years? Gartner claimed that, at the end of 2017, VR was just leaving the Trough of Disillusionment and heading into the Slope of Enlightenment, with a payoff of mass adoption within two to five years. AR, on the other hand, was listed by Gartner as still wallowing in the Trough of Disillusionment, putting mass adoption for AR at a more conservative five to ten years out.

Though the Trough of Disillusionment sounds like an ominous place for AR to be, it's a necessary phase for technology to pass through. Innovative technology, before hitting consumers' hands, needs to go through the grind of establishing an identity and determining where it fits in the world. Manufacturers need to figure out what problems VR and AR solve well and what problems these technologies do not solve well. That often requires numerous trials and failures to discover. AR as a mass consumer device is in its adolescence. Manufacturers and developers need time to figure out what form factor it should exist in, what problems it can solve, and how it can best solve them. Rushing a technology to market before these questions can be answered often causes more problems than it solves, and it's something that manufacturers of any emerging technology, including VR and AR, should be wary of.
Further, Gartner released this Hype Cycle report for VR and AR less than one month after Apple's ARKit announcement and a full month ahead of Google's ARCore announcement. An argument could be made that those two releases technically triggered mainstream adoption purely through the installed base of ARKit and ARCore. However, that feels slightly disingenuous. Installed base alone does not automatically equal mainstream adoption (though it is a large piece of the puzzle). When using a technology becomes frictionless and nearly invisible to the end user, when using that technology becomes as second nature as starting up your web browser, checking your email on your mobile device, or texting a friend, that is when a technology has truly hit mainstream adoption. Neither technology has yet reached this level of ubiquity, but both are looking to hit their stride.

In the long run, VR and AR hold the same promise as earlier technological waves such as the personal computer and the Internet. The time to take action on these technologies is now, whether that means simply researching what they can do for you, diving into purchasing a device for your own consumption, or even beginning to create content for VR and AR.

10 Augmented Reality Mobile Apps

Article / Updated 10-24-2018

One of the biggest challenges that virtual reality (VR) and augmented reality (AR) face today is the lack of consumer device availability. This is especially true for AR, where the best form factor experiences (glasses or headsets) are out of reach for all but the most dedicated early-adopter tech enthusiasts. Luckily, the rise of mobile AR has given way to a number of AR apps for mobile devices. These apps may not provide the optimal hardware form factor, but they can start to paint a picture for users about what sorts of problems AR will be able to solve. Here, you take a look at ten (or so) AR apps you can experience today with little more than an iOS or Android device.

Due to the mobile form factor, some of these apps may not represent the ideal way to experience AR. However, the engineers at Apple and Google have done an incredible job of shoehorning AR into devices that were not originally built for AR experiences. As you review these apps, imagine how these experiences could be delivered to you in the future: in high fidelity via a pair of unobtrusive AR glasses. Consider what benefits that change in form factor could offer and how it could improve these already interesting executions. That's the promise of near-future AR.

Mobile AR is an amazing feat of engineering, and it has many intriguing use cases. However, AR's ultimate form factor is likely an execution that feels less obtrusive and can be used hands-free. Keep this in mind when reviewing any mobile AR experiences. What may feel a bit awkward or strange to use today may be only a form-factor upgrade or two away from incredible.

Google Translate

Google Translate is a wonderful example of the power of AR. And not because it stretches AR to its visual technical limits — it doesn't. Its visuals are simple, but its inner workings are technically complex. Google Translate can translate signs, menus, and other text-based items in more than 30 different languages. Simply open the app and aim the device's camera at the text you want to translate and — voilà! — you get an instant translation digitally placed on top of the original block of text. This image shows a screenshot of Google Translate in use. A sign written in Spanish is seamlessly translated to English on the fly via the Google Translate app, which replaces the Spanish characters on the sign with a similar English font.

Imagine traveling in a foreign country equipped with a pair of AR glasses powered by Google Translate. Signs and menus that previously were nothing but strings of unrecognizable characters become instantly readable in the language of your choice. Google Translate includes automatic audio translation, too. Imagine those same AR glasses paired with unobtrusive headphones (which, unsurprisingly, Google also manufactures) translating foreign-language audio on the fly. Both the audio and visuals "augment" your current reality and make language barriers a thing of the past. Google Translate is available for both iOS and Android devices.

Amazon AR View

One of the obvious questions that AR can help answer is: "How does this item look in real life?" A number of companies have attempted AR implementations of their physical catalogs, but these have generally been confined to larger items such as furniture. Furniture and other large pieces can be notoriously difficult to shop for online — picturing how these large items may look in your home can be tough. Retail giant Amazon recently added an AR feature called AR View to its standard shopping app.
Amazon's AR View enables you to preview thousands of products in AR — not just larger furniture pieces, but toys, electronics, toasters, coffee makers, and more. Open the Amazon app, select AR View, navigate to the product you want to view, and place it in your space via AR. You can then walk around it in three dimensions, check out the sizing, and get an idea of the object's look and feel within your living space. The image below shows Amazon's AR View in action, displaying a digital Amazon Echo Look via AR in a real environment.

Only a small subset of Amazon's offerings is currently available in AR View, but that will likely change soon. The main drawback of Amazon being an online-only store is that customers can't experience physical products as they would at brick-and-mortar locations. AR might allow Amazon to help alleviate that issue. You can imagine Amazon requesting that a majority of its vendors' catalogs be digitized in order for users to be able to experience their product listings digitally via AR. Amazon AR View is available for iOS devices and is coming soon to Android.

Blippar

Blippar is a company with a lofty goal: to be the company that bridges the gap between the digital and physical world via AR. Blippar envisions a world where blipp becomes a part of our everyday lexicon in the same way you may use Google as a verb today: "Just Google it!" To Blippar, a blipp (noun) is digital content added to an object in the real world. And to blipp (verb) means to unlock Blippar's digital content via one of Blippar's applications, which recognizes the object and displays the content on your mobile, tablet, or wearable AR device.

Blippar is not just an AR execution — it's a clever mix of many technologies. Blending AR, artificial intelligence, and computer vision, the Blippar application can recognize and provide information about millions of real-world objects and even people. After you download the Blippar app and point your mobile phone at an item, such as your laptop, Blippar scans the device, recognizes the item, and offers information about it. For example, for a laptop, it may show you facts about laptops via Wikipedia, let you know where to buy laptops online, and point to YouTube videos of laptop reviews. If you aim your Blippar application at a famous person, such as the chancellor of Germany, Blippar tells you her name and offers up various bits of information and news about her.

Blippar also offers branded experiences. Companies that want to provide AR data about their products can work with Blippar to create their own branded AR executions. For example, Nestlé may request that whenever a user blipps an image of one of its candy bars, the Blippar app delivers an AR game to the user. Universal Pictures may request that whenever a user blipps any of its posters for Jurassic Park, a dinosaur pops out of the poster in AR and provides a trailer link for the movie. Or, as the image below shows, Heinz may request that any time a user blipps an image of its ketchup bottles, the app displays an AR recipe book.

Unlike many of the AR applications available today on mobile devices for consumer use, Blippar's use of computer vision offers functionality beyond just placing a model in 3D space. Blippar's ability to recognize objects and utilize AR alongside those objects may be a sign of where AR will head next. It remains to be seen whether one day we'll be telling our coworkers, "Just blipp it," but Blippar's future looks bright.
Blippar is available for both iOS and Android devices.

AR City

AR maps are an application whose time has long been coming. The ability to project directional arrows leading to your destination onto your car's windshield or a pair of wearable glasses has long been a goal. Created by Blippar, AR City enables you to navigate and explore more than 300 cities worldwide using AR. As you travel to your destination, AR City visualizes your route on top of the real-world view via 3D overlays of your surroundings. In certain larger cities and metropolitan areas, enhanced map content provides further information about the places around you, including street names, building names, and other local points of interest. This image shows an example of AR City in use.

In a select few cities, Blippar further utilizes what it has termed its urban visual positioning (UVP) system. Blippar claims that UVP enables the company to get up to twice the accuracy of GPS, the current technology behind standard mobile phone mapping applications. Using UVP, Blippar claims it can get data so precise that it could begin placing virtual menus on walls in front of restaurants or interactive guides on famous monuments, with pinpoint accuracy. As with the other items on this list, the ultimate form factor of AR navigation as shown in AR City is likely not a mobile phone. A similar navigation system embedded in glasses or projected on your windshield may sound like something from the far future, but AR City proves that that future is fast approaching. AR City is available for iOS devices.

ARise

ARise is a departure from many of the AR game apps currently in the Apple App Store and Google Play Store. Unlike many games that feature AR as an add-on to the main gameplay mechanic, ARise is designed specifically to make use of AR features. In ARise, your goal is simple: to guide your hero to his target. However, you're provided with few controls for doing so. You never touch the screen or swipe to solve the puzzles. Line of sight and perspective are your only methods of navigating these virtual worlds. This image shows ARise being projected into a user's environment for gameplay.

The goal and gameplay are both relatively simple. What makes ARise a good example of AR for beginners is its requirement that users get up and move around the game board in order to accomplish their goals. The levels within ARise are fairly large and complex. In order to correctly align your perspective to reach your goal, you'll have to navigate around the digital holographic world by moving around in the real world.

By no means should every AR game require the same amount of physical interaction that an experience like ARise requires. There are plenty of instances where gamers would prefer to sit on their couches instead of having to constantly move around a digital hologram in physical space. However, for beginning users unfamiliar with what AR can do, a game such as ARise strikes the right balance between tech demo and full-blown gaming experience, serving as a basic introduction to what AR can do. ARise is available for iOS devices.

Ingress and Pokémon Go

It would be difficult to make a list of AR applications you should try out and leave off two of the apps that launched interest in AR and location-based gaming. The gameplay in Ingress is fairly simple.
Users choose a team ("the Enlightened" or "the Resistance") and try to capture portals: locations scattered throughout the globe at places of interest, such as public art, landmarks, parks, monuments, and so on. The user's map within the game displays her location in the real world and the portals closest to her. In order to capture a portal, a user must be within a 40-meter radius of the portal, making Ingress a great game for getting users to walk around and explore the real world.

Pokémon Go is cut from a similar cloth. The gameplay of Pokémon Go aligns with its slogan: "Gotta Catch 'Em All." The user is cast as a Pokémon trainer and shown a digital representation of himself on a map, as well as the location of nearby Pokémon. As with Ingress, users playing Pokémon Go have to travel to a real-world location close enough to the Pokémon in order to capture it. When a user is within range of the Pokémon, the user can try to capture it by throwing Pokéballs at it in either a fully digital or AR environment. A trainer can use his captured Pokémon to battle rival teams at virtual gyms throughout the world.

Ingress and Pokémon Go were both very early entries into the AR space. And purists may argue that the lack of digital visual holograms interacting with the real world means neither is a true AR game. (Pokémon Go does allow you to try to catch a Pokémon as if it were visually in the "real" world, but with no interaction with the real-world environment.) However, AR can be more than just a visual display. AR can mean any method of digitally enhancing the real world. Both Ingress and Pokémon Go augment the real world with digital data and artifacts. The debate over what is and isn't "AR" may be best left for the terminology purists to decide. In the meantime, both games are worth exploring if for no other reason than to decide for yourself what you think makes an AR experience "augmented reality." Ingress and Pokémon Go are available on both iOS and Android devices.

MeasureKit and Measure

MeasureKit and Measure are two apps that can introduce users to the power of AR through simple utilities. MeasureKit is the iOS version, and Measure is a similar Android version. The concept of both applications is simple: Using the live video feed from the camera on your mobile device, you point at a spot in the real world. Target the spot you'd like to start measuring from and tap to begin measuring. Then target a second spot and tap to stop measuring. It's not the flashiest use of AR, but the apps are a good example of utility AR applications for the real world. With MeasureKit and Measure, you can measure length, width, height, and even volume of objects, all while building a virtual outline of the space your measurements take up in the real world. Plus, certain types of measurements, such as volumetric measurements, are often much easier to capture and visualize within AR.

As with AR in general, both apps have some work to do before they're ready for prime time. However, you can easily envision the utility these types of applications could provide for workers on a factory floor or contractors on construction sites, especially combined with the form factor of AR glasses. Virtual measurements could be shared among all workers on a construction site between each pair of AR glasses, displaying entire lists of virtual measurements overlaid on top of an unframed room, reducing the possibility of errors or mix-ups during construction.
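Under the hood, measuring apps like these boil down to simple geometry on the 3D points the AR framework reports when you tap: the distance between two world-space points, or the volume of a box built from three measured edges. The sketch below is a minimal, framework-agnostic illustration of that math; the point values are hypothetical stand-ins for what an AR hit test or raycast against a detected surface would return.

```python
import math
from typing import Tuple

Point3D = Tuple[float, float, float]  # x, y, z in meters, world space

def distance(a: Point3D, b: Point3D) -> float:
    """Straight-line distance between two tapped points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def box_volume(width: float, depth: float, height: float) -> float:
    """Volume of a rectangular box built from three measured edge lengths."""
    return width * depth * height

# Hypothetical points an AR hit test might return for the two ends of a table edge.
start = (0.10, 0.00, -0.50)
end   = (1.30, 0.00, -0.50)
print(f"Edge length: {distance(start, end):.2f} m")       # 1.20 m
print(f"Volume: {box_volume(1.20, 0.60, 0.75):.3f} m^3")  # 0.540 m^3
```

On an actual device, the start and end points would come from the AR framework's surface detection rather than hard-coded values, but the measurement math stays this simple.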
MeasureKit is available for iOS devices, and Measure is available for Android devices.

InkHunter

InkHunter is an application enabling you to try on virtual tattoos via AR before they're inked onto your skin for eternity. Simply download the app, draw a marker where you would like the tattoo to appear, and select a tattoo you'd like to see visualized virtually on your skin. The inner workings of the app detect the marker and keep the tattoo mapped onto your body, even as you move around in space, allowing you to "try on" and evaluate any number of tattoos, even tattoos made up of your own photos.

The ultimate goal of most AR applications is to function marker-less — that is, with no fixed reference point in the real world. In the case of InkHunter, however, although the app can use AR and computer vision to detect surfaces around you, it would have no way of knowing which surface to apply the tattoo onto. The marker serves as a way for InkHunter to determine the surface and direction on which to overlay the tattoo. InkHunter is available for both iOS and Android devices.

Sketch AR

Sketch AR enables users to virtually project images onto a surface and then trace over the virtual images with real-world drawings. It's similar to illustrators' use of light boxes or projectors to transfer artwork onto various surfaces. Choose a drawing surface, bring up the various sketches available to trace, select the image to trace, and then hold the camera in front of your drawing surface. The image will now appear mapped to your piece of paper, so you can trace over the lines to create your image. Although its utility for practicing sketching on a piece of paper with your mobile device is limited, a more intriguing use case is the use of Sketch AR within the Microsoft HoloLens, which enables users to transfer small sketches onto much larger murals.

Like many current mobile AR-powered applications, the ultimate form factor of Sketch AR is not your mobile device. When illustrating, you want your hands as free to move as possible. With one hand busy trying to hold your device steady at all times, the experience isn't a perfect one. Seeing that the app has been built not only for mobile devices but for HoloLens as well is heartening. As more AR headsets and glasses are released at a consumer level, companies will hopefully follow suit in bringing their mobile AR experiences to these various wearable AR form factors. Sketch AR is available for iOS and Android devices, as well as Microsoft HoloLens.

It has been speculated that Apple's reasoning for releasing ARKit to mobile devices now was to provide a glimpse of the future while allowing developers to access the sort of application programming interface (API) that will be available to them if Apple's long-rumored AR glasses come to fruition.

Find Your Car and Car Finder AR

AR car finder applications are available today, and they work well, but similar technology has broader implications for future applications. Both Find Your Car (for iOS) and Car Finder AR (for Android) are simple applications that work in similar ways. Park your car (or whatever you want to find a way back to) and drop a pin; when you're returning to your car, a compass arrow will guide you back, providing the distance and direction to where you left it. Being able to drop a pin and be guided back to your misplaced vehicle via overlaid directions solves a problem many people struggle with, but apps to locate your car via a dropped pin have existed in some form or another for a while.
Certainly AR improves the experience, but what if it went even further? A proof-of-concept app, Neon, is looking to do just that. Billing itself as the "world's first social augmented reality platform," Neon allows users to leave 3D AR holographic messages in the real world for friends to view, which they can find by following Neon's mapping system. Plus, Neon plans to enable you to locate friends who also have the Neon app in a crowded stadium, at a festival, or anywhere else that might call for such an app. (Neon is not yet released to any app stores.)

With a pair of AR glasses, parents could track their children at crowded playgrounds down to the exact direction and distance they are apart. You could be alerted that a friend or acquaintance in your social network is nearby. Or simply never forget a name or face again: your AR glasses could recognize a person via computer vision and serve up his profile information directly in your view.

Find Your Car is available on iOS devices. Car Finder AR is available on Android devices. Neon is currently in beta.
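The distance-and-direction readout that these car-finder apps display can be derived from just two GPS coordinates: the pin you dropped and your current position. The following is a minimal sketch of that calculation using the standard haversine and initial-bearing formulas; the coordinates shown are hypothetical, and a real app would feed the resulting bearing into its compass-arrow overlay.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, meters

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return (distance in meters, initial bearing in degrees from north)
    from point 1 (your current position) to point 2 (the dropped pin)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)

    # Haversine formula for great-circle distance
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Initial bearing (forward azimuth), where 0 degrees is due north
    y = math.sin(dlmb) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360

    return dist, bearing

# Hypothetical example: walking back to a car parked a few blocks away.
meters, heading = distance_and_bearing(40.7580, -73.9855, 40.7614, -73.9776)
print(f"{meters:.0f} m away, head {heading:.0f} degrees from north")
```

The same math underlies Ingress's 40-meter portal-capture check: compute the great-circle distance between the player and the portal and compare it against the radius.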
