Optimize your workspace

One of the biggest differences between augmented reality and virtual reality is that AR UIs coexist with the real-world environment. At its best, AR should complement and integrate into your physical workspace rather than obscure it. Understanding the basics of Meta’s AR technology will help you design better experiences for your users.

The sections below cover the basics of your headset’s technology, optimal UI positioning, and the environmental considerations to keep in mind when developing your UIs.

SLAM

SLAM stands for Simultaneous Localization And Mapping. It is how the headset maps your surroundings in real time: it identifies features in your environment and uses them to determine the headset’s position and orientation. In a nutshell, it is the technology that allows holograms to hold their location in your physical environment while tracking the motion of your head.

When Developing:

  • Position features (clutter) around your environment to better enable SLAM to track your space. Avoid highly repetitive uniform patterns, though, as they are difficult to track.
  • Maintain a 10 - 20 inch (approx. 0.25 - 0.5 m) buffer distance between the cameras and the closest objects. The cameras can see more features in your environment within this range (see the sketch after this list).
  • During the initialization phase, the user needs to turn their head slowly as per the instructions. This allows the system to best capture the user's space.
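
As a rough illustration of the buffer-distance guideline, here is a minimal Unity C# sketch that warns when an object sits closer to the headset cameras than the lower bound. It assumes the scene's main camera tracks the headset (as with the camera under the MetaCameraRig); the component name and threshold are illustrative, not part of the Meta SDK.

    using UnityEngine;

    // Illustrative runtime check: warn when an object is closer to the
    // headset cameras than the recommended buffer distance.
    public class SlamBufferCheck : MonoBehaviour
    {
        // Lower bound of the 10 - 20 inch guideline, in meters.
        const float MinBufferMeters = 0.25f;

        void Update()
        {
            // Assumes the scene's main camera tracks the headset position.
            float distance = Vector3.Distance(
                Camera.main.transform.position, transform.position);
            if (distance < MinBufferMeters)
                Debug.LogWarning($"{name} is {distance:F2} m from the cameras; " +
                                 $"keep at least {MinBufferMeters} m of buffer.");
        }
    }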

Hands Interactions

Figure 1. Hands Interactions, Grab shown.

The Meta 2 supports direct object manipulation. To initiate a grab (a minimal code sketch follows these steps):

  1. Move toward the object with your open hand
  2. Grab the object by closing your hand
  3. Move the object while keeping your hand closed
  4. Release the object by opening your hand
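
Below is a minimal Unity C# sketch of this four-step grab sequence. HandInput, with its PalmPosition and IsHandClosed members, is a hypothetical stand-in for whatever hand-tracking data source your project uses; it is not the Meta SDK API.

    using UnityEngine;

    // Hypothetical stand-in for a real hand-tracking source; replace with
    // your SDK's hand data.
    public static class HandInput
    {
        public static Vector3 PalmPosition;  // palm position in world space
        public static bool IsHandClosed;     // open/closed hand state
    }

    public class GrabbableObject : MonoBehaviour
    {
        public float grabRadius = 0.1f;  // how close the hand must be, in meters
        bool isGrabbed;
        Vector3 grabOffset;

        void Update()
        {
            Vector3 palm = HandInput.PalmPosition;
            bool handClosed = HandInput.IsHandClosed;

            if (!isGrabbed && handClosed &&
                Vector3.Distance(palm, transform.position) <= grabRadius)
            {
                // Steps 1-2: the hand reached the object and closed around it.
                isGrabbed = true;
                grabOffset = transform.position - palm;
            }
            else if (isGrabbed && handClosed)
            {
                // Step 3: the object follows the closed hand.
                transform.position = palm + grabOffset;
            }
            else if (isGrabbed && !handClosed)
            {
                // Step 4: opening the hand releases the object.
                isGrabbed = false;
            }
        }
    }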

Hands Detection

The Meta 2 includes a depth sensor that allows the headset to recognize your hands, and hence their interaction with objects. The depth sensor has a fixed field of view and is a part of the headset (unlike the external tracking technology used in an Oculus Rift), so it moves with the user's head. This means that hand detection is most successful when the hands are kept within the volume depicted in Figure 2 below.

Figure 2. Hand Detection Volume, seen from above and to the side.
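
One way to reason about this head-relative volume in code is to approximate it as a cone-shaped region in front of the headset, as in the Unity C# sketch below. The angle and depth values are illustrative placeholders; the real volume is fixed by the depth sensor’s field of view (Figure 2), not by these numbers.

    using UnityEngine;

    public class HandVolumeCheck : MonoBehaviour
    {
        // Illustrative placeholder values, not the sensor's real specs.
        public float maxHalfAngle = 35f;  // degrees off the headset's forward axis
        public float minDepth = 0.1f;     // meters from the sensor
        public float maxDepth = 0.6f;

        public bool IsHandDetectable(Vector3 handWorldPos)
        {
            // The volume moves with the user's head, so test in head-relative terms.
            Transform head = Camera.main.transform;
            Vector3 toHand = handWorldPos - head.position;
            float depth = Vector3.Dot(toHand, head.forward);  // distance along the view axis
            float angle = Vector3.Angle(head.forward, toHand);
            return depth >= minDepth && depth <= maxDepth && angle <= maxHalfAngle;
        }
    }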

Augmented Field of View

The augmented Field of View (FOV) is the region in which a user can see augmentations through the headset’s display. The Meta 2 headset has a 90° diagonal FOV, which corresponds to approximately an 80° horizontal and a 50° vertical FOV.
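
As a sanity check on these numbers, standard rectangular-frustum geometry relates the horizontal, vertical, and diagonal FOVs. The helper below is plain Unity C# with nothing SDK-specific, and shows that 80° by 50° works out to roughly a 90° diagonal.

    using UnityEngine;

    public static class FovMath
    {
        // Diagonal FOV from horizontal and vertical FOVs (all in degrees),
        // using standard rectangular-frustum geometry.
        public static float DiagonalFov(float hDeg, float vDeg)
        {
            float th = Mathf.Tan(hDeg * 0.5f * Mathf.Deg2Rad);
            float tv = Mathf.Tan(vDeg * 0.5f * Mathf.Deg2Rad);
            return 2f * Mathf.Atan(Mathf.Sqrt(th * th + tv * tv)) * Mathf.Rad2Deg;
        }
    }

    // FovMath.DiagonalFov(80f, 50f) returns roughly 88 degrees, so the quoted
    // horizontal and vertical figures are consistent with a ~90 degree diagonal.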

Hand Field of View

The Hand Field of View is defined by the Depth sensor and represents the volume where hands are detected for interactions.

Interaction Region

The overlap between the volume in which the depth sensor detects hands and the augmented Field of View (FOV) is what we refer to as the Interaction Region. This is the best volume to populate with your primary UI and content. See Figures 3 and 4 below.

One way to create a secondary UI is to place less utilized UI elements just outside of the interaction region, in the peripheral portion of the Augmented Field of View.

We’ve found that a “comfortable reach” of 0.35 m to 0.55 m (approx. 1.2 ft to 1.8 ft) is best for the placement of objects and UIs, since that is the range the sensors are optimized to detect.
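
Here is a minimal Unity C# sketch of this placement rule, assuming the scene’s main camera tracks the headset; the preferred distance is an illustrative default.

    using UnityEngine;

    public class ComfortableReachPlacer : MonoBehaviour
    {
        // Reach band from the guideline above, in meters.
        const float MinReach = 0.35f;
        const float MaxReach = 0.55f;

        public float preferredDistance = 0.45f;  // illustrative mid-band default

        void Start()
        {
            // Assumes the scene's main camera tracks the headset.
            Transform head = Camera.main.transform;
            float d = Mathf.Clamp(preferredDistance, MinReach, MaxReach);

            // Place this UI in front of the user at launch, rotated to face them.
            transform.position = head.position + head.forward * d;
            transform.rotation = Quaternion.LookRotation(transform.position - head.position);
        }
    }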

 

Figure 3. Interaction Region (while standing) shown in blue.

 

Figure 4. Interaction Region (while sitting at a desk) shown in blue.

UI Placement and Position

When placing UI elements relative to the user, there are a few paradigms to consider; they can be used separately or in combination with each other:

Meta Compositor (MetaCameraRig)
Meta Compositor (MetaCameraRig) UIs occupy a space that is independent of the user’s head or body position and rotation. These UIs are primarily mediated by SLAM.

Screen Space/HUD UI
A UI that is locked to the user’s view, in the manner of a Heads-Up Display (HUD), and persists in the user’s immediate field of view as they move.
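
The Unity C# sketch below contrasts the two paradigms: a world-locked object simply keeps its own transform (SLAM holds it in place in the room), while a HUD element is parented to the headset camera so it follows the view. The offset values are illustrative, not SDK defaults.

    using UnityEngine;

    public class HudLock : MonoBehaviour
    {
        public bool lockToHud;  // true = screen-space/HUD UI, false = world-locked

        void Start()
        {
            if (lockToHud)
            {
                // HUD UI: parent to the headset camera with a fixed offset so it
                // persists in the user's immediate field of view as they move.
                transform.SetParent(Camera.main.transform, worldPositionStays: false);
                transform.localPosition = new Vector3(0f, -0.1f, 0.5f);  // illustrative offset
                transform.localRotation = Quaternion.identity;
            }
            // Otherwise do nothing: a world-locked UI keeps its own transform and
            // holds its place in the room, mediated by SLAM tracking.
        }
    }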

Figure 5. Interaction Region shown in blue.

Optimal Workspace Conditions

For optimal tracking of your environment and hands, the following conditions are recommended:

Lighting
It is important to have a moderately lit area in order for the camera and sensors in the Meta 2 to track the environment properly. Avoid sitting or standing in extreme lighting conditions such as direct sunlight from a window, or a very dark room. This will help ensure that the display’s brightness and contrast are as good as possible.

Also, avoid rapidly moving to a new environment or location where your surroundings change significantly from where SLAM was initiated.

Desk Environment
It is best to have a reasonably clutter-free desk, especially within the Interaction Region, but keep in mind that physical objects placed at varying distances actually help the headset’s SLAM system map the environment.

Our hand-sensing technology works best in an environment with non-reflective surfaces rather than glossy or shiny ones; anything with shiny highlights can potentially confuse the sensors, resulting in interaction problems.

SLAM is capable of ignoring moving objects within your environment, but if too much is moving, SLAM may have trouble tracking the static features in your space. Try to avoid extremely dynamic environments.

Bracelets, Watches, and Long Sleeves
Current hand-sensing technology works best when the user’s hands and arms are free of objects such as watches, heavy bracelets, and long sleeves. As our sensors and algorithms continue to improve, we see this as only a temporary constraint.

Note: To re-initialize environment mapping at any time, use the F4 key.

See SLAM Events to understand the various changes in the state of SLAM.
