This SDK provides several ways to introduce hands interactions into your Unity project.

The MetaHands prefab provides data about the user’s hands and must be imported into your scene for access to the Meta 2’s hand interactions within Unity.

Additional components and APIs provided by this SDK allow you to customize hand interaction settings, e.g. by attaching advanced hand gesture interactions to your objects.

Please read on for instructions on how to add and customize hand interactions within your Unity project.

Getting Started

1. Set up a scene with the MetaCameraRig.
See Create a Meta Scene for instructions.

2. Add the MetaHands prefab to your scene.
Locate the MetaHands prefab within the Project tab, either by searching for it or by navigating to Assets > MetaSDK > Meta > Resources > Prefabs.

Once you’ve located the prefab, simply drag it into the Hierarchy tab to import it into your scene.

Note that you can customize settings for all hands interactions on the MetaHands prefab.

3. Play the scene.
While playing the scene, hold one hand out in the center of the Meta 2’s field of view with the back of your hand parallel to your face. For improved detection, do this in an area with little clutter behind your hand. With your free hand, expand the MetaHands prefab in the Hierarchy and observe that the corresponding child GameObject has become active.

Now remove your hand from in front of the Meta 2’s field of view. Observe that the child of the MetaHands prefab is made inactive when the Meta 2 can no longer see your hand.
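You can observe this activation and deactivation from code as well. The sketch below is a plain Unity MonoBehaviour (it uses no SDK-specific APIs); attach it to one of the MetaHands prefab’s child GameObjects, whose exact names may differ between SDK versions:

```
using UnityEngine;

// Attach to a child of the MetaHands prefab (e.g. a per-hand GameObject).
// Unity invokes OnEnable/OnDisable as the SDK activates and deactivates
// the object when the corresponding hand is detected or lost.
public class HandPresenceLogger : MonoBehaviour
{
    private void OnEnable()
    {
        Debug.Log(gameObject.name + ": hand detected");
    }

    private void OnDisable()
    {
        Debug.Log(gameObject.name + ": hand lost");
    }
}
```

This is useful for driving UI hints (for example, prompting the user to bring their hand back into view) from the same activation behavior you just observed in the Hierarchy.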

Introduction to Grab Gestures

The easiest way to add hands interactivity to a GameObject in your scene is to use the provided GrabInteraction component. The following instructions will show you both how to add this functionality to an object in your project and how to use your hand to interact with that object.

The same steps can be applied to a variety of other hands interactions provided with this SDK.

1. Set up a scene.
See the previous section.

2. Add a Cube to the scene.
Right click in the Hierarchy and select 3D Object > Cube.

3. Add the GrabInteraction component to the cube.
Select the Cube within the Hierarchy view, click Add Component, and select GrabInteraction.

4. Add a Rigidbody component to the cube.
Select the Cube within the Hierarchy view, click Add Component, and select Rigidbody.

5. Disable gravity and set custom physical properties.
On the Rigidbody component, uncheck Use Gravity. Set the Mass, Drag, and Angular Drag values to 1.

Note: Hand interactions temporarily override physics settings (including gravity and the kinematic flag) and constraints. These settings are automatically restored when the interaction disengages. In other words, you can enable gravity on a GameObject with an attached Rigidbody, and gravity simply won’t be applied while the user is grabbing the object.
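Steps 2 through 5 can also be performed from code, which is handy when spawning grabbable objects at runtime. A minimal sketch, assuming GrabInteraction resolves from your project’s using directives (adjust the namespace to match your SDK version):

```
using UnityEngine;

public class GrabbableCubeSpawner : MonoBehaviour
{
    private void Start()
    {
        // Step 2: create the cube and place it in front of the user.
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.position = new Vector3(0f, 0f, 0.5f);
        cube.transform.localScale = Vector3.one * 0.1f;

        // Steps 4-5: add the Rigidbody, disable gravity, and set the
        // custom physical properties from step 5.
        Rigidbody body = cube.AddComponent<Rigidbody>();
        body.useGravity = false;
        body.mass = 1f;
        body.drag = 1f;
        body.angularDrag = 1f;

        // Step 3: make the cube grabbable.
        cube.AddComponent<GrabInteraction>();
    }
}
```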

The Inspector should now look like this:

[Image: Cube with Rigidbody and hand interaction components]

6. Put on the Meta 2 and play the scene.
The cube will appear in front of you.

7. Grab the cube.
Position your hand on the near side of the cube and angle your wrist so that the back of your hand is parallel to your face. Move your hand up to the edge of the cube and observe the visual change to the status indicator floating on the back of your hand. Finally, gently close your hand into a fist. An audio cue will play to indicate a successful grab.

8. Move the cube.
Keep your hand closed in a fist and move it around within the sensor’s field of view. The cube will follow your hand.

9. Release the cube and pull your hand away.
Slowly open your hand back to the position you used just before grabbing the cube. The cube will be released, and an audio cue will play to indicate the release.

Alternate Interaction Gestures

This SDK provides a variety of other one- and two-handed interactions. Interactions with “grab” in their name support translation; for example, TwoHandGrabScaleInteraction allows scaling and translation, whereas TwoHandScaleInteraction only allows scaling and locks the object’s position.

Multiple interactions can be composed by adding them to a single GameObject. See the HandCubeInteraction scene for an example.

Standard interactions include:

  • TwoHandGrabScaleInteraction - Translate and scale an object with two hands.
  • TwoHandGrabRotateInteraction - Translate and rotate an object with two hands.

Supplementary interactions include:

  • OrbitRotateInteraction - Touch to rotate in an orbital manner.
  • TurnTableInteraction - Rotate a carousel about the Y axis. Tweak the damping setting to your liking.
  • TurnTableSwipeInteraction - Carousel interaction with discrete steps.
  • TwoHandScaleInteraction - Grab with two hands to scale but do not allow translation.
  • TwoHandGrabSwitchRotationInteraction - Grab with two hands; depending on the orientation of your hands, the object rotates around either the X or the Y axis.
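As noted above, multiple interactions can be composed by adding them to a single GameObject. A short sketch, using component names from the lists above (verify the exact namespace against your SDK version):

```
using UnityEngine;

public static class InteractionComposer
{
    // Composes two gestures on one object: the user can both scale
    // and rotate it with two-hand grabs.
    public static void MakeScalableAndRotatable(GameObject target)
    {
        target.AddComponent<TwoHandGrabScaleInteraction>();
        target.AddComponent<TwoHandGrabRotateInteraction>();
    }
}
```

The HandCubeInteraction scene demonstrates the same composition set up through the Inspector.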

Advanced Interactions

Hand Trigger

The HandTrigger component can be used to track any HandFeatures (Center, Top, and Farthest) which enter the trigger’s volume. Hand Trigger requires an attached Collider component with the Is Trigger flag set to true on the target GameObject.

Functions may be attached to the Unity event callbacks using the Inspector window. They can also be subscribed to through C# code using AddListener().

The HandTrigger component supports a variety of specific hands interaction events, e.g. FirstHandFeatureEnterEvent. It also provides a variety of utility methods to query hands status. See HandTrigger.cs for details.
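Subscribing from code can be sketched as follows. The event name FirstHandFeatureEnterEvent comes from the SDK; the assumption here is that it is a UnityEvent carrying the entering HandFeature — check HandTrigger.cs for the actual signature:

```
using UnityEngine;

public class TriggerZoneLogger : MonoBehaviour
{
    // Assign the HandTrigger (on an object with a Collider whose
    // Is Trigger flag is set) in the Inspector.
    [SerializeField] private HandTrigger handTrigger;

    private void OnEnable()
    {
        // Assumes a UnityEvent<HandFeature>; see HandTrigger.cs.
        handTrigger.FirstHandFeatureEnterEvent.AddListener(OnFirstFeatureEnter);
    }

    private void OnDisable()
    {
        handTrigger.FirstHandFeatureEnterEvent.RemoveListener(OnFirstFeatureEnter);
    }

    private void OnFirstFeatureEnter(HandFeature feature)
    {
        Debug.Log("Hand feature entered trigger: " + feature.name);
    }
}
```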

Hands Data

The Meta SDK provides direct access to hand tracking data through the HandsProvider system. This tracking information supports implementation of custom hands-based interfaces not possible with the aforementioned hand interaction scripts provided with this SDK.

The API provides the following information for both the left and right hands:

  • Whether the hand is detected
  • The current position of the center (i.e. palm) of the hand
  • Whether the hand is in a closed or open position
  • The current position of the top (i.e. highest fingertip) of the hand

Note that if a specific hand is not detected, information about that hand is unavailable.

You can access this information by subscribing to events on the HandsProvider component of the MetaHands prefab.

You can also access this data by directly referencing the children of the MetaHands prefab.
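Reading this data each frame can be sketched as below. The component type and property names (Hand, Palm, Top, IsGrabbing) are illustrative assumptions, not confirmed SDK API; substitute the actual types you find on the MetaHands prefab’s children:

```
using UnityEngine;

public class HandDataReader : MonoBehaviour
{
    // Drag the left- or right-hand child of the MetaHands prefab here.
    // "Hand" is a placeholder for the actual child component type.
    [SerializeField] private Hand hand;

    private void Update()
    {
        // The child is inactive while the hand is undetected.
        if (hand == null || !hand.gameObject.activeInHierarchy)
        {
            return;
        }

        // The data listed above; property names are assumptions to be
        // checked against your SDK version.
        Vector3 palm = hand.Palm.Position;  // center (palm) position
        Vector3 top = hand.Top.Position;    // highest fingertip position
        bool grabbing = hand.IsGrabbing;    // closed vs. open hand
        Debug.LogFormat("palm={0} top={1} grabbing={2}", palm, top, grabbing);
    }
}
```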

Accessing Raw IR Data

The Meta 2 emits IR light that is visible to our depth camera. This information is used to generate depth data. We expose access to this depth data in Unity.

See here for details.

Visual Cues for Interactions

Visual cues may be used to improve accuracy of Meta Hand interactions. User testing has demonstrated that in certain conditions, outlining objects of interest near the user’s hands provides valuable feedback and improves the user experience.

The MetaInteractionHalo scene has been added to demonstrate how this effect may be applied to GameObjects which include the existing Interaction scripts. If you intend to recreate the effect, please note that the scene includes two additional scripts which are not present in the HandCubeInteraction scene:

  • The StereoCameraObjectOutline script is attached to the StereoCameras GameObject within the MetaCameraRig prefab. It contains references to two cameras: the LeftCamera and RightCamera. The effect is only visible to these cameras.
  • The InteractionObjectOutlineSettings script is attached to the MetaCube1 GameObject. This is necessary for the effect to be applied to the GameObject.

Best Practices

Our hands interaction technology is constantly improving, which means the way in which users interact with your application using their hands may change in the future. In the meantime, we have identified some tips and tricks to help you get the most out of the hands interaction system provided by the current version of the SDK.

  • Your hands must be within the Meta 2’s depth sensor’s field of view to be recognized. Developers may wish to provide a visual cue to users indicating that their hand is nearing the edge of the interaction space.
  • The hands system recognizes whether a given hand is left or right based primarily on where it enters the sensor’s field of view.
  • The hands system tracks two “features” of your hands:
    • Palm: This roughly represents the center of your palm (and the back of your hand). The palm feature is used for grabbing and related gestures. The grab gesture is most reliably recognized when you begin with the back of your hand parallel to your face. The grab is then activated when you close your palm into a fist. To release the grab, return your hand to a vertical orientation which is again parallel to your face. Note that as you close your hand into a fist, the palm feature may move slightly toward your wrist. Developers may wish to account for this when using the grab gesture for precise object placement.
    • Top: This represents the point on your hand furthest from your wrist (typically the tip of your most extended finger). This feature is used for precision interactions such as pressing buttons or scrolling.
  • Two primary two-handed gestures are currently supported:
    • Rotation: For precise rotation control, grab the target object with one hand and keep that hand still. Next, grab with your other hand and move that hand to apply rotation. Alternatively, you can move both hands simultaneously to apply larger rotations.
    • Scaling: Grab with both hands and move your hands closer together or further apart to apply scale. Note that this gesture relies only on the changing distance between your hands. In other words, this gesture is not used to stretch an object by dragging its edges to specific positions, but rather to apply a scaling factor.
    • Several other hands interaction gestures are also supported; see scripts named *Interaction, e.g. TurnTableSwipeInteraction.
  • For two-hand grab interactions, we recommend that you activate the grab with one hand first, and then activate the grab on the other hand. This sequence provides more precise control than attempting to grab with both hands at the same time.
  • When using the provided gesture interaction scripts, visual cues indicating the position and state of the hands system will be automatically displayed. By default, audio cues will also play.