Place tools and content in space
Replace flat layouts of windows, menus and buttons, which generally favor only half of the visual system, with a Spatial Interface that arranges tools and content in 3D space around the user. This leverages the user’s full perception of form and depth.
UI design suggestions
1. Reconsider traditional Windows GUI conventions for AR
For example, don’t place a “start menu” in space, since it was designed for a small screen and presents a cluttered mess of menus, buttons, abstract icons and tools. Instead...
2. ...separate constituent elements into two buckets: volumetric tools and content
An application designed with spatial design principles should maintain a clear separation between volumetric tools and content. Tools, in particular, must conform to users’ priors—the mental model they bring with them from the real world into AR—in a way that suggests their use and function (see The Neural Path of Least Resistance above). Content, by contrast, varies in form from abstract flatness to realistic volume. Text and video, for instance, are inherently flat and should be contained within tangible 3D structures with their own sense of affordance. 3D models, on the other hand, can stand alone as free-standing objects.
3. Distinguish between tools and content
Regardless of appearance, tools and content differ in one significant way: content is an experience in and of itself; it conveys information or some kind of sensory experience simply by existing. Tools, on the other hand, serve the purpose of creating, modifying or in some way interacting with content.
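The distinction above—content is an experience in itself, tools exist only to act on content—can be made concrete in a data model. The sketch below is a minimal illustration in Python; all names (`Content`, `Tool`, `apply`, the `volumetric` flag) are hypothetical, not part of any real AR framework.

```python
from dataclasses import dataclass

@dataclass
class Content:
    """Conveys information or a sensory experience simply by existing."""
    name: str
    volumetric: bool  # True for free-standing 3D models; False for flat text/video

@dataclass
class Tool:
    """A volumetric object whose only purpose is to act on content."""
    name: str

    def apply(self, content: Content) -> str:
        # A tool has no experience of its own; it is defined by what it does to content.
        return f"{self.name} applied to {content.name}"

# Flat content should be framed within a tangible 3D structure;
# volumetric content can stand on its own.
video = Content("tutorial video", volumetric=False)
statue = Content("scanned statue", volumetric=True)
brush = Tool("paintbrush")

print(brush.apply(statue))  # -> paintbrush applied to scanned statue
```

Keeping the two types separate at the data level makes the UI-level separation easier to enforce: rendering code can frame flat content, place volumetric content freely, and lay out tools within reach.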
4. Avoid expandable or hidden menus in AR
UI elements such as drop-down menus are no longer necessary, since there is ample open space for tools and content around the user, and the Dorsal pathway of the visual system makes it easy to process them quickly. Take inspiration from real workspaces, such as art studios and workshops, rather than mimicking the flat world of traditional UIs. Think in terms of tools the user can grab, rather than menus and buttons. For instance, if the user needs a different paintbrush, why not provide a brush holder they can physically grab it from, rather than a menu?
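One way to realize the brush-holder idea is to lay grabbable tools out on an arc within arm’s reach, instead of listing them in a menu. The helper below is a hypothetical sketch: the reach (0.6 m), arc span (90°), and hand height (1.0 m) are illustrative assumptions, with the user at the origin facing +z and y pointing up.

```python
import math

def tool_positions(n_tools: int, reach: float = 0.6, arc_degrees: float = 90.0):
    """Return (x, y, z) positions for n_tools laid out on a horizontal arc
    in front of the user. User at origin, facing +z, y up. Units: meters."""
    if n_tools == 1:
        angles = [0.0]
    else:
        half = math.radians(arc_degrees) / 2
        step = math.radians(arc_degrees) / (n_tools - 1)
        angles = [-half + i * step for i in range(n_tools)]
    # Place every tool at an assumed comfortable hand height of 1.0 m.
    return [(reach * math.sin(a), 1.0, reach * math.cos(a)) for a in angles]

# Three brushes: left, center, and right of the user's line of sight.
for pos in tool_positions(3):
    print(tuple(round(c, 2) for c in pos))
```

Because every position sits at the same distance from the user, each tool is equally reachable—the spatial analogue of a menu where no item is buried in a submenu.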
The neuroscience behind it
Our brains have evolved enormous visual processing capabilities, particularly for recognizing 3D objects and integrating information from multiple perspectives. Meta advises building applications in 3D from the ground up, rather than adapting 2D interface paradigms.
Although we tend to imagine visual perception as a linear process, it’s actually the product of two distinct but complementary systems. One is the Ventral pathway, primarily concerned with recognizing and classifying objects, answering the question of “what” we see. The other is the Dorsal pathway, which identifies relationships through space and movement, addressing questions of “how” and “where” to interact with objects and the environment.
Ideally, these systems work in tandem to achieve comprehensive awareness of one’s surroundings. Screen-based UIs, however, present a flat plane of dense objects and icons, which tends to favor the Ventral pathway over the Dorsal pathway. By arranging volumetric interface elements in true 3D space around the user, we can engage both pathways, giving the user a deeper, fuller understanding.
Insights into the complementary nature of the two visual pathways can be found in Kravitz, Saleem, Baker, and Mishkin (2011).