
The year 2023 is coming to an end and we yet again wonder whether the long-awaited mass adoption of “XR” is finally upon us. So far, we have often been disappointed, but is this disappointment justified, and what will it take to reach the so-called “XR mainstream”? Does XR have any relevance in broader society at all? And do we even need the abbreviation XR if we want to sell the concept to the masses?

Image: DALL-E creation

In a nutshell: the broader public probably senses the relevance of xR (deliberately written here as a placeholder), but cannot yet classify it precisely. In my perception, the abbreviation “XR” and even the term “Extended Reality” are still used mainly in professional communities by developers, researchers and content creators, or in the context of funding programs. The acronyms AR and VR are more common outside of these fields. Even when end-consumer-facing tech giants want to sell the XR concept (and thus their hardware), they avoid using any acronym at all. In the B2B sector, “XR” may be used by device manufacturers, as in the “Vive XR Elite” or the “Varjo XR-3”, to attach spatial computing or mixed reality capabilities to their products. In the enthusiast bubble, the two letters serve as a catch-all term for augmented, virtual, and mixed reality, or as shorthand for eXtended Reality, which describes the extension of our reality through digital overlays.

A lot of effort goes into explaining “XR” as a term. We, the enthusiasts, agreed on “XR” because it makes it easier to communicate with each other. However, I believe we have now reached a point in technological development, and a number of customer-facing products, where we should only communicate the abbreviation that fits the respective application. If we want to communicate the overall concept, we should start talking to people on two levels instead of focusing on definitions of XR and lumping applications and hardware together.

The first level is the technology level. We are seeing the emergence of a new genre of consumer-facing devices and technical solutions that enable “Spatial Computing”. Many of these devices, ranging from smartphones, smartwatches, earbuds and smart glasses to complex wireless head-mounted displays (HMDs), already offer more or less limited functionality to augment reality with digital content, enable spatial interaction, or even immerse you in virtual worlds, and they all fall under the category of spatial computers. These devices are already in the hands of consumers.

We should avoid using the acronym “XR” when communicating about the concept of spatial computing. Since we often refer to “VR/AR/MR glasses” as “XR devices”, this inevitably leads to confusion, because we might be excluding other devices from the concept, namely those without a visual component. People would never refer to EarPods as an augmented reality device, even while listening to an auditive application that overlays virtual sound on real-world objects. People name their products “smart”. And that's great. That's what they are today. Maybe soon, people won't say “smart glasses” any more, but “assistants”, because that's what these devices will ultimately become.

On a technical level, devices are subject to constant change and advancement; they are being stuffed with new features every day. Computing in space and in sight involves more than putting glasses on our faces and consuming three-dimensional content. We need to consider new human-computer interfaces where input is provided by speech, gaze, gestures, body posture and BCIs (Brain-Computer Interfaces), as well as so-called Spatial Awareness, which incorporates, and even semantically recognizes, the state and physicality of the surrounding space and of other users in the application. Let's not forget the integration of AI assistants. A simple smartphone with a camera can already be considered a Spatial Computer if it runs an Augmented Reality application.
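To make that last point concrete, here is a minimal sketch of what turns an ordinary phone into a spatial computer: an AR session that detects and tracks the surfaces of the room around it. It assumes Apple's ARKit purely as one example framework, and the class name SpatialSessionDelegate is illustrative, not a real API.

```swift
import ARKit

// Illustrative sketch only: a phone acts as a "spatial computer" once it runs
// an AR session that builds an understanding of the physical space around it.
final class SpatialSessionDelegate: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        // Spatial awareness: look for horizontal and vertical surfaces.
        config.planeDetection = [.horizontal, .vertical]
        // Where the device supports it, also reconstruct a rough mesh of the surroundings.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        session.delegate = self
        session.run(config)
    }

    // Each detected plane is a piece of the real world the device now "knows" about
    // and can anchor digital content to.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            print("Detected a \(plane.alignment) surface at \(plane.center)")
        }
    }
}
```

The same idea applies regardless of vendor: as soon as the device maps its surroundings and anchors content to them, it is doing spatial computing, with or without glasses on your face.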

The second level concerns the applications, and this is where the terms Virtual, Augmented, and Mixed Reality are correctly placed. These terms are closely tied to the purpose and use case of a particular application. Only at the application level can utility be questioned: not every device is useful for every application, and every x-Reality has its own utilities. By introducing XR, we developers started to complicate communication about the software product and its use case. If we use the acronym XR to summarize all “reality” technologies, we should do so primarily among ourselves, in the context of development. But when we are talking to our customers, let's stop confusing them and keep writing out the unabridged version (i.e. an augmented reality training scenario or a virtual reality game). It does not help if the name of your brand-new application, a fishing training simulator, is “FishingXR”. Your dad will probably ask whether this “game” only runs on the new iPhone XR2.

Finally, I'd like to address the much-cited “iPhone moment” of XR. I don't think we'll ever see one particular moment. The transition from mobile to spatial computing is a very long process that started over 10 years ago and will take another 10 or more years. We will still see many changes and innovations in user interfaces and user experience architecture. In addition, the different concepts are at different stages of development. While virtual reality applications already achieve a relatively good balance between utility and usability, augmented and mixed reality applications are still lagging well behind on their way to mass adoption. So, instead of forcing the benefits of XR on people and praying for imminent mass adoption, let's simply be thankful for the technical possibilities we already have, put our effort into creating truly beneficial applications, and further explore newly designed human-computer interfaces, prioritizing utility over usability.

Rene

Expert in Media Technology, 3D and VR / AR
