Life, the Metaverse and Everything – AR’s true killer app



TL;DR: the killer Augmented Reality app is not one app. It’s adding one more degree of freedom to humanity’s innate need to shape reality.
 


You wake up in the morning and your alarm clock is not ringing from the smartphone by your bed. Rather, your Augmented Reality (AR) device holographically wakes you up in your favorite way: you open your eyes in your physical room, but somehow your carpet is a flowery meadow and birds, or Metallica, are singing from your closet. Outside, the November rain is pouring, but you can just add sunshine to your window by issuing a vocal command to Arexa, your AR assistant, so you can enjoy a nice sunny morning. Your love is out of town, but she can still materialize by your bed and give you a morning kiss. You have a business meeting, but you just put on that grey T-shirt and tell Arexa to show you wearing an Armani suit or a Prada dress you bought as a virtual skin – everyone in the Metaverse will see you wearing it (and you can even post a pic on meta-social media).

You review your notes on that business meeting: you like paper, so you wrote them down, but the Metaverse can instantly augment your notebook with data available to you. At noon, you wanted to play tennis with your buddy, but he’s sick, so you launch a simulator that lets you play golf in your office with the colleague at the end of the hall. A friend is physically visiting Milano, sits in a coffee shop and shares the place with you: from your chair, you choose to have coffee with him, and the Metaverse projects the Milano coffee shop into your room. He gets a hologram of you at his table. All your common friends send likes, which land on your table to the jealous admiration of the people around. You pass by a poor neighborhood where folks are not connected to the Metaverse, so you don’t see them. At the train station, you wait for your partner in the car and see exactly when she steps off the train although you’re on the street: she’s sharing that with you. It’s been a long day, so you start a meditation app that transports you to a Tibetan temple for a couple of minutes before you fall asleep.

We’re not quite there yet. In 2021, Snap Inc.1 released what can be regarded as the first consumer-friendly version of AR glasses, the current generation of Spectacles. Previously, wearable AR devices were released by Microsoft (HoloLens 2), Magic Leap (which recently announced its next version) and other companies. A lot has been going on in Mixed Reality (a term that has been marketing-loaded by Microsoft to denominate what the rest of us would call “AR done right” – correct and consistent interaction with both real and augmented entities in the 3D world). It is telling that AR has been removed from Gartner’s hype cycle for emerging technologies, with many taking this as a signal that AR is now considered a relatively mature technology, bringing solid return on investment.

And yet: how many people running around with AR glasses – rather than staring into their smartphones – have you seen out there?

The reality is that, in spite of significant technical advancement and apart from a few niche use cases, companies and experts in the AR space are still looking for the “killer app” of AR in general and of AR glasses in particular: the one thing that would turn AR into a mass-adopted technology, potentially replacing the smartphone. There are good reasons to search for that holy grail: everyone wants to be “there” when the AR revolution happens.

This has made skeptics ask questions like “what exactly can you do with AR that you cannot do with existing technology?” and conclude that AR is currently something of a solution waiting for a problem. While that might be true, it also happens to have been true for many disruptive tech breakthroughs, including the jumps from horse-and-carriage to car and from computer to smartphone: in principle, you could do everything they did using the previous tech. But with the new tech you could do it better and, crucially, more “anytime and anywhere”: you could go faster and further with the car, and you could use your smartphone in a lot more places and at a lot more times than you could use your computer. Ubiquity is, it seems, a major adoption motivator.

Following this line of reasoning, why and how exactly would AR extend the current status quo of “anytime and anywhere”? It seems that we can already use our phones anytime and anywhere, and technology is indeed ubiquitous. What can a pair of smart glasses add to that?

To sketch an answer to that question, let’s start by observing that we humans don’t really live in physical time and space. We live in an intersubjective2 reality, as Yuval Harari aptly puts it, built upon physical time and space but transcending it to a degree that hasn’t been constant over time: ever since humanity’s dawn, we have tried to take over that intersubjective space and make it work for us. Our imaginative power is what sets us apart from other animals and has given us the unique ability to evolve much faster than evolution alone permits. Long before computing, the first stage of what I call the take-over-reality movement was to create myths, religions, nations, companies and other collective narratives that only exist in our minds2, but shape our world – for better or worse.

Within those collective narratives, individuals have a psychological need to create an image of themselves that they like and to seek confirmation of it in others. Then social media came along. Its main driver – and killer app – that made it what it is today was fulfilling exactly this need by adding a degree of freedom to how we’re perceived by others, which, in turn, reflects onto how we perceive ourselves, in a circle that can be either virtuous or vicious. The social media space brought along the second stage in the take-over-reality movement: controlling, to a limited degree, who we “are” in that intersubjective reality, in the perception of others: friends (Facebook), professionals (LinkedIn), romantic interests or sex (Tinder), etc. It brought more “anytime, anywhere”, beyond physically meeting other people.

With social media democratizing the ability to shape our intersubjective digital selves, a complementary need arises: to shape and control our own perception of that intersubjective reality. We don’t only want to influence and control how we are perceived, we also want to influence and control what we perceive. And while we’re at it: why restrict ourselves to the intersubjective reality and not take a step towards controlling how we perceive the physical reality too? This is where AR comes into play.

Social media’s killer app was adding a degree of freedom to how we can shape our image towards the world. The third stage in the take-over-reality movement, and AR’s killer app, is adding yet another degree of freedom for shaping both our image towards the world and, crucially, how the world appears to us, encompassing both the intersubjective reality and the physical world.

AR’s core proposition is not AR glasses, which are just a means to an end, but rather something called “the Metaverse”: an explicit meta-reality on top of, and encompassing, the physical one (see picture below).

AR’s core proposition, the Metaverse


This is game-changing in several ways, and I won’t claim they are all good. In the social media space, the degree of take-over-reality is limited by a central authority (the network’s algorithms, which decide what you get to see) and by the inherent limits and constraints of the network, particularly its decoupling from the physical world.

In the Metaverse, you get to look at the world through a different pair of glasses – literally! AR’s killer app is your own cognition: you get to choose what – and how – you see and don’t see, filter information in and out, become oblivious to certain features of the physical world, enhance or change others, decide who can perceive you and how, and, generally, shape your intersubjective reality, as well as your own identity in that reality, to a previously unseen degree. It is a double-edged sword, just as social media is, and can span both vicious and virtuous circles. It also raises a lot of ethical questions, which probably deserve another article.

We’re not there yet, of course, and I don’t know when we’re going to be. It is perfectly possible that AR glasses are simply not the right tool for the job due to their adoption friction. Maybe AR glasses are just a step in the journey and we’ll have contact lenses. Maybe implants, or neural links. But we’re almost certainly going to have a version of the Metaverse one day. To me, it’s inevitable because it follows the same pattern we’ve followed so far: taking over reality.


1 About me: I work for Snap Inc. on Spectacles, Snap’s AR glasses. Note, however, that all content in this article is 100% my individual opinion and is in no way related to Snap’s business or product strategy. I’ve been working in the AR space for the last five or so years, on experimental research projects (neARtracker), augmented art apps (ConnectedART) and, most recently, on AR glasses.

2 Yuval Harari – “Sapiens: A Brief History of Humankind”

What’s inside?

One of the many use cases for near-surface Augmented Reality is taking a really close look at what’s inside something without taking it apart. Conventional AR technology cannot zoom in far enough for this kind of application because it relies on inside-out tracking of the device position and orientation. This works well both indoors and outdoors, but stops working when the device gets too close to plain surfaces: tracking is lost and the magic is gone.
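To make that failure mode concrete, here is a minimal, hypothetical sketch – not neARtracker’s or any particular SDK’s actual code – of the sanity check a feature-based inside-out tracker effectively performs: a camera hovering a few centimetres above a plain sheet of paper simply does not see enough well-spread visual features to estimate its own pose. Function names and thresholds are illustrative assumptions.

```python
# Illustrative only: why inside-out tracking breaks near plain surfaces.
# Assumes OpenCV (cv2) and NumPy; the thresholds are made up for illustration.
import cv2
import numpy as np

MIN_FEATURES = 50      # hypothetical: fewer detected features -> unreliable pose
MIN_SPREAD_PX = 80.0   # hypothetical: features must be spread across the frame

def pose_estimation_feasible(frame_gray: np.ndarray) -> bool:
    """Rough check of whether a feature-based tracker could estimate the camera
    pose from this frame. A close-up of a blank, textureless page typically
    fails both tests below."""
    orb = cv2.ORB_create(nfeatures=500)
    keypoints = orb.detect(frame_gray, None)
    if len(keypoints) < MIN_FEATURES:
        return False                      # too little texture to track
    pts = np.float32([kp.pt for kp in keypoints])
    spread_x, spread_y = pts.std(axis=0)  # spatial spread of the features
    return bool(spread_x > MIN_SPREAD_PX and spread_y > MIN_SPREAD_PX)
```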

NeARtracker is all about bringing the lost AR magic back and placing innovative AR content formats directly on printable physical flat surfaces, including features like annotations and an e-commerce “paper interface”. Here’s how it turns a smartphone into a “magic lens” showing the inner life of a MacBook Pro – something we envision being used for training, industrial marketing or simply understanding complex things.

This “paper app” was sketched up in less than ten minutes using the powerful Vuforia + Unity combo and, of course, our neARtracker sensor. Enjoy!
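For the curious, here is what the core “magic lens” mapping behind the paper app boils down to, written as a plain Python sketch rather than the actual Unity/Vuforia scripts: the phone’s position on the page, as reported by the tracker, selects which crop of a hidden “internals” artwork is rendered on its screen. The page and lens dimensions, and the file name, are assumptions made up for the example.

```python
# Conceptual sketch of the paper-app "magic lens" (not the shipped Unity code).
# Assumes Pillow for image handling; all dimensions are illustrative.
from PIL import Image

PAGE_W_MM, PAGE_H_MM = 420.0, 297.0   # assumed A3 print of the MacBook artwork
LENS_W_MM, LENS_H_MM = 65.0, 140.0    # assumed footprint of the phone screen

def magic_lens_view(internals: Image.Image, x_mm: float, y_mm: float) -> Image.Image:
    """Return the crop of the 'what's inside' artwork lying under the phone.
    (x_mm, y_mm) is the screen centre on the page, as reported by the tracker."""
    px_per_mm = internals.width / PAGE_W_MM
    left   = (x_mm - LENS_W_MM / 2) * px_per_mm
    top    = (y_mm - LENS_H_MM / 2) * px_per_mm
    right  = (x_mm + LENS_W_MM / 2) * px_per_mm
    bottom = (y_mm + LENS_H_MM / 2) * px_per_mm
    return internals.crop((int(left), int(top), int(right), int(bottom)))

# Usage idea: re-crop every time the tracker reports new coordinates, e.g.
# view = magic_lens_view(Image.open("macbook_internals.png"), x_mm=180.0, y_mm=120.0)
```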

 

The quest for pervasive displays in times of wearables @PerDis2018

Munich, June 6-8: the Seventh ACM International Symposium on Pervasive Displays (PerDis) just happened – a small but critically important conference brought to life by a handful of “true believer” type of people, hosted this year by Prof. Albrecht Schmidt and his team at the Ludwig Maximilian University (LMU) in Munich.


Why do I say “critically important”? Because it fosters research to counter-balance what can be regarded as the current technological “local optimum” in AR/VR/XR, namely near-field display technology. Most mainstream VR/AR currently focuses on technology that works close to the human eye (near-field): VR headsets like Oculus, AR see-through displays (HoloLens, Meta) and, prospectively, retina micro-projection and BCIs (Brain-Computer Interfaces). All these technologies are useful and have important applications as we speak. They have also been euphorically claimed to be the pinnacle of what can, and should, be achieved in terms of bridging the digital to the physical, with some people in the field prophesying a “display-less world” in a matter of decades (meaning that only near-eye displays will exist).

However, technologies bridging the digital to the physical “out there” in the physical world – far-field 2D and 3D displays – have several unique desirable properties: (1) they are intrinsically shared and social (although more research is needed to develop meaningful interactions in shared display environments, as pointed out in the keynote by Prof. Nigel Davies at PerDis 2018), (2) they don’t require any awkward human augmentation with wearable devices, and (3) they allow a probably healthy degree of control over the “over-virtualization” of the physical world – the unforeseeable negative effects that might arise from replacing physical reality with a 100% controlled digital environment in which everything happens “at will”.

Thus, they must have their own place in our digital development. I would argue that the main reason for the industry’s focus on wearable, near-field display technology today is that it is significantly easier (albeit by no means trivial!) to implement “immersively” than truly pervasive displays (“anything is a display”), 3D holograms and immersive display environments. That’s why I think of wearable displays as a “local optimum”: given sufficient advances in far-field technology, far-field would probably become preferable to near-field for the reasons given above.

In the long term, then, we can expect to see a shift towards far-field displays, and one key problem for research to solve is meaningful interaction. As we demoed at PerDis 2018, neARtracker technology that turns smartphones into tangible interfaces is a powerful tool here, with the additional opportunity of using the smartphone’s display as a “magic lens”. The video below exemplifies this setup in a cultural heritage application: the virtual reconstruction of a lost garden in Sanssouci, Potsdam. The paper in which we discuss, among other things, where the technology could be heading, titled “An 1834 Mediterranean Garden in Berlin: Engaged from 2004, 2018, 2032, and 2202”, is also available in the Proceedings of PerDis 2018.

   

 

 

Unfold your phone – the tangible AR way

At neARtracker.com, we share the vision of using the physical world as a big pervasive digital display and fusing it with the power of smart mobile devices. It is a bold, mid-to-long-term bet, but we think the currently dominating trend of wearable near-eye displays is going to reverse at some point, once technology allows for truly ubiquitous pervasive displays “out there”.

It is along these lines of thought that we develop applications showcasing our vision of combining the ease of use of a smartphone with the immersiveness of spatial AR and pervasive displays – our tracking sensor makes this possible already. You can think of these applications as regular smartphone apps, but with the additional key ability to “unfold” your smartphone onto a larger, more immersive physical display.

The first application is a collaborative photo-sharing app illustrating the key concept of “unfolding” photo content onto a shared physical display and using the smartphone to manipulate, present and share pictures.

The second application is a tangible Air Hockey AR game that uses a smartphone for real-time interaction on a projected surface. Like the previous app, it combines near-surface AR on the smartphone’s display with spatial AR on the larger display.
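To give a rough idea of how the two displays stay in sync – a sketch under my own assumptions, not the actual game code – the tracker reports the phone’s position on the table, and the game maps that position into the projector’s pixel coordinates to draw the paddle, estimating paddle velocity from successive samples for the puck physics.

```python
# Minimal sketch of the phone-as-paddle mapping in the Air Hockey demo
# (illustrative assumptions, not the actual implementation).
TABLE_W_MM, TABLE_H_MM = 800.0, 500.0   # assumed projected playfield size
PROJ_W_PX, PROJ_H_PX = 1920, 1080       # assumed projector resolution

def paddle_position_px(x_mm: float, y_mm: float) -> tuple[int, int]:
    """Map the phone's on-table position (from the tracker) to projector pixels."""
    x_px = int(x_mm / TABLE_W_MM * PROJ_W_PX)
    y_px = int(y_mm / TABLE_H_MM * PROJ_H_PX)
    # clamp so the paddle never leaves the projected playfield
    return min(max(x_px, 0), PROJ_W_PX - 1), min(max(y_px, 0), PROJ_H_PX - 1)

def paddle_velocity(prev_mm, curr_mm, dt_s: float) -> tuple[float, float]:
    """Finite-difference velocity (mm/s), used to transfer momentum to the puck."""
    return ((curr_mm[0] - prev_mm[0]) / dt_s, (curr_mm[1] - prev_mm[1]) / dt_s)
```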

Enjoy!

 

What is near-surface Augmented Reality and why should I care?

This is understandably one of the questions we were asked most frequently during AWE 2017, so I believe it deserves a blog post.

Some context first: Augmented Reality, or AR, is the process of overlaying something digital on top of the real world. Exactly in which way and where the digital/physical interface should be located is not fixed a priori. That being said, most current mainstream AR is near-eye, placing the digital/physical interface more on the user’s side, either on a smartphone screen or on a head-mounted display/smart glasses. The opposite approach is obviously to place the augmenting digital information more on the world’s side, directly on surfaces in the real world. One example is projection-based AR like Lampix. Another one is neARtracker, which is smartphone-based. AR that happens directly on or very close to real-world objects is what I call near-surface AR, to differentiate it from the first, near-eye category.


As is often the case in engineering, there are pros and cons to the different AR paradigms described above. A full comparison is beyond the scope of this post (check [1] and [2] for more details).

Near-eye AR in a nutshell:

  1. Conventional smartphone-based AR is readily available and takes advantage of existing hardware. However, it forces the user to actively hold and move a device with a small screen, which is tiresome, looks awkward, keeps your hands busy and also potentially raises privacy concerns – which is probably why smartphone-based AR is not more prevalent.
  2. Head-mounted display-based AR offers a much more immersive, hands-free experience and has seen a huge technological push lately, especially in industrial applications. However, AR displays and glasses are still heavy, expensive and regarded by many users as intrusive and lacking naturalness.

Near-surface AR essentially frees the user’s hands and eyes from wearing or holding up any device. The “reality” part in “augmented reality” is seen directly with the naked eye.

  • Projection-based AR (also called spatial AR) uses either a fixed or a mobile projector plus, optionally, a camera system to allow user interaction/input on the projected surface – typically a desk.
  • Near-surface smartphone-based AR is what we envision with neARtracker: smartphones placed directly onto arbitrary real-world surfaces. It blends together a few key features:
    • it is essentially hands-free, allowing precise touch-screen interaction
    • it takes full advantage of existing, widespread hardware (smartphones) as well as existing AR SDKs and frameworks
    • unlike conventional smartphone-based AR that tracks specific markers, images or objects, it enables “magic lens” usage of smartphones across surfaces of arbitrary size via a grid of almost-invisible markers (see the sketch after this list)
    • it is compatible with – but does not require – projection-based spatial augmentation (only for use cases where projection makes sense, like interactive games)
    • unlike projection-based AR, it is compatible with printed content (as long as the tracking grid is still partially visible): it can turn a paper sheet into a UI
    • it turns smartphones into tangible digital avatars, effectively bridging AR and tangible user interface technology, which seeks to use physical objects to control digital environments – check our game example.
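To give a feel for how a grid of almost-invisible markers can support “magic lens” tracking on surfaces of arbitrary size, here is one plausible scheme sketched in Python. It is purely illustrative and not a description of neARtracker’s actual encoding: each grid cell carries a small ID, and recovering one visible cell’s ID plus the phone’s offset inside that cell yields an absolute position on the surface.

```python
# One plausible (illustrative, not neARtracker's actual) scheme for absolute
# positioning on a large surface covered by a grid of tiny coded markers.
CELL_MM = 20.0    # assumed grid pitch
GRID_COLS = 100   # assumed number of cells per row on the printed surface

def cell_id_to_row_col(cell_id: int) -> tuple[int, int]:
    """Each cell encodes a unique ID; row/col follow from simple arithmetic."""
    return divmod(cell_id, GRID_COLS)   # (row, col)

def absolute_position_mm(cell_id: int, offset_x_mm: float, offset_y_mm: float):
    """Combine the decoded cell ID with the sub-cell offset measured under the
    phone to get an absolute position on the surface."""
    row, col = cell_id_to_row_col(cell_id)
    return col * CELL_MM + offset_x_mm, row * CELL_MM + offset_y_mm

# Example: cell 205 with a 7.5 mm / 3.0 mm offset inside that cell
# -> (5 * 20 + 7.5, 2 * 20 + 3.0) = (107.5, 43.0) mm from the surface origin.
```

Because only a single cell ever needs to be visible, the surface can grow arbitrarily large without the device having to see its borders.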

In conclusion, it should be noted that none of the different AR flavours is good for every use case, and near-surface AR is no exception. It makes most sense for scenarios in which the interaction naturally happens on a surface (print + digital magic lens, virtual desktop, mixed reality games, smart tables for collaboration, exhibitions, education, etc.). In other scenarios, for example those that require displaying objects in mid-air, head-mounted displays are the way to go.

 

 

 

NeARtracker @AWE 2017

“This is the most original idea I’ve seen in this hall” – Prof. Thad Starner, Google Glass Project
“De puta madre!” (roughly: “freaking awesome!”) – some guys from Mexico
“Hey, you actually did something I’ve also envisioned, but didn’t think it was possible!”

These are some of the first-hand reactions we received at the #AWE2017 demo stand in the startup area. It was really exciting and rewarding to see people’s faces light up when we presented the technology to them. There’s a lot of input, ideas and challenges to digest and follow up on, and we’re confident we’re now significantly closer to bringing neARtracker to real-world projects.

Below are a few pictures:

 

Awakening paper to life: neARtracker for Vuforia + Unity

We’re excited to announce that we have reached an important milestone in bringing about near-surface Augmented Reality for smart devices: the neARtracker sensor and software have been integrated with one of the industry-leading AR platforms, Vuforia + Unity, to create a Proof-of-Concept AR application that uses printed paper as an immersive digital medium.

Combining Vuforia + Unity’s efficient authoring workflow and 3D performance with high-quality plant models from our partners at Laubwerk, on top of our neARtracker technology, enables the creation of compelling AR experiences directly on printed paper. The neARtracker PaperTrack app showcases the augmentation of a printed house and garden plan. We will demo this – and more – at AWE Europe in Munich, 19-20 October 2017.

Highlights:

  • Works with the standard Vuforia + Unity editor / distribution
  • Currently available on Android devices, iOS to follow
  • A commercial version of the sensor is currently in advanced development

Unlike conventional AR, near-surface AR does not constrain the viewing device to a minimum distance away from the tracked object(s). Rather, the device can move freely directly on a surface, thus enabling an immersive “magic lens” type of experience that is fully integrated with the printed content and removes the awkward “focus-on-a-single-hand-held-small-screen” type of interaction. Clever use of device sensor data allows the experience to partially transcend the 2D surface and lets the user also navigate in 3D space.
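The following is a speculative sketch of how such sensor data could be used – my own guess for illustration, not a description of the shipped implementation: the (x, y, yaw) pose from the on-surface tracker is combined with the tilt derived from the accelerometer’s gravity vector, so that tilting the device lets the virtual camera rise above the page.

```python
# Speculative sketch: fusing on-surface tracking with IMU tilt so the virtual
# camera can leave the 2D page. Not the actual product implementation.
import math

def tilt_from_gravity(gx: float, gy: float, gz: float) -> tuple[float, float]:
    """Pitch and roll (radians) from the accelerometer's gravity vector."""
    pitch = math.atan2(-gx, math.hypot(gy, gz))
    roll = math.atan2(gy, gz)
    return pitch, roll

def virtual_camera_pose(x_mm, y_mm, yaw_rad, gravity, lift_gain_mm=300.0):
    """Place the virtual camera: position on the page comes from the surface
    tracker, while tilting the phone raises the camera above the page by an
    amount proportional to the tilt (lift_gain_mm is an assumed constant)."""
    pitch, roll = tilt_from_gravity(*gravity)
    height_mm = lift_gain_mm * math.sin(abs(pitch))
    return {"x": x_mm, "y": y_mm, "z": height_mm,
            "yaw": yaw_rad, "pitch": pitch, "roll": roll}
```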

 

Print meets digital: on-surface physical+digital fusion

We’re developing the neARtracker sensor to enable our near-surface AR vision: turn potentially any physical surface into a shared digital environment access point using smart devices in a non-intrusive way.

One step along this road is the fusion of printed paper – one of the most familiar information media – with the digital. Here’s a short early-phase demo of how that works:

 

Why near-surface AR

At devEyes, we are working on near-surface AR for existing mobile devices. This post attempts to explain why.

Augmented/Virtual/Mixed Reality applications (see this for an explanation of the differences) are increasingly perceived as the next big thing in technology. It is sufficient to look at this list of people and companies working in the field: virtually every major technology player is pursuing at least one of the following topics:

  • AR mobile apps using existing technology (basically a smartphone with a camera) to enhance (=augment) the surrounding world with additional items – think Pokemon Go. This is already mainstream: from beauty apps (ModiFace) to world discovery (Blippar, Snapchat’s World Lenses) it seems that the sky is the limit. There is also another limit, which is precisely the point of this post.
  • Mixed Reality platforms like Microsoft HoloLens, Magic Leap and the like are basically refining the AR idea into a seamless integration of virtuality into reality via dedicated, currently expensive and rather bulky, hardware: glasses, helmets, head-up displays and the like.
  • VR gear – like Oculus Rift (Facebook) and the like. Unlike AR or MR, VR attempts to fully replace the perceived reality with a virtual one, which makes a lot of sense for some things (immersive games, sports events, etc.).

None of the above technologies is currently suitable for near-surface interaction, at least to our knowledge (and apart from some academic research). It’s a neglected area where existing technology breaks down, yet one that has, in our view, currently underestimated potential for AR/MR applications, for a multitude of reasons:

  1. Availability: not only is office work typically done near-surface (on a desk/table), but many recreational activities are also suited to near-surface interaction (table games, etc.)
  2. Naturalness and precision of interaction: handling a touch display placed on a table while moving it around feels natural.
  3. Tangibility: devices can be used as tangible UI elements on a surface
  4. Low intrusiveness: no helmet, no “in-the-face” device screen, no hardware installation except an additional device sensor (which can be incorporated in the device itself)
  5. Co-location: it offers an excellent sharing environment for social, multi-device co-located interaction – video here.
  6. Projection-enhanceable if a more immersive experience is desired –  video here.

We envision near-surface AR as a complementary, not competing, technology to the continuum of currently existing approaches. Find out more on our home page.