This is understandably one of the questions we were asked most frequently at AWE 2017, so I believe it deserves a blog post.
Some context first: Augmented Reality, or AR, is the process of overlaying something digital on top of the real world. Exactly how and where the digital/physical interface is located is not fixed a priori. That being said, most current mainstream AR is near-eye, placing the digital/physical interface on the user’s side, either on a smartphone screen or on a head-mounted display/smart glasses. The opposite approach is to place the augmenting digital information on the world side, directly on surfaces in the real world. One example is projection-based AR like Lampix. Another one is Neartracker, which is smartphone-based. AR that happens directly on or very close to real-world objects is what I call near-surface AR, to differentiate it from the first, near-eye category.
Near-eye AR in a nutshell:
- Conventional smartphone-based AR is readily available and takes advantage of existing hardware. However, it forces the user to actively hold and move a device with a small screen, which is tiresome, looks awkward, keeps the hands busy and potentially raises privacy concerns – which is probably why smartphone-based AR is not more prevalent.
- Head-mounted display-based AR offers a much more immersive, hands-free experience and has seen a huge technological push lately, especially in industrial applications. However, AR displays and glasses are still heavy and expensive, and many users regard them as intrusive and unnatural.
Near-surface AR essentially frees the user’s hands and eyes: there is no device to hold or wear. The “reality” part in “augmented reality” is seen directly with the naked eye.
- Projection-based AR (also called spatial AR) uses either a fixed or a mobile projector, optionally combined with a camera system that allows user interaction/input on the projected surface – typically a desk.
- Near-surface smartphone-based AR is what we envision with Neartracker: smartphones placed directly onto arbitrary real-world surfaces. It blends together a few key features:
- it is essentially hands-free (the phone rests on the surface) while still allowing precise touch-screen interaction
- it takes full advantage of existing, widespread hardware (smartphones) as well as existing AR SDKs and frameworks
- unlike conventional smartphone-based AR, which tracks specific markers, images or objects, it enables “magic lens” usage of smartphones across surfaces of arbitrary size via a grid of almost-invisible markers.
- compatible with – but not requiring – projection-based spatial augmentation (only for use cases where projection makes sense, like interactive games).
- unlike projection-based AR, it is compatible with printed content (as long as the tracking grid remains partially visible): it can turn a paper sheet into a UI
- it turns smartphones into tangible digital avatars, effectively bridging AR and tangible user interfaces, which use physical objects to control digital environments – check our game example.
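To make the marker-grid idea above more concrete, here is a minimal, purely illustrative sketch of how a grid of uniquely identifiable markers can localise a phone anywhere on a surface. All names, the grid pitch and the ID-to-cell encoding are my own assumptions for illustration, not Neartracker’s actual implementation:

```python
# Illustrative sketch: a surface tiled with uniquely identifiable markers,
# laid out in a row-major grid. Seeing any single marker is enough to
# localise the phone, so coverage scales with the number of markers, not
# with the camera's field of view. All constants and names are assumptions.

GRID_PITCH_MM = 50.0  # assumed spacing between neighbouring marker centres

def marker_to_surface_position(marker_id, grid_cols, offset_mm):
    """Map a detected marker to absolute surface coordinates in mm.

    marker_id -- unique ID encoding the marker's cell (row-major order)
    grid_cols -- number of marker columns on the surface
    offset_mm -- (x, y) offset of the camera from the marker's centre,
                 as measured by the tracking algorithm
    """
    row, col = divmod(marker_id, grid_cols)
    marker_x = col * GRID_PITCH_MM
    marker_y = row * GRID_PITCH_MM
    return (marker_x + offset_mm[0], marker_y + offset_mm[1])

# Marker 7 in a 4-column grid sits at row 1, col 3, i.e. (150 mm, 50 mm);
# adding the measured camera offset gives the phone's surface position.
print(marker_to_surface_position(7, grid_cols=4, offset_mm=(3.0, -2.0)))
```

The design point this illustrates: because every marker’s ID encodes its own location, the tracked area can grow simply by printing more markers, with no per-surface calibration.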
In conclusion, it should be noted that no single AR flavour suits every use case, and near-surface AR is no exception. It makes most sense for scenarios in which the interaction naturally happens on a surface (print + digital magic lens, virtual desktop, mixed-reality games, smart tables for collaboration, exhibitions, education, etc.). In other scenarios, for example those that require displaying objects in mid-air, head-mounted displays are the way to go.