1 UNDERSTANDING AUGMENTED REALITY
Imagine your car breaks down in the middle of the highway. You know very little about vehicle mechanics, and the next garage is miles away. Today, this breakdown is likely to cost you lots of time and money to fix; but tomorrow, it may be no more than a minor glitch in your day. Using your smart glasses, you'll be able to launch your repair app and assess the problem through your normal line of sight, accompanied by a step-by-step repair guide for your particular make and model of car, without the help of a mechanic.
Or imagine you’re out shopping, and want to know how other consumers have rated a jacket that you’re thinking of buying. By glancing down at the product with your smart glasses, you’ll instantly see extra information displayed alongside the jacket – user ratings, product price range, and supply information – all of which empowers your purchasing decision.
This is Augmented Reality (AR) – where every object you see could be enriched with additional and valuable information. AR is defined as the expansion of physical reality by adding layers of computer-generated information to the real environment.1 Information in this context could be any kind of virtual object or content, including text, graphics, video, sound, haptic feedback, GPS data, and even smell. But AR is more than a simple display technology. It also represents a new type of real-time natural user interface for human interaction with objects and digital devices.
AR is made possible by performing four basic and distinct tasks, and combining the output in a useful way.
Figure 1: Basic Functionality of Augmented Reality; Source:
- Scene capture: First, the reality that should be augmented is captured using either a video-capture device such as a camera, or a see-through device such as a head-mounted display.
- Scene identification: Secondly, the captured reality must be scanned to define the exact position where the virtual content should be embedded. This position could be identified either by markers (visual tags) or by tracking technologies such as GPS, sensors, or infrared.
- Scene processing: As the scene becomes clearly recognized and identified, the corresponding virtual content is requested, typically from the Internet or from any kind of database.
- Scene visualization: Finally, the AR system produces a mixed image of the real space as well as the virtual content.
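The four tasks above can be read as a simple processing pipeline. The following sketch is purely illustrative – the function names, marker ID, and content database are invented placeholders, not part of any real AR SDK:

```python
# Minimal sketch of the four basic AR tasks as a pipeline.
# All functions and data are illustrative stand-ins.

def capture_scene():
    # Scene capture: in a real system this would grab a camera frame
    # (plus sensor data such as a GPS fix).
    return {"frame": "raw_pixels", "gps": (53.55, 9.99)}

def identify_scene(scene):
    # Scene identification: locate a marker (visual tag) or use
    # GPS/sensor tracking to find where virtual content belongs.
    return {"marker_id": "jacket_tag_42", "position": (120, 80)}

def process_scene(anchor):
    # Scene processing: fetch the virtual content for the recognized
    # anchor, e.g. from the Internet or a local database.
    content_db = {"jacket_tag_42": "user rating: 4.5/5"}
    return content_db[anchor["marker_id"]]

def visualize_scene(scene, anchor, content):
    # Scene visualization: compose the real frame and the virtual overlay.
    return f"{scene['frame']} + '{content}' at {anchor['position']}"

scene = capture_scene()
anchor = identify_scene(scene)
content = process_scene(anchor)
augmented = visualize_scene(scene, anchor, content)
print(augmented)
```

In a real system each stage is far more involved (computer vision, pose estimation, rendering), but the data flow between the four stages is essentially this.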
Experts also differentiate between Augmented Reality and Virtual Reality (VR). VR is a completely computer- generated, immersive and three-dimensional environment that is displayed either on a computer screen or through special stereoscopic displays, such as the Oculus Rift.
Despite the surge in widespread media coverage over the past 12 months, the majority of AR solutions that we read about today are still in development. Only a few hardware solutions are being mass produced and readily available to purchase off the shelf.
Just a couple of years ago, there were only a handful of commercially available AR applications – in fact, the first AR app for the iPhone was not released until 2009.2 In 2011, global AR revenues were as low as USD 181 million3 and, at that time, AR was often perceived by the public as just a marketing gimmick: a technology in search of a useful application. There was little public awareness, and applications were primarily developed to gain quick PR wins, or their value was limited to attention-grabbers such as adding video effects.
However, the latest forecasts predict that by 2017 the AR market will grow to USD 5.2 billion – an impressive annual increase of almost 100%. With substantial funding being poured into AR projects and start-ups, especially by large corporations such as Google, Canon, and Qualcomm, we can expect the first significant wave of consumer-ready AR products to be launched over the next 12 months. And with concrete business benefits coming to light, experts are convinced that AR will be the 'next big thing' in the consumer, medical, mobile, automotive, and manufacturing markets.
AR is no longer just a marketing ploy. We will see continued uptake of AR and, as it grows, its application will be accelerated by technological progress.
Figure 2: Global AR Revenues 2012–2017 (estimate); Source: Xcubelab
1.2 Hardware Overview
Both the development and implementation of AR software solutions rely, in turn, on the development of suitable and robust AR hardware platforms. And this hardware development is driven by technological progress in the fields of computer processors, displays, sensors, mobile Internet speed, battery life, and more. Looking at the types of AR platform currently available, and at what lies ahead, the following categories of AR hardware can be identified today:
- Handheld Devices
- Stationary AR Systems
- Spatial Augmented Reality (SAR) Systems
- Head-mounted Displays (HMDs)
- Smart Glasses
- Smart Lenses
Handheld Devices
Figure 3: Smartphone Example of a Handheld Device; Source: Freshmindstalent
We are currently experiencing a massive boom in Handheld Devices such as smartphones and tablet computers, and this will accelerate AR adoption. These devices are appearing with ever-better features such as higher-resolution displays, more powerful processors, and high-quality cameras, along with a growing array of sensors providing accelerometer, GPS, and compass capabilities, making them very suitable AR platforms. Although handheld devices are the easiest way for consumers to access AR apps, most are not wearable and so they cannot give users a hands-free AR experience.
Stationary AR Systems
Figure 4: Stationary AR Wardrobe at a Topshop in Russia; Source: Mashable
Stationary AR Systems are suitable when a larger display or higher resolution is required in a permanent location. Unlike mobile AR devices, these motionless systems can be equipped with more advanced camera systems and can therefore provide more precise recognition of people and scenes. Moreover, the display unit often shows more realistic pictures and is less affected by environmental influences such as sunlight or dim lighting.
Spatial Augmented Reality (SAR) Systems
Figure 5: SAR System at Volkswagen; Source: Volkswagen;
In contrast to all other systems, Spatial Augmented Reality (SAR) Systems project virtual content directly onto the real-world scene. SAR systems are frequently stationary in nature. Any physical surface such as walls, desks, foam, wooden blocks, or even the human body can be turned into an interactive display.
With projectors decreasing in size, cost, and power consumption, and with progress in 3D projection, a completely new range of interaction and display possibilities is emerging. The biggest advantage of SAR systems is that they provide a more accurate reflection of reality, as virtual information can be visualized with actual proportions and size. Furthermore, content can be made visible to a larger number of viewers, enabling, for example, simultaneous collaborative working.
Head-mounted Displays (HMDs)
Figure 6: Canon's Mixed Reality Headset as an HMD Example; Source: Digitaltrends
Head-mounted Displays (HMDs) represent another rapidly growing AR hardware category. HMDs consist of a headset, such as a helmet, paired with one or more (micro-)displays. HMDs place images of both the physical world and virtual objects over the user's field of view. In other words, the user does not see reality directly, but sees an (augmented) video image of it. If the display is placed in front of only one of the user's eyes, it is called a monocular HMD (in contrast to binocular systems, where both eyes view the display). Modern HMDs often employ sensors for six degrees of freedom (allowing the user to move their head freely forward/backward, up/down, left/right, and in pitch, yaw, and roll). This enables the system to align virtual information to the physical world, and to adjust according to the user's head movements.
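The rotational part of this alignment can be illustrated with a small sketch: build a yaw/pitch/roll rotation matrix for the head pose, then apply its inverse so a virtual object stays fixed in the world as the head turns. This is a minimal example under stated assumptions (pure Python, yaw about Z, pitch about Y, roll about X), not how any particular HMD implements tracking:

```python
import math

def rot_zyx(yaw, pitch, roll):
    # 3x3 rotation matrix composed as Rz(yaw) @ Ry(pitch) @ Rx(roll),
    # angles in radians (a common aerospace-style convention).
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply(m, v):
    # Multiply a 3x3 matrix by a 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# A virtual label one meter straight ahead of the user.
label = [1.0, 0.0, 0.0]

# If the head yaws 90 degrees, the renderer rotates the label by the
# inverse head rotation so it stays anchored in the world.
head = rot_zyx(math.radians(90), 0.0, 0.0)
inverse = [[head[j][i] for j in range(3)] for i in range(3)]  # transpose = inverse for rotations
in_view = apply(inverse, label)
```

After the 90-degree yaw, the label that was straight ahead ends up off to the side in view coordinates, which is exactly the head-movement compensation the text describes.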
Smart Glasses
Figure 7: Vuzix M100 Smart Glasses; Source: Vuzix
Many companies from the consumer electronics industry expect Smart Glasses to be the next global consumer hit after smartphones. These AR devices are in essence glasses equipped with screens, cameras, and microphones. With this concept, the user's real-world view is intercepted and an augmented view re-displayed in the user's field of vision. AR imagery is projected through, or reflected off, the surface of the eyewear lens pieces. The most prominent examples of this technology are Google Glass and the Vuzix M100. However, one of the most exciting smart glasses developments today is the Atheer One – these smart glasses are equipped with 3D depth sensors, allowing users to physically control the virtual content displayed in front of them.
Smart Lenses
Figure 8: Lens Containing Metal Circuit Structures, Developed at the University of Washington; Source: Washington
Glasses are certainly not the end of the story. Research into Smart Lenses that can display AR imagery is gaining momentum, and companies such as Microsoft and Google are unveiling their own smart lens projects.
The idea is to turn conventional lenses into a functional system by integrating control circuits, communication circuits, miniature antennas, LEDs, and other optoelectronic components. In future, hundreds of integrated LEDs could be used to form an image directly in front of the eye, transforming the lens into a display. However, before this can become reality, a couple of significant challenges must be overcome, such as how to power the lenses, and how to ensure that the human eye is not damaged.
In this section, we explore how this emerging technology is currently being used across different sectors, and anticipate best practice that's likely to become mainstream in future. We have selected a number of innovative AR examples, clustered into four functional categories; each provides individuals or companies with significant benefits when using AR applications.
2.1 Context-sensitive Information – Information at the Right Time and Place
The first cluster is context-sensitive information, encompassing various applications that enable easy access, in context-specific situations, to static data that's already available on the Internet.
Figure 9: Wikitude; Source: Wikitude
Wikitude and Metaio’s Junaio are two leading examples of AR browsers that provide context-sensitive information software capable of recognizing locations or objects to link digital information to real-world situations. The software runs on any smartphone and displays additional digital information about the user’s surroundings in a mobile camera view.
This additional digital information could be nearby places of interest, for example, such as museums, shops, restaurants, or the pedestrian route to the next bus stop. The software includes image recognition, user position localization via GPS and WiFi, and 3D modelling.
Figure 10: Word Lens; Source: Questvisual
One of the most promising areas of application for AR is language translation. An existing app is Word Lens, software which runs on almost any smartphone and translates text from one language to another in real time. With this app running, the user merely points their device at a piece of text written in a foreign language. The device then displays this text translated into the user's native language, rendered in a similar font and on the same real-world background as the original.
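A toy version of this translate-in-place loop can be sketched in a few lines. Everything here is a stand-in: the 'OCR' step, the two-word dictionary, and the bounding boxes are invented for illustration, and Word Lens itself uses its own proprietary computer vision:

```python
# Hedged sketch of an OCR -> translate -> overlay loop.
# detect_text_regions() fakes an OCR result; a real app would run
# text detection on the live camera frame.

def detect_text_regions(frame):
    # Pretend we found one word and its bounding box in the frame.
    return [{"text": "salida", "bbox": (40, 10, 120, 30)}]

# Tiny stand-in dictionary; a real app ships full language packs.
SPANISH_TO_ENGLISH = {"salida": "exit", "entrada": "entrance"}

def translate(word):
    # Fall back to the original word if it is not in the dictionary.
    return SPANISH_TO_ENGLISH.get(word.lower(), word)

def overlay(frame, regions):
    # Re-render each translated word at the original bounding box, so it
    # appears on the same real-world background as the source text.
    return [(translate(r["text"]), r["bbox"]) for r in regions]

result = overlay("camera_frame", detect_text_regions("camera_frame"))
```

The key idea the sketch captures is that translation replaces the text but reuses its detected position, which is what makes the result feel anchored to the real world.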
Figure 11: Infinity Augmented Reality App; Source: Infinity
Another example of easy access to Internet information in context-specific real-life situations is the combination of face detection and AR. An app promised to be available soon is the Infinity AR application. The concept is to analyze a face and match it against profile pictures found in social networks (e.g., Facebook). Information posted in the matched profile is then displayed in the user's field of vision.
As well as being useful in consumer applications, this technology is very promising for law enforcement agencies (e.g., scanning crowds for wanted criminals). Understandably, this application has also raised many privacy concerns.
Figure 12: Volkswagen MARTA; Source: MARTA
A highly practical example of providing the right information at the right place in the automotive sector is the MARTA (Mobile Augmented Reality Technical Assistance) system developed by VW. This system comes in handy when a car isn't running properly, helping its user to perform vehicle repairs and maintenance. It recognizes vehicle parts via object recognition, and describes and illustrates all required repair and maintenance steps in detail and in real time, along with information about any equipment requirements. The app runs on various mobile devices. Currently, the system is for the exclusive use of VW Service, but it is conceivable that such systems could become available for consumer markets in future, helping everyone to fix their cars without knowing very much about mechanics.
2.2 Enhanced Senses – Becoming Human 2.0
Even today, AR applications can offer much more than just retrieving Internet information on the go. The following AR use cases enhance reality using newly generated information from data gathered mostly by the device’s sensors. They feature a range of devices that enhance the senses, extending human capabilities beyond our current achievements.
Figure 13: Recon Jet; Source: Recon Instruments
Recon Jet is an already available AR system for leisure activities. The device's sports-oriented heads-up display connects to third-party sensors over Bluetooth and WiFi, and offers navigation and weather information, access to social networks, and real-time information about performance – for example, a runner would want to know their speed, distance to the finishing line, current elevation gain, and their heart rate. With these capabilities, the Recon Jet points towards the future development of wearable AR that can monitor the vital signs and surroundings of people working in hazardous environments or physically demanding jobs.
Figure 14: BMW HUD; Source: BMW
Another heads-up display (HUD) is used to project sensory information such as driving speed onto the windshield of some BMW cars. This enhanced-senses capability has been used by the automotive company since 2004, and BMW is constantly working to improve this HUD with additional features. BMW's current ConnectedDrive HUD is augmented by virtual markings that are superimposed on real objects in the external environment. This allows navigation information or information from driver assistance systems to be displayed in exactly the right position on the driver's view of the road scene. Navigation instructions can be blended into the road, and vehicles or safety-relevant objects can be highlighted or marked in context. A great example is the visual information provided by BMW's night vision system.
Figure 15: iOnRoad; Source: iOnRoad
A similar but less advanced augmented driving assistance system for the mass market is the award-winning iOnRoad app. Using only the smartphone camera and some vision algorithms, it provides real-time features such as collision warning, headway monitoring, off-road warning, and a black box video recording function which can come in handy after an accident.
Figure 16: Liver Explorer; Source: Fraunhofer
In a completely different field of application, surgeons can access enhanced senses with the Liver Explorer app by the developer Fraunhofer MEVIS. This app provides real-time AR guides and assistance to the medical practitioner. The device's camera films the liver and, using AR, superimposes surgical planning data onto the organ. In addition, the software can react in real time (e.g., updating the surgical plan according to the movement of blood vessels which the system tracks constantly). These capabilities go beyond the MARTA system's provision of context-sensitive information. Assuming the app receives positive evaluation, it is likely to be modified for future expansion into additional surgical fields.
Figure 17: Q-Warrior Helmet; Source: Telegraph
In dangerous situations it is especially important to have crucial information at hand. Therefore the military is one of the biggest investors in AR applications. One military app is the Q-Warrior Helmet. This AR item is intended to provide soldiers with "heads-up, eyes-wide, finger-on-the-trigger" situational awareness, friend-or-foe identification, night vision, and an enhanced ability to remotely coordinate small units. The helmet transmits detailed positional information about each wearer to the others, allowing the system to gather, map, and share information and positions in real time on the battlefield and during reconnaissance. It is easy to look ahead and anticipate similar systems being developed for other professionals working in dangerous environments, such as firefighters and law enforcement personnel.
2.3 Mixed-reality Simulations – Exploring the Virtual in the Real
While the above examples augment reality by providing static digital information, this next AR cluster goes a step further. These so-called mixed-reality simulations allow users to dynamically adapt or change virtual objects in the real environment.
Figure 18: IKEA AR App; Source: IKEA
One of the most prominent examples is the latest Ikea catalog. Developed by Metaio, this AR app lets consumers use their mobile devices to 'place' digital versions of selected Ikea furniture in their real living rooms, making it easy to test whether the dimensions, style, and color of the furniture fit in a chosen position. The app also allows the user to change the size and color of each piece.
Figure 19: The Magic Mirror; Source: Trendhunter
Uniqlo's Magic Mirror offers an even more personal AR fitting experience. Introduced in 2012 in a Uniqlo shop in San Francisco, USA, this large augmented mirror recognizes the shopper's size and selected fashion item, so there's no need to try on different colors. The shopper simply puts on one item and steps in front of the mirror; a touchscreen then prompts the consumer to select other available hues, and projects back the modified reflection.
Figure 20: MREAL; Source: Engadget
The Mixed Reality System (MREAL) by Canon supports the design process by enabling the seamless merging of 3D computer-generated models with real-world objects in a real environment. For example, it can help with designing a new model of car in the automotive sector. MREAL allows multiple users to work collaboratively and simultaneously on a full-scale product design. The system can be used to analyze how real components will fit together with a newly planned design. It does this by creating a 3D model of both the existing components and the new concept, and then brings both together.
For example, an existing car seat can be integrated into the projection of a virtual new car design. Since MREAL delivers mixed reality, users can actually sit in the (real) seat and see both the real environment outside the car along with the digital representation of the car interior, including the planned new dashboard and steering wheel.
Figure 21: MiRA; Source: Highflyer
Another industrial AR app that's already in use comes from Airbus. With the master for a new aircraft production process developed entirely with digital tools, Airbus collaborated to create MiRA (Mixed Reality Application) in 2009. This app increases productivity on production lines by using AR to scan parts and detect errors. On the A380, MiRA, which today consists of a tablet PC and a specifically developed sensor pack and software, has reduced the time needed to check tens of thousands of brackets in the fuselage from 300 hours to an astonishing 60 hours. Furthermore, late discoveries of damaged, wrongly positioned, or missing brackets have been reduced by 40%.5
Figure 22: The Interactive Hatsune Miku; Source: Hatsune Miku
Our final example in this AR cluster gives a glimpse of what we can expect of AR apps in the mid-term future. A hacker from Japan used an available 3D model and cheap motion sensors to have an AR 'date' with the famous virtual Japanese pop star Hatsune Miku. In his video, he shows how he 'walks' with Miku in a real park and how Miku recognizes and reacts to real-world objects (e.g., by sitting on a real chair). This software even makes it possible to interact with the virtual pop star (e.g., touching her tie or head). While this application is clearly sensationalist, it is more than just a gimmick. It suggests that soon people may be accompanied by virtual companions who could provide assistance when needed (e.g., in medical or engineering tasks, or as a human-like interface for everyday digital issues such as managing a personal calendar, notes, and contacts).
2.4 Virtual Interfaces – Controlling the Real Through the Virtual
With more and more 'smart' objects connected to the Internet, and with new ways of accessing digital information, more and more people want to work with AR devices and data. Therefore, our fourth cluster, virtual interfaces, focuses on AR technologies that offer new options for controlling real-world objects through digital means. Essentially, this allows a mixed reality where real objects can be altered and controlled.
Figure 23: SixthSense; Source: Pranavmistry
An advanced way to interact with the digital world on the go is to use gestures. One example of a gestural interface system is SixthSense, developed at MIT. While this system currently uses spatial AR technology, it can also be used with all other technologies. The system allows the user to interact with information via natural hand gestures. In order to capture the intended input of the user, the camera recognizes and tracks the user's hand gestures using computer-vision-based techniques.
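The gesture-tracking step can be sketched in miniature. This assumes the colored-marker approach SixthSense is known for (colored caps on the fingertips): the pixel lists stand in for color-segmented camera output, and the pinch threshold is an invented parameter:

```python
# Toy sketch of marker-based fingertip tracking for a gestural interface.
# In a real system, color segmentation (e.g. with OpenCV) would produce
# the pixel lists for each colored fingertip marker.

def centroid(pixels):
    # Average position of all pixels belonging to one colored marker.
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def detect_gesture(red_pixels, blue_pixels, pinch_threshold=20.0):
    # Two fingertips close together are interpreted as a 'pinch' gesture;
    # the threshold is an illustrative value, not from SixthSense.
    rx, ry = centroid(red_pixels)
    bx, by = centroid(blue_pixels)
    dist = ((rx - bx) ** 2 + (ry - by) ** 2) ** 0.5
    return "pinch" if dist < pinch_threshold else "open"
```

Tracking the marker centroids frame by frame, and mapping distances and trajectories to commands, is the essence of how such systems turn raw camera pixels into user input.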
Figure 24: Revolv; Source: Revolv
AR-based interfaces are not limited to computer devices. They can be used to control cars, entertainment, and household appliances such as heating systems. One example is the home automation system Revolv, which is still under development. In combination with Google Glass, the system gives the user control over all digital devices in the household (e.g., the light system and locking system). The result is an augmented ‘smart’ household environment, which can be remotely controlled by voice or fingertip.
Virtual interfaces can go beyond the home, as shown by Yihaodian, the largest food e-retailer in China. The company recently announced that it was going to open up the first AR supermarket chain in the world.
Each of these supermarkets will have a floor space of around 1,200 m² and will be located in 'blank' public spaces (e.g., train or subway stations, parks, and college campuses). While the naked eye will just see empty floors and walls, people using an AR-capable device will see a complete supermarket, with shelves filled with digital representations of real-world products. To buy products, the user just scans each product with their mobile device, adding it to their online shopping cart. After completing their AR shopping tour, the user receives delivery of the products to their home. This is an enhancement of similar concepts such as the QR-based Tesco supermarkets in South Korea's subway stations.
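The scan-to-cart flow described above reduces to a simple data model. The sketch below is hypothetical: the product codes and prices are invented for illustration and do not come from Yihaodian:

```python
# Hypothetical scan-to-cart flow for an AR supermarket.
# Catalog contents are invented placeholders.

CATALOG = {"milk-1l": 1.20, "rice-5kg": 6.50}

class Cart:
    def __init__(self):
        self.items = []

    def scan(self, product_code):
        # Scanning a virtual shelf item adds the real product to the
        # online cart for later home delivery; unknown codes are ignored.
        if product_code in CATALOG:
            self.items.append(product_code)

    def total(self):
        # Sum the price of every scanned item.
        return sum(CATALOG[code] for code in self.items)

cart = Cart()
cart.scan("milk-1l")
cart.scan("rice-5kg")
cart.scan("unknown-item")  # not in the catalog: ignored
```

The AR layer changes only how products are presented; the purchase itself is an ordinary online-shopping transaction against the retailer's catalog.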
Figure 25: Infinite Yihaodian; Source: Augmented Reality Trends
Part 2: Augmented Reality (AR) Technology in Logistics
Source: DHL report