High-Precision Globally-Referenced Augmented Reality for Mobile Devices
[[{"type":"media","view_mode":"media_preview","fid":"597","attributes":{"alt":"","class":"media-image","height":"180","width":"180"}}]]
Augmented reality (AR) seeks to blend the real world with context-relevant virtual elements that enhance the user's experience. Since the term was coined in the early 1990s, AR has been a perennial disappointment, failing to live up to the original vision of immersive, spontaneous engagement with a blended virtual and real world.
The primary limitation of current AR approaches is the extremely precise positioning and orientation required to maintain the illusion of realism as virtual objects augment a view of the real world. The illusion collapses whenever the AR application fails to accurately register the virtual elements against the real background. Techniques exist to create convincing augmented visuals despite small registration errors, but these either require a prior map of the environment, which rules out spontaneous interaction with unmapped surroundings, or are limited to placing virtual objects relative to local visual markers.
Our goal is to create a globally-referenced, highly-accurate AR system that can be invoked spontaneously for construction, utility work, social networking, multiplayer games, and other applications where a consistent large-area coordinate system is required and no prior map exists.
Global Reference Frame: Referenced to the gold-standard earth-centered, earth-fixed International Terrestrial Reference Frame.
High Accuracy: Absolute camera pose with centimeter-level or better positioning accuracy and 1/10th of a degree attitude accuracy.
The key to our approach is a near-optimal combination of visual simultaneous localization and mapping (visual SLAM) and centimeter-accurate carrier-phase differential GNSS (CDGNSS) adapted for mobile devices. We believe that video-see-through AR built on this technology has the potential to become a must-have application for future mobile devices.
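To make the division of labor concrete, here is a minimal sketch of a loosely-coupled fusion step in Python: CDGNSS supplies an absolute, centimeter-accurate position fix, visual SLAM supplies a smooth but slowly drifting pose, and the two are combined by the standard minimum-variance (Kalman-style) weighting. The function name, noise values, and loosely-coupled structure are illustrative assumptions, not the published system design.

```python
import numpy as np

def fuse_position(p_slam, P_slam, p_cdgnss, P_cdgnss):
    """Minimum-variance fusion of two estimates of the same position:
    a visual-SLAM estimate (smooth but drifting) and a CDGNSS fix
    (absolute, centimeter-accurate)."""
    K = P_slam @ np.linalg.inv(P_slam + P_cdgnss)   # gain: trust ~ 1/covariance
    p_fused = p_slam + K @ (p_cdgnss - p_slam)      # pull SLAM toward the fix
    P_fused = (np.eye(len(p_slam)) - K) @ P_slam    # fused uncertainty shrinks
    return p_fused, P_fused

# Hypothetical numbers: SLAM covariance has grown to ~5 cm per axis from
# drift; the CDGNSS fix is good to ~1-2 cm. All values illustrative.
p_slam   = np.array([10.02, 5.01, 1.98])            # m, ITRF-aligned frame
P_slam   = np.diag([0.05, 0.05, 0.05]) ** 2
p_cdgnss = np.array([10.00, 5.00, 2.00])            # m, ITRF
P_cdgnss = np.diag([0.01, 0.01, 0.02]) ** 2

p, P = fuse_position(p_slam, P_slam, p_cdgnss, P_cdgnss)
print("fused position:", p)
print("fused 1-sigma (m):", np.sqrt(np.diag(P)))
```

In the actual system the attitude states and SLAM map are estimated jointly; this snapshot only illustrates why the combination inherits CDGNSS's global accuracy and SLAM's smoothness.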
Our approach can also be seen as a platform for delivering robust position and orientation estimates in environments where GPS signals are weak or intermittent. Starting from an initial global pose obtained where GPS signals are available, the platform can be carried into a GPS-denied environment and will maintain a highly accurate global pose estimate from visual SLAM alone. The slow drift of the visual SLAM solution in such GPS-denied environments, which in simulation experiments amounts to less than 0.4% of distance traveled, is commensurate with, or better than, the drift of a tactical-grade inertial measurement unit (IMU), yet at far less size, weight, and cost than an equivalent IMU solution.
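To put the 0.4% figure in scale (a simulation result, so treat the rate as illustrative rather than a guaranteed bound): a 250 m traverse of a GPS-denied building would accrue roughly 1 m of drift. A trivial back-of-the-envelope sketch:

```python
def slam_drift_m(distance_m, drift_rate=0.004):
    """Approximate visual-SLAM drift at the ~0.4%-of-distance rate
    quoted above (simulation figure, used here for illustration)."""
    return distance_m * drift_rate

for d in (50, 250, 1000):
    print(f"{d:5d} m GPS-denied travel -> ~{slam_drift_m(d):.1f} m drift")
```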
Videos
Todd Humphreys's TED talk articulates our vision for precise, spontaneous augmented reality.
We demonstrate here an early prototype of our augmented reality system.
Papers
D.P. Shepard and T.E. Humphreys, "High-Precision Globally-Referenced Position and Attitude via a Fusion of Visual SLAM, Carrier-Phase-Based GPS, and Inertial Measurements," Proc. IEEE/ION PLANS, Monterey, CA, May 2014.
K.M. Pesyna Jr., Z.M. Kassas, R.W. Heath Jr., and T.E. Humphreys, "A Phase-Reconstruction Technique for Low-Power Centimeter-Accurate Mobile Positioning," IEEE Transactions on Signal Processing, 2014.
D.P. Shepard, K.M. Pesyna, and T.E. Humphreys, "Precise Augmented Reality Enabled by Carrier-Phase Differential GPS," Proc. ION GNSS, Nashville, TN, 2012.