While also used in research and university contexts, “mixed reality” is Microsoft’s term for the holographic computing line associated with its HoloLens. It’s also the term used by the increasingly less-stealthy startup Magic Leap. HP calls it “blended reality”; Autodesk refers to it as “reality computing”; and 3D Systems uses the phrase “digital thread”. But all of these terms are meant to describe the same emerging 3D ecosystem.
But even “3D ecosystem” doesn’t quite encompass what’s happening. What we’re seeing is the development of a variety of technologies that represent a new way of interacting with the physical and digital worlds, marrying the two realms to the point that they’re no longer all that separate. The technologies driving this new paradigm, whose name has yet to be settled by the zeitgeist and large corporations, are VR, AR, 3D scanning, 3D modeling, 3D printing, haptic devices, and probably some other stuff we can’t even conceive of yet. Altogether, these technologies, in their idealized forms, will allow us to seamlessly transfer digital and physical data across mediums, making computing more intuitive and manufacturing more fluid.
In practice, this means that physical objects will be captured via 3D scanners, bringing their data into the digital world to be modified with, or accompanied by, models created in 3D modeling software. This modeling and manipulation will be performed with haptic devices – gesture controllers, touch-sensitive tools, etc. – so that the models feel as though they’re physical objects. The models will be displayed with VR and/or AR devices, so that they look as real as they feel. And all of this digital data could, potentially, be brought back into the physical world with 3D printing.
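The workflow above is easiest to picture as a loop from atoms to bits and back again. Here is a minimal Python sketch of that pipeline; every class and function name is invented purely for illustration, and each step stands in for hardware and software that would do the real work:

```python
from dataclasses import dataclass, field

@dataclass
class Model3D:
    """A stand-in for a 3D mesh as it moves through the mixed reality pipeline."""
    source: str
    edits: list = field(default_factory=list)

def scan_physical_object(name):
    """Step 1 (hypothetical): a 3D scanner digitizes a physical object."""
    return Model3D(source=f"scan:{name}")

def edit_with_haptics(model, adjustment):
    """Step 2 (hypothetical): the model is reshaped with haptic or gesture tools."""
    model.edits.append(adjustment)
    return model

def display_in_headset(model):
    """Step 3 (hypothetical): the model is rendered in a VR/AR headset."""
    return f"rendering {model.source} with {len(model.edits)} edit(s)"

def print_model(model):
    """Step 4 (hypothetical): the finished model returns to the physical world."""
    return f"printed:{model.source}"
```

In use, the same object flows through all four steps: a scan of a vase becomes an editable model, gets reshaped, is previewed in a headset, and is finally 3D printed.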
Virtual Reality Is Among Us
At the moment, these technologies are at stages ranging from conception to infancy to adolescence.
What we’ve got is an array of VR headsets, like the long-awaited Oculus Rift, HTC Vive, and those that rely on smartphones, such as the Samsung Gear VR, or the Structure Sensor, a 3D scanner that can be combined with an iPhone and headset to create a VR system. Then, there’s the HoloLens, an AR system that almost seems to grasp this whole mixed/blended reality computing concept better than any other product out there, as of now… if it works.
With such headsets, users will be able to immerse themselves completely in their computing experiences, the operating system wrapped around their heads and in 3D. There, 3D models, whether they be scanned reality data from the physical world or crafted characters in a video game, would begin to populate a hyperreal version of the real world.
Combined with haptic devices, like 3D Systems’ Touch stylus, and gesture controllers, like the Leap Motion, the computing experience becomes that much more immersive. Manipulating digital data feels as intuitive and natural as handling objects in the physical world. I mean, it’s like you can really feel them.
Then, to transfer any of this digital data back into the physical world, we’ve got the still nascent technology of 3D printing, which is becoming increasingly capable. More immediately, models created in the digital space can be made physical in a wide variety of materials: thermoplastics, nylon, and photopolymers; composites filled with wood, metal, stone, carbon fiber, or graphene; reinforcements like carbon fiber, fiberglass, and Kevlar; biological tissue; food; and conductive inks, all on desktop printers. Industrial printers offer a range of metals, sand for casting, rubber-like plastics, full-color gypsum, glass, and cement, along with machines boasting huge build volumes. The next stage will be to merge these materials during the printing process, and to combine them with electronics, to create fully functional objects. MIT is already getting there with a 3D printer capable of fusing 10 materials at once. Voxel8 is, too, with a desktop printer that combines conductive silver ink with PLA plastic and, soon, elastomers. Others, like Nano Dimension, are even 3D printing complete and highly accurate PCBs.
Making the Virtual Real and the Real Virtual
To bridge the gap between the hardcore futurists and the mainstream population, there are a number of consumer-friendly devices, already available or in development, that most folks wouldn’t be scared to carry around and that will prep them for a time when mixed reality is the norm. Think pre-Google Glass, but post-iPhone.
For instance, the aforementioned Structure Sensor is an affordable 3D scanner from a forward-thinking startup called Occipital. What makes the company forward-thinking is that its $379 scanner – which attaches to an iPad or iPhone for accessible 3D capture of the physical world – is a gateway to the emerging mixed reality field. As a scanner, it can create accurate 3D models and, as a depth sensor, it can power an AR/VR headset that actually registers one’s physical environment – ideal for interior decorating, architectural planning, and gaming. These features are still pretty young, but the device and headset are already available now, with no beta tester waitlist to sit on.
Google will take this one step further with Project Tango, a Lenovo-produced phablet that will put 3D sensing into the pockets of consumers this summer for a very reasonable price tag of under $500. Apple, a company many have been skeptical about when it comes to VR, may also be producing a 3D-sensing iPhone, the iPhone 7 Plus. According to KGI Securities analyst Ming-Chi Kuo, one of two iPhone 7 Plus models to be released this coming September will feature dual rear cameras for possible 3D capture. If they can get to market soon enough, Project Tango and the iPhone 7 Plus could officially kickstart the mixed reality era, as ordinary individuals would begin capturing the world in 3D, sharing their moments as 3D scans on sites like Sketchfab, which aims to be the three-dimensional version of Facebook. That would make 3D displays actually matter to vloggers, bloggers, and social media users in general. In other words: everyone.
While the porn industry is already capitalizing on this technology (foreshadowing its success?), this limitless realm of creativity has vast potential for human life. The most banal examples include doctors rehearsing surgeries before performing them. In harder-to-picture scenarios, engineers could feel their way through complex shapes to solve intricate problems – say, simulating the Deepwater Horizon catastrophe and designing a solution that could only be manufactured with 3D printing, thanks to the technology’s ability to create complex geometries. Almost like lucid dreaming, individuals would be able to explore impossible landscapes, generate unthinkable objects, and bring real solutions into the physical world with 3D printing. We’d be realizing Buckminster Fuller’s concept of ephemeralization, in which you do “more and more with less and less until eventually you can do everything with nothing.”
All of this is kind of hypothetical, of course. But so was desktop computing and its potential, once. So was the potential of automated manufacturing. If this hypothetical future of mixed reality becomes real, the question will not just be what it looks like, but who controls it. Will it continue to be a tech ecosystem managed by giants like Alphabet, née Google? Or will the simultaneously emerging Maker movement be able to wrest control for those outside of the multinationals?
Because those giants already have terms for this ecosystem. So, before we can even begin to address those concerns, we must first be able to acknowledge that the mixed reality ecosystem is emerging in the first place. Maybe we should even come up with our own term for “mixed reality” before they remix our reality for us.