What Is Spatial Computing?


The new year is here and some of the tech sector’s biggest announcements are just around the corner. The Consumer Electronics Show (CES) takes place this week, and we will be hearing a lot more about spatial computing and mixed reality from the companies keynoting and exhibiting at one of the tech world’s biggest events.

Generative AI is currently top of mind for many and will be a big focus during the show, especially after the transformative impact GenAI has had on the business world over the past year. Spatial computing, on the other hand, is still in its evolutionary phase. "Spatial computing" is not a term in widespread use, nor is it well understood, and while many of us have been working in the field for years, its impact is just starting to be felt. For many in the industry this is a big year for spatial computing, and it all starts with CES.

How Academia And Tech Companies Have Defined Spatial Computing

For many professionals, Apple's WWDC conference last June may have been the first place they heard the term spatial computing, but the term has been around for a while, and many people have been working in the field for years.

Many attribute the academic introduction of the term spatial computing to Simon Greenwold's 2003 MIT master's thesis. As a researcher in the Aesthetics and Computation group at the MIT Media Lab, Greenwold explored spatial contexts for computational constructs. In his thesis he defines spatial computing as:

“…human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces. It [Spatial Computing] is an essential component for making our machines fuller partners in our work and play.”

Greenwold elaborated: "Ideally, these real objects and spaces have prior significance to the user. Spatial computing is more interested in qualities of experience. Mostly, it means designing systems that push through the traditional boundaries of screen and keyboard without getting hung up there and melting into interface or meek simulation. In order for our machines to become fuller partners in our work and play, they are going to need to join us in our physical world. They are going to have to operate on the same objects we do, and we are going to need to operate on them using our physical intuitions."

Greenwold's definition isn't the only one out there. Magic Leap, once one of the darlings of the venture capital and tech world, described the device it was building, the Magic Leap One, as a spatial computing device very early on. In a 2018 article titled Spatial Computing: An Overview for our Techie Friends, former CEO Rony Abovitz and several other prominent Magic Leap employees defined spatial computing as a new form of computing that uses AI and computer vision to seamlessly blend virtual content into the physical world, allowing digital content to move beyond the confines of today's 2D screens and computers. Since then, Magic Leap has pivoted away from the term "spatial computing" in favor of augmented reality (AR), as seen in its most recent media interviews and on its website.

During Apple's WWDC keynote in June 2023, Apple CEO Tim Cook said spatial computing "seamlessly blends digital content with the physical world while allowing users to stay present and connected to others." This messaging is reflected on Apple's website and in its visionOS developer literature.

During last year's Meta Connect developer conference, Meta CEO Mark Zuckerberg announced the launch of the Meta Quest 3, which uses new chips that allow the device to deliver better pass-through mixed reality, better scanning of the physical world through advanced spatial mapping, and spatial anchoring of virtual objects that wearers can come back to each time they use the device. Zuckerberg also spoke about ushering in the next computing platform through advancements in smart glasses. Meta's CTO, Andrew Bosworth, proclaimed that the new headset was the "best value spatial computing headset on the market for a long time to come." The company also announced its new Ray-Ban Meta Smart Glasses, which have become multimodal, allowing the glasses to understand the world around wearers using AI.

Microsoft has defined spatial computing as the ability of devices to be aware of their surroundings and to represent them digitally, and has said spatial computing offers novel capabilities in human-robot interaction.

Amazon Web Services (AWS) defines it as the combination of the virtual and physical worlds that allows users to interact with digital content in a natural and intuitive way, virtualizing our physical world and overlaying virtual information onto it. To AWS, this combination enhances how we visualize, simulate, and interact with data in physical or virtual locations. In his post "The Best way to Predict the Future is to Simulate it," Amazon VP of Technology Bill Vass stated, "Spatial computing is what powers collaborative experiences."

Additionally, in the book The Infinite Retina, the term is discussed like this: “with Spatial Computing, the Fourth Paradigm, computing escapes from small screens and can be all around us. We define Spatial Computing as computing that humans, virtual beings, or robots move through. It includes what others define as ambient computing, ubiquitous computing, or mixed reality. We see this age as also inclusive of things like autonomous vehicles, delivery robots, and the component technologies that support this new kind of computing, whether laser sensors, Augmented Reality, including Machine Learning and Computer Vision, or new kinds of displays that let people view and manipulate 3D-centric computing.”

This week, we will surely hear from many other tech companies about what they are planning to do in spatial computing, especially in the context of today’s AI revolution.

So, What Is Spatial Computing?

Spatial computing is an evolving 3D-centric form of computing that, at its core, uses AI, computer vision, and extended reality to blend virtual experiences into the physical world, breaking free from screens. In a robust spatial computing experience, almost any surface could serve the same role as a screen or even a touch-sensitive interface, making almost any surface a spatial interface. It allows humans, devices, computers, robots, and virtual beings to navigate through physical 3D spaces. It ushers in a new paradigm for human-to-human and human-computer interaction, enhancing how we visualize, simulate, and interact with data in physical or virtual locations, and expanding computing beyond the confines of the screen into everything you can see, experience, and know.

Spatial computing allows us to navigate the world alongside robots, drones, cars, virtual assistants, and beyond. It is not limited to one technology or one device. It is a mix of software, hardware, and information that allows humans and technology to connect in new ways, ushering in a form of computing that could have an even greater impact on society than personal computing and mobile computing have had.

It is a scale technology that gets its "eyes and ears" from AI and computer vision and ushers in the era of Large Vision Models (LVMs). It includes elements of what some call ambient computing, ubiquitous computing, and mixed reality, but it is not limited to these.

A widely used definition is needed for the business world to make sense of spatial computing, its value, and how it will impact the future of business, work, education, shopping, leisure, and more. Spatial computing is the next shift in how humans interact with technology. It draws on a range of technologies, from AI and XR to IoT and sensors, to create a more immersive and impactful form of human-computer interaction. It will allow workers to bring their workstations with them with ease. It will replace screens with an infinite canvas. Through AI, spatial computing will usher in a new way of communicating with computers and machines, in which those machines are able to interpret our world.

Today's rudimentary AR, which we experience on our phones, is planting the seeds for tomorrow's spatial computing. The spatial computer will understand the wearer and their physical space, which in turn becomes updatable and interactive in real time. It will allow for more intuitive and natural interactions with our computers and enable our devices to better understand, map, and navigate our physical environment. These devices will see what we see and learn about our world. In some ways, spatial computing lets us interact with the virtual world with the same ease we do the physical world.

Humans are naturally spatial beings who understand and engage with the world volumetrically, so spatial computing promises to return us to the spatial thinking that is so often lost as we age and as we are forced to translate our creativity onto flat surfaces. Its promise is to make us more productive, efficient, and creative, and to facilitate communication with others. Spatial computing can eventually lead to better decisions, whether in business or in other aspects of our lives. It is an evolutionary technological shift away from static devices that must hang on our walls, sit on our desks, or rest in our hands toward devices that fade into the background and allow us to refocus on the physical space around us, albeit augmented.

Spatial computing will make the devices we use and how we use them blend into the daily natural flow and patterns of how we live our lives. It combines software, hardware, data/information, and connectivity.

Spatial computing brings digital information and experiences into the physical environment. It takes into account the position, orientation, and context of the wearer, as well as the objects and surfaces around them. It uses a new, advanced type of computing to understand the physical world in relation to virtual environments and the wearer. It does this through emerging interfaces such as wearable headsets with high-resolution cameras, scanners, microphones, and other built-in sensors. New input methods come in the form of hand gestures and finger movements, gaze tracking, and voice. GPS, Bluetooth, and other sensors make it possible to create digital content with physical context.

Spatial computing uses information about the environment around it to behave in the way that is most intuitive for the person using it. How businesses digitally transform using spatial computing will set them apart from the competition and position them for success with generations who grow up in an increasingly blended virtual and physical world.

Many confuse the term and equate it with AR, VR, mixed reality (MR), or extended reality (XR), but per the definition above, these are not the only technologies that enable spatial computing. AI plays a critical role and is one of the most important underlying technologies that will help bring spatial computing to the masses.

The future of spatial computing is poised for substantial growth, driven by key advancements. These include radical progress in optics, the miniaturization of sensors and chips, and the ability to authentically portray 3D images. These innovations, supported by significant breakthroughs in AI, will make spatial computing increasingly compelling for businesses on a grand scale in the years to come.

Why Should You Pay Attention To Spatial Computing And The Convergence Of Its Enabling Technologies (AI, AR, VR, XR, MR, etc.)?

As explained in the Harvard Business Review, “Spatial computing is an evolving form of computing that blends our physical world and virtual experiences using a wide range of technologies, thus enabling humans to interact and communicate in new ways with each other and with machines, as well as giving machines the capabilities to navigate and understand our physical environment in new ways. From a business perspective… it will expand computing into everything you can see, touch, and know.”

The generative AI craze of 2023 is starting to give way to a race toward AI hardware and wearables, with companies like OpenAI partnering with Jony Ive, and with Meta's CTO, Andrew Bosworth, telling Alex Heath of The Verge in a recent interview that Meta has developed what he believes might be "the most advanced piece of technology on the planet in its domain. In the domain of consumer electronics, it might be the most advanced thing that we've ever produced as a species."

Spatial computing is a term you will see used more and more across tech news and announcements in the months to come, yet it is still in its infancy. It will take several years for spatial computing to reach its full potential and to impact business, and how we engage with each other and with technology, as much as or even more than past phases of computing have. The age of AI hardware, smart glasses, and spatial computing is here, and you can help shape that future today.
