By Jed Ashforth

XR Expectations | A Symphony of Senses

Updated: Mar 24, 2023

XR Expectations Part 2

In the first part of this series looking at the foundations of Immersive Design, we learned why Expectations are one of the major components of any XR experience, and introduced the idea of Expectation Models. An Expectation Model is simply a shorthand way to view the whole package of sensory inputs and responses that we, as users, associate with doing specific activities and interactions in the real world. In essence: when we’re familiar with a thing, we expect certain behaviors from it. Expectations are a crucial consideration for any designer in any industry, and in the user-centric, experiential world of XR their importance is hard to overstate.


Whether the virtual version of an experience successfully meets a user’s expectations can impact not only their degree of immersion, but also their level of comfort. Since both immersion and comfort are primary goals for anyone building an XR experience, understanding and predicting a user’s expectations of any given activity is crucial to building a good user-focused experience. To do that, we need to have confidence in how any given user will react when facing a virtual version of something they know and understand in the real world. But with expectations, every user is different.


Expectation Models are built and developed as a result of any and all exposure to a real-life object, interaction, or event. The more familiar we are with a real-life experience, the stronger and more robust our expectation model becomes, until we reach the point where we are hyper-sensitive to any little thing being amiss with that experience.


A familiar example - how we move around in a virtual world - is one of the most common design issues VR developers have to deal with. In real life, it’s easy to walk through real space to get somewhere we can see. If we walk towards a distant horizon in VR, sooner or later we’re going to walk into a real-world wall, or trip over a real-world table that we can’t see while we’re in VR. So there has always been a need to move through a VR world without moving too far in the real world.


A view of the legs of people walking down a city street
Most users are hardcore walking experts; they practice every day. And the hardcore are the hardest to convince.

The problem is, navigating around and moving our bodies through the world is something that almost all of us are intimately familiar with, having done it every day of our lives since we could first walk. Nature has made us fantastically adept at walking and steering and balancing, and a lifetime of practice has made it effortless. We adjust our balance, our speed, our trajectory as we walk, with nothing more than a thought. For most of us, it’s the most natural thing in the world.


How? Well, it’s possible because our cerebellum is silently working away in the back of our brains, busily processing all the data coming in from our senses. Like a conductor leading an orchestra, our cerebellum makes constant, immediate adjustments so that our motor systems work in symphony to move us in the right direction, and ensures we don’t lose balance and fall over. There are hundreds of variables involved in human locomotion - we use multiple joints in our skeletal system, all of which have different constraints and ranges of movement, plus a large number of nerves and muscles. For our body to walk, and not simply collapse, all of these individual instruments need to coordinate perfectly with each other. Doing this with a bipedal body is hugely complex. Boston Dynamics founder Marc Raibert has been working on teaching robots to walk since the 1980s, and self-learning AI is having a hard time of it too.


But as complex as bipedal locomotion is to achieve and maintain, the orchestration of which muscles we need to use and how we need to shift our balance is mostly invisible to us as we navigate a complex world. We can leave our cerebellum to manage these complex tasks as background processing for the most part. If we hit an unexpected or unfamiliar obstacle, suddenly the background task of navigation becomes a higher priority for us, foregrounding the need to solve how we get past the obstacle.

Boston Dynamics' Atlas robot navigating the environment
Boston Dynamics' Atlas robot has mastered complex environments and has since moved on to become highly skilled at dance and parkour.

At the moment, we can’t simulate all of the rich sensory inputs that we expect when walking if we’re just pushing a stick forward on a VR controller. VR can simulate the visuals we would expect to see, but the accompanying sensory data that our cerebellum expects to arrive in sync with those visuals is simply incomplete if we’re not physically locomoting through real space. When there are gaps in that symphony of sensory input data the cerebellum is relying on, we can immediately run into problems. If an orchestra suddenly loses a single string player, only the most trained ear might spot it. But if it loses the entire string section, the music is going to sound very different. Lose the percussion and the brass as well, and the tune may simply no longer be recognizable.


VR locomotion has this exact problem. If you’ve ever fallen over because you stood up and put your weight on a foot that has ‘gone to sleep’ (known as paresthesia), or veered and staggered about because you confused your cerebellum with alcohol (known as a great night out), then you’ve got first-hand experience of how quickly that finely tuned symphony can fall apart when the incoming sensory data is incomplete or distorted. Our poor cerebellum is doing the best it can to conduct as normal, but it can’t keep the tune together for long when important parts of the symphony start going missing.


This means that any solution for artificial locomotion in VR has a very tough time appearing authentic to users, because not all of those sensory expectations are being met. Your cerebellum is working with a compromised set of data that only partly matches your expectations. Parts of the orchestra are never going to show up. Other parts are playing in an unexpected key.
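To make the mismatch concrete, here's roughly what 'pushing a stick forward' amounts to under the hood. This is a minimal, engine-agnostic sketch in TypeScript of the common smooth-locomotion mapping; the function name, parameters, and coordinate conventions are all assumptions for illustration, not any particular engine's API.

```typescript
// Sketch of "smooth locomotion": a thumbstick deflection becomes a
// head-relative translation applied to the player rig every frame.
// Assumes a y-up world; sign and handedness conventions vary by engine.

interface Vec3 { x: number; y: number; z: number; }

// stickX/stickY: thumbstick deflection in -1..1 (stickY = +1 is "forward"
// for this sketch; real gamepad APIs often invert this).
// headYaw: headset yaw in radians, so "forward" follows the user's gaze.
// speed: movement speed in metres per second. dt: frame time in seconds.
function smoothLocomotionStep(
  stickX: number,
  stickY: number,
  headYaw: number,
  speed: number,
  dt: number,
): Vec3 {
  // Dead zone: ignore tiny deflections so the player doesn't drift.
  if (Math.hypot(stickX, stickY) < 0.15) return { x: 0, y: 0, z: 0 };

  // Rotate the stick vector by the head yaw so pushing forward on the
  // stick always moves the player in the direction they are looking.
  const sin = Math.sin(headYaw);
  const cos = Math.cos(headYaw);
  const worldX = stickX * cos + stickY * sin;
  const worldZ = -stickX * sin + stickY * cos;

  // The visual system gets a smooth translation every frame...
  // ...while the vestibular system reports that the body never moved.
  return { x: worldX * speed * dt, y: 0, z: worldZ * speed * dt };
}
```

Notice what's missing: every input to this function is visual or geometric. Nothing in the mapping can tell the inner ear that the body is accelerating, which is exactly the gap in the symphony described above.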


Considered in this context, it’s not hard to see why emulating human locomotion in VR is so damn tricky to pull off: we’re all real-life experts at walking around, and it’s hard to fool an expert through sounds and visuals alone.


There's a lot to unpack about VR locomotion, its challenges, and the solutions we've seen so far - much more than can be covered here. And the same is true of expectations and how they influence the user experience. It’s not just locomotion: any and every action you take in VR carries some expectations with it, even if it's something you've never tried before. This is something I'll cover in more detail separately, but the key is to remember that everything can be considered to have an expectation package attached to it: a manifest of sensory inputs that your brain ticks off to confirm that you are seeing what you think you are seeing, doing what you think you are doing, and that everything is working as planned. The more familiar that thing is to a particular user, the richer the expectations they attach to it, and the harder it is to fool them with the current capabilities of immersive tech.
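One way to picture that expectation package - purely as a thinking aid, not anything from a real toolkit - is as a manifest data structure: each sensory channel the brain expects to hear from, how heavily familiarity weights it, and whether current XR hardware can actually deliver it. A hypothetical sketch, with made-up channel weights for real-world walking:

```typescript
// Illustrative only: an "expectation package" as a manifest of sensory
// inputs the brain ticks off. All names and weights are hypothetical.

type SensoryChannel =
  | "visual" | "audio" | "vestibular" | "proprioceptive" | "haptic";

interface ExpectedInput {
  channel: SensoryChannel;
  weight: number;     // 0..1: how strongly a familiar user relies on it
  simulated: boolean; // can the experience actually deliver it?
}

// A rough manifest for walking: the visuals are easy to fake, but the
// heavily weighted body senses aren't simulated at all.
const walkingExpectations: ExpectedInput[] = [
  { channel: "visual",         weight: 0.9, simulated: true  },
  { channel: "audio",          weight: 0.5, simulated: true  },
  { channel: "vestibular",     weight: 0.9, simulated: false },
  { channel: "proprioceptive", weight: 0.8, simulated: false },
  { channel: "haptic",         weight: 0.6, simulated: false },
];

// Fraction of the weighted package the simulation satisfies; a low score
// flags a familiar activity that is likely to feel "off" in VR.
function expectationCoverage(manifest: ExpectedInput[]): number {
  const total = manifest.reduce((sum, e) => sum + e.weight, 0);
  const met = manifest.reduce(
    (sum, e) => sum + (e.simulated ? e.weight : 0), 0);
  return met / total;
}

console.log(expectationCoverage(walkingExpectations).toFixed(2)); // "0.38"
```

The exact numbers are invented, but the shape of the model matches the point: the more expert the user, the heavier the weights on the channels we can't yet simulate.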


This is why it's unfair to view VR locomotion as a problem that can simply be solved in software. It’s a by-product of the way our bodies and minds work in symphony. Our physiology is designed this way, and it's so finely tuned to some tasks that it can spot a fake a mile off. The challenge is that the fake information we’re feeding in isn’t complete or robust enough to fool us entirely. It’s natural, and inevitable, that anything we’re really familiar with in real life is going to be hard to fake convincingly in VR.

 
A general guideline for designers is that if a user is familiar enough to do a thing on ‘autopilot’ in real life, it’s going to be hard for VR to pull it off convincingly for that user.
 

This covers all sorts of activities and interactions that we might wish to emulate in VR because we do them every day. The more we do something, the better attuned we become to doing it well and efficiently. Being good at something in real life makes it harder to believe an artificial version. It's another case of the Uncanny Valley effect.


A conductor of an orchestra holding his magical musical wand
Like a conductor leading an orchestra, our cerebellum is highly attuned to all the instruments that need to work in symphony, with efficient, highly trained expertise.

Because of our intimate familiarity with our most common activities, emulating them becomes a thorny challenge for XR designers to wrestle with. The activities and experiences we do the most can be the hardest to make believable. The symphony will not sound complete to us, the experience will be discordant, and the disharmony between our expectations and our actual experience comes at a cost. At best, we might be pulled out of our immersion for a moment, like a bum note from a single musician that stops us, just briefly, from being lost in the music. At worst, this can be a catastrophic collapse of the whole orchestra: a cacophony of wrong and missing notes that so barely resembles our expectations that it hurts our heads to try to pick out the tune that's supposed to be there.


That harmonic discordance between our expectations and our experience is often a strong trigger for discomfort, and can even bring on 'simulator sickness' and other symptoms for some users. That becomes a serious problem for designers in any XR field. Most of us don't want to make our users feel bad, but we also want to enable the most natural and intuitive immersion in the realities we're designing. In many cases, those two goals are at odds with each other. The richness of our expectations can't be fully simulated with virtual or augmented proxies of those activities. We're experts who can't easily be fooled.


This conflict is something we see happening with all sorts of interactions and representations in VR all the time. In the next part of this series we'll run into it again when we look at how and why driving a virtual car can be so hard to get right.
