7 min read · By Muhammad Hazimi Bin Yusri

User Testing Reality Check: When Your VR App Meets Real People

I thought my VR audio-visual app was pretty good until I put it in front of actual users. Here's what 6 participants taught me about the difference between 'technically correct' and 'actually usable'.

#UserTesting #VR #UX #Audio #Feedback #RealityCheck

You know that moment when you've been working on something for months, you think it's pretty solid, and then you hand it to someone else and they immediately break it or find problems you never even considered?

Welcome to my user testing experience with the VR audio-visual scene reproduction app.

I tested with 6 participants - half of my project's total user evaluation - and boy, did they teach me some things I wasn't expecting.

The Setup: What I Thought I Was Testing

The app lets users explore recreated 3D environments with spatial audio. You can:

  • Navigate through different premade scenes (meeting rooms, studios, etc.)
  • Move sound sources around in real-time
  • Experience spatial audio that changes based on your position
  • Switch between VR and desktop modes

Technically, everything worked. The spatial audio was mathematically correct, the scene rendering was smooth, the interaction systems responded properly. I was pretty proud of it.

Then I actually watched people use it.

What Users Actually Experienced

The "Realistic Audio" Problem

Me thinking: "The spatial audio occlusion is working perfectly! When you're behind an object, the sound gets muffled just like in real life."

User feedback: "The sound would sometimes be very low compared to my expectations when I was behind some objects."

This was a huge eye-opener. The audio was behaving exactly as designed - sound gets blocked by objects, just like physics says it should. But users found this confusing and frustrating.

They expected to still hear things clearly even when "behind" virtual objects, because in a VR environment, they're still trying to experience the space, not have parts of it become inaccessible.

Lesson learned: "Physically accurate" and "user-friendly" are not the same thing.
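One way to reconcile the two (and this is a sketch of the idea, not the app's actual implementation) is to make occlusion strength a user-facing dial: blend between the physically computed attenuation and full audibility. The function name and parameters here are illustrative.

```python
def occluded_gain(base_gain: float, occlusion_db: float, realism: float = 1.0) -> float:
    """Blend between physically accurate occlusion and full audibility.

    base_gain:    linear gain before occlusion (0.0 to 1.0)
    occlusion_db: attenuation the occluder would apply (positive dB)
    realism:      0.0 = ignore occlusion entirely, 1.0 = fully physical
    """
    effective_db = occlusion_db * realism
    # Convert the scaled dB attenuation back to a linear gain multiplier.
    return base_gain * 10 ** (-effective_db / 20)
```

With `realism=1.0` a 20 dB occluder cuts the gain to a tenth; with `realism=0.0` the same wall has no audible effect, matching what the testers expected.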

The "One Speaker" Limitation

Another participant suggested: "It would also be nice if there could be two speakers for example in the music hall."

This seemed like such an obvious feature in hindsight, but I'd been so focused on perfecting the spatial audio for a single source that I never considered the user might want multiple sound sources in the same scene.

It's a great example of how developer tunnel vision works - you solve the technical problem you set out to solve, but miss obvious UX improvements.

The Doppler Effect Discovery

One thing that surprised me in a good way: several participants noticed the Doppler effect when moving sound sources around quickly. I'd implemented this as part of the real-time audio system, but I wasn't sure if people would actually notice or care.

Turns out, they did notice, and they thought it was really cool! It added to the sense that the virtual environment was behaving like a real physical space.

Lesson learned: Sometimes the technical details you're unsure about are exactly what makes the experience feel polished.
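The effect itself is cheap to model. A minimal sketch of the pitch shift for a moving source and a stationary listener (assuming the textbook formula, not the app's exact code):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20°C

def doppler_pitch_factor(source_speed_toward_listener: float) -> float:
    """Pitch multiplier heard by a stationary listener.

    Positive speed = source approaching (pitch rises);
    negative speed = source receding (pitch falls).
    """
    return SPEED_OF_SOUND / (SPEED_OF_SOUND - source_speed_toward_listener)
```

A source approaching at 34.3 m/s (a tenth of the speed of sound) raises the pitch by about 11%, which is well within what listeners notice when they fling a sound source across the room.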

The VR vs Desktop Mode Divide

I built the app to work both in VR and on desktop, thinking this would make it more accessible. What I found was that users had very different expectations for each mode:

VR Mode Expectations:

  • More immersive, "I'm really in this space"
  • Natural hand interactions
  • Comfortable navigation (teleportation vs smooth movement)
  • Clear visual feedback for interactions

Desktop Mode Expectations:

  • More like a traditional 3D application
  • Mouse and keyboard controls should feel familiar
  • Faster navigation through spaces
  • More detailed information displays

The spectator window feature (F2 key) was actually a hit - people liked being able to see what was happening from different angles, especially when demonstrating to others.

The Learning Curve Observations

Watching people learn to use the app was fascinating:

VR Novices (2 participants):

  • Needed more time to get comfortable with basic VR navigation
  • Preferred teleportation over smooth locomotion
  • Found the hand controllers confusing initially
  • But once they got it, they were really engaged

VR Experienced (4 participants):

  • Jumped right into the interactions
  • Immediately started experimenting with sound positioning
  • Gave more technical feedback about performance and features
  • Had higher expectations for polish

This reinforced something I'd read about VR design: you need to accommodate both skill levels, not just aim for the middle.

The 20-Minute Sweet Spot

Each session was about 20 minutes, which turned out to be perfect. Long enough to:

  • Get comfortable with the basic controls
  • Try multiple scenes
  • Experiment with different features
  • Form real opinions about what worked and what didn't

But not so long that VR fatigue became a factor. Several participants mentioned they could have kept going, which is a good sign for engagement.

Unexpected Positive Feedback

Some things worked better than I expected:

Lighting Variations: Multiple people commented on how different scenes had distinct "feels" due to mesh and texture colors. I'd put effort into making each environment visually distinct, but wasn't sure if it would matter much.

Scene Transitions: The loading system worked smoothly enough that people didn't lose immersion when switching between environments.

Overall Experience: Most feedback was along the lines of "Overall pretty good," which for a research prototype felt like a win.

What I Would Change Based on Testing

Audio Improvements

  1. Adjustable occlusion levels - Let users control how much objects block sound
  2. Multiple sound sources - Support for several audio streams in one scene
  3. Audio visualization - Some way to "see" where sounds are coming from
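Supporting several sources is mostly a matter of computing a per-source gain and summing the contributions. A rough sketch of the idea, with an inverse-distance falloff and a hypothetical `mix_sources` helper (not the app's real API):

```python
import math

def mix_sources(listener_pos, sources):
    """Sum distance-attenuated gains from several sound sources.

    listener_pos: (x, y, z) tuple for the listener
    sources:      list of (position, base_gain) tuples
    Uses simple inverse-distance falloff, clamped near the source.
    """
    total = 0.0
    for pos, gain in sources:
        d = math.dist(listener_pos, pos)
        total += gain / max(d, 1.0)  # clamp so a source at the ear doesn't blow up
    return total
```

In a real engine each source would be panned and filtered per-sample rather than reduced to a single gain, but the per-source loop is the structural change the "two speakers in the music hall" request implies.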

Interaction Improvements

  1. Better onboarding - Clearer tutorial for VR novices
  2. More feedback - Visual cues when interacting with objects
  3. Comfort options - More locomotion choices for different preferences

Scene Improvements

  1. More diverse environments - Different types of spaces beyond meeting rooms
  2. Interactive objects - Things to manipulate beyond just sound sources
  3. Information overlays - Context about what you're experiencing

The Technical vs. User Experience Gap

This testing really drove home the difference between technical success and user experience success:

Technical Success:

  • ✅ Spatial audio working correctly
  • ✅ Cross-platform compatibility
  • ✅ Stable performance
  • ✅ Real-time interaction

User Experience Success:

  • ⚠️ Audio behavior matching user expectations
  • ⚠️ Intuitive controls for different skill levels
  • ⚠️ Feature completeness (multiple sound sources)
  • ⚠️ Clear feedback and guidance

The "Good Enough" Trap

One pattern I noticed: areas where I thought "this is good enough for a prototype" were exactly the areas users wanted to see improved. The rough edges that I was planning to "polish later" turned out to be the things that most affected user experience.

Examples:

  • Menu navigation could have been smoother
  • Audio controls could have been more intuitive
  • Visual feedback could have been clearer

Users don't experience your app as a "prototype" - they experience it as an app. Their feedback reflects what they need for it to feel complete, not what you think is acceptable for a research project.

Key Takeaways for VR Development

  1. Test early and often - User behavior is impossible to predict from developer assumptions

  2. "Realistic" isn't always "usable" - Sometimes you need to break physical laws for better UX

  3. Different user backgrounds need different approaches - VR novices and experts have very different needs

  4. The details users notice might surprise you - Doppler effects matter, but occlusion frustrates

  5. 20 minutes is a good testing session length - Long enough for real feedback, short enough to avoid fatigue

  6. Cross-platform means different expectations - VR and desktop users want different things

Moving Forward

The user testing phase completely changed my priorities for future development. Instead of focusing on more advanced technical features, I'm now thinking about:

  • Multiple audio source support
  • Adjustable realism settings
  • Better onboarding experiences
  • More diverse content

It's humbling to realize how much you don't know about your own project until you watch other people use it. But that's exactly why user testing is so valuable - it bridges the gap between what you built and what people actually need.


Have you done user testing for VR or other interactive projects? I'd love to hear about your experiences and what surprised you - reach out through the contact page!
