Best Practices for Developing Mixed Reality Data Visualizations
In 2025, the landscape of extended reality app development will continue to evolve at a rapid pace, fueled by adoption unlocked by AI. Our mixed reality platform, Aura, transforms flat rows of data into an immersive, 3D picture. Across commercial and government use cases, we’ve learned key principles for keeping the data useful while keeping our platform nimble enough to deliver high-value immersive experiences across flexible data sources. Below are the best practices we follow to build a platform that balances complexity with performance, optimizing for both hardware and software requirements.
1. Don’t make the headset do all the work. The computer and the cloud are your friends: offload computation and processing to them whenever possible.
Rendering high-quality visuals and processing large datasets in real time are computationally intensive tasks. Offloading them to cloud computing or dedicated server infrastructure reduces the processing burden on local devices and improves both overall performance and the end-user experience.
Why we do this:
XR devices have limited hardware for computationally intensive tasks. There’s a trade-off between mobility and computational power when it comes to headset selection. Choosing to offload processing means we can offer our apps on a wider range of headsets.
Latency issues are one of the quickest ways to get a user to exit your app. Lag is annoying on any device, but it’s extra frustrating when the device is strapped to your head. The pursuit of performance has to be ruthless for any XR developer.
It removes data-processing bottlenecks. Design the application architecture so that computationally expensive operations (e.g., statistical analysis, data transformations) are handled server-side, while the client device is responsible for rendering and interaction. Running intensive tasks on more capable hardware avoids the bottlenecks that arise when everything runs on the local machine, and it means expensive data-handling operations are performed a single time for all connected clients, further improving the overall efficiency of the application.
It protects headset battery life. Offloading computationally intense tasks to the cloud or a dedicated local server reduces the energy needed to run the application, extending the device’s battery life and the amount of time users can spend in your application.
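The "perform data handling once for all clients" idea can be sketched as a server-side cache. This is a hypothetical illustration, not Aura's actual API: the dataset, the `compute_statistics` helper, and the call counter are all invented for the example.

```python
import json
import statistics
from functools import lru_cache

# Stand-in for a real data source the hub would process.
DATASETS = {"telemetry": [9.8, 10.1, 10.0, 9.9]}
CALLS = {"count": 0}  # tracks how often the expensive path actually runs


def compute_statistics(dataset_id: str) -> dict:
    """Placeholder for expensive server-side analytics."""
    data = DATASETS[dataset_id]
    return {"mean": statistics.mean(data), "stdev": statistics.pstdev(data)}


@lru_cache(maxsize=None)
def processed(dataset_id: str) -> str:
    """Run the heavy computation once; every client gets the cached JSON."""
    CALLS["count"] += 1
    return json.dumps(compute_statistics(dataset_id))


# Two headsets request the same dataset; the computation runs only once.
a = processed("telemetry")
b = processed("telemetry")
assert a == b and CALLS["count"] == 1
```

The same pattern scales to any number of connected devices: the hub pays the computation cost once and serves the serialized result to each client.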
How we do this:
With Aura, we’ve developed a hybrid approach where data computation and processing are offloaded to cloud-based solutions or remote servers, leaving the MR headset or local device to focus on rendering and user interactions. This allows us to deliver fast, smooth in-headset experiences across a wide spectrum of hardware without limiting users by the amount of data they can visualize in a session.
This means our software reserves the GPU (Graphics Processing Unit) for rendering and frees it from CPU (Central Processing Unit)-intensive tasks. We also utilize cloud computing platforms and services (e.g., AWS, Azure), which leverage parallel processing power to handle large datasets, perform machine learning analytics, and process computationally expensive operations quickly. Offloading also keeps the user experience smooth by minimizing lag and preventing device overheating, a common issue when GPUs and CPUs are tasked with both rendering and computation.
Once the processing is complete, RESTful APIs and WebSockets handle the data transfer from cloud or on-prem servers to the users in headsets.
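As a rough, stdlib-only illustration of that handoff, the sketch below frames a processed result as length-prefixed JSON, the same framing idea a binary WebSocket or raw TCP channel relies on. The message shape is invented for the example; a real deployment would use a WebSocket library and TLS.

```python
import json
import struct


def pack_frame(payload: dict) -> bytes:
    """Serialize a processed result and prefix it with a 4-byte
    big-endian length header, so the receiver knows where it ends."""
    body = json.dumps(payload).encode("utf-8")
    return struct.pack("!I", len(body)) + body


def unpack_frame(frame: bytes) -> dict:
    """Client side: read the length header, then decode exactly that many bytes."""
    (length,) = struct.unpack("!I", frame[:4])
    body = frame[4 : 4 + length]
    return json.loads(body.decode("utf-8"))


msg = {"dataset": "weather", "points": [[0, 1.2], [1, 1.4]]}
frame = pack_frame(msg)
assert unpack_frame(frame) == msg
```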
2. Ensure Cross-Device Compatibility. Hardware Agnostic or Bust.
Imagine Word not running on a Mac. Word was out for two years before it was compatible with Mac OS, which is hard to imagine today, but all you have to do is look to XR. Today a lot of XR software is compatible with only one specific hardware platform; not every XR app can run on every headset. We believe our app’s capabilities are a foundational software requirement for the utility of XR devices and need to be available on the broadest possible set of hardware. This means that whether your MR visualization is rendered on a GPU-heavy device or a more general-purpose CPU-powered device, it should still perform smoothly and deliver a high-quality experience.
Why we do this:
User adoption. We don’t know who will win the hardware wars in enterprise XR. Avoiding vendor lock-in means we can meet our customers where they already are.
How we do this:
Our software leverages the Unity engine, combined with Vulkan and DirectX for high-performance graphics rendering, while ensuring it scales down to lower-powered devices by using OpenGL and Metal for compatibility on mobile devices. We rely on OpenXR and Unity’s API to interpret data from the XR device hardware in a genericized manner that also allows us to interact with platform-specific features as necessary. Additionally, by utilizing Unity’s built-in prefabs and our proprietary Asset Manager system, we create both genericized and platform-specific versions of the objects and materials to be rendered on the end user’s XR device, stripping the unneeded prefabs from each build. This allows the Hub to send a genericized prefab-instantiation message that the local device uses to render the referenced 3D object, while keeping the application binaries minimal since each contains only the definitions for its platform-specific prefabs.
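The generic-message idea can be sketched as follows. The field names, prefab IDs, and registry paths below are illustrative stand-ins, not Aura's actual wire format: the Hub describes *what* to instantiate in platform-neutral terms, and each client build maps that to whichever platform-specific asset it ships.

```python
import json


def instantiate_message(prefab_id: str, position, rotation) -> str:
    """Hub side: a platform-agnostic instantiation request."""
    return json.dumps({
        "type": "instantiate",
        "prefab": prefab_id,
        "position": list(position),
        "rotation": list(rotation),
    })


# Client side: each build ships only its own platform's prefab variants.
PREFAB_REGISTRY = {
    "quest": {"globe": "Prefabs/Quest/Globe"},
    "hololens": {"globe": "Prefabs/HoloLens/Globe"},
}


def resolve(message: str, platform: str) -> str:
    """Map the generic prefab ID to this platform's concrete asset path."""
    msg = json.loads(message)
    return PREFAB_REGISTRY[platform][msg["prefab"]]


wire = instantiate_message("globe", (0, 1.5, 2), (0, 90, 0))
assert resolve(wire, "quest") == "Prefabs/Quest/Globe"
assert resolve(wire, "hololens") == "Prefabs/HoloLens/Globe"
```

The same wire message renders correctly on every platform; only the registry contents differ per build, which is what keeps the binaries small.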
For higher-performance tasks (e.g., physics simulations, spatial mapping, or machine learning), we use libraries like TensorFlow.js for web-based applications and CUDA for GPU-accelerated processing in native applications. Aura Hub uses WebSockets and direct TCP/IP connections to deliver data to the Aura XR users, generalized so that the hub doesn’t care what kind of device is listening. Furthermore, messages broadcast from the XR users are first sent to the Hub, which then propagates them out to the other users, rather than XR devices sending messages to each other directly. This further enables cross-platform capabilities by ensuring that updates to session content can be sent and received regardless of the transmission abilities of the connected device.
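That hub-and-spoke relay can be sketched in a few lines. This is a minimal in-memory illustration of the star topology, assuming invented names (`Hub`, `connect`, `broadcast`); real transport, ordering, and reconnection handling are omitted.

```python
class Hub:
    """Star-topology relay: clients never talk to each other directly."""

    def __init__(self):
        self.clients = {}  # client_id -> inbox (list of received messages)

    def connect(self, client_id: str):
        self.clients[client_id] = []

    def broadcast(self, sender_id: str, message: dict):
        # Propagate to every connected client except the sender.
        for cid, inbox in self.clients.items():
            if cid != sender_id:
                inbox.append(message)


hub = Hub()
for cid in ("pilot", "instructor", "analyst"):
    hub.connect(cid)

hub.broadcast("pilot", {"event": "marker_placed", "at": [1.0, 0.5, 2.0]})
assert hub.clients["pilot"] == []           # the sender gets nothing back
assert len(hub.clients["instructor"]) == 1  # everyone else receives it
```

Because only the hub needs to understand every device's transport, a new headset platform joins the session by speaking to one endpoint rather than to every peer.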
3. Focus on Security and Privacy. Be your own garbage collector or risk getting hacked.
C-family languages have been critiqued as potentially less secure than other programming languages. Unity, the engine behind many games and immersive applications (including ours), uses C#, a memory-safe language, as a wrapper that compiles down to C++. Unreal Engine also uses C++. Does this mean that all extended reality applications are inherently insecure? The answer is: it depends. Using a memory-unsafe language does not automatically make your code memory-unsafe. MR experiences are often built using a combination of server-side data storage, cloud services, and real-time data transmission, any of which can introduce other potential security risks.
Why we do this:
Customer expectation. Securing the data the application serves up to the users is table stakes, simple as that.
How we do this:
Protect the data within the application by limiting the number of times you access the data source. For XR developers, many of the content file types users want to engage with are also viable attack vectors (malicious code can hide in a 3D model, for example). Accessing the data source once at application launch, and again only when the user triggers a need to bring in more data or refresh it, is preferable to constantly accessing or streaming the data.
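The access-once-then-cache pattern can be sketched as a small gateway. This is a hypothetical illustration (the `DataGateway` class and its method names are invented): the external source is touched at launch and then only on an explicit user-triggered refresh.

```python
class DataGateway:
    """Hit the external data source once at launch; afterwards serve the
    cached copy unless the user explicitly requests a refresh."""

    def __init__(self, fetch):
        self._fetch = fetch          # callable that touches the real source
        self.source_accesses = 0     # how often we actually went out
        self._cache = None

    def load(self, refresh: bool = False):
        if self._cache is None or refresh:
            self.source_accesses += 1
            self._cache = self._fetch()
        return self._cache


gw = DataGateway(lambda: {"rows": [1, 2, 3]})
gw.load()              # launch: one real access
gw.load()              # subsequent reads hit the cache only
assert gw.source_accesses == 1
gw.load(refresh=True)  # user-triggered refresh is the only other access
assert gw.source_accesses == 2
```

Fewer round trips to the source means fewer opportunities to ingest a tampered file mid-session, on top of the performance benefit.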
We recommend XR developers adopt a burn policy in their applications, leaving no trace or data fragments behind. In other words: be your application’s own garbage collector, and don’t assume automation will do this for you. At the end of an Aura session, all data is wiped; if the application were compromised, there would be no customer data left to exploit.
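A minimal sketch of the burn policy, assuming an invented `Session` class: session data lives only in memory and is cleared explicitly on exit rather than left for the runtime's garbage collector. (A real implementation would also overwrite buffers and scrub any disk caches; clearing Python references alone does not zero memory.)

```python
class Session:
    """Hold session data in memory and wipe it on exit ('burn policy')."""

    def __init__(self):
        self.datasets = {}

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Don't rely on automatic collection: drop every reference
        # explicitly so no customer data outlives the session.
        for key in list(self.datasets):
            self.datasets[key] = None
            del self.datasets[key]
        return False


session = Session()
with session:
    session.datasets["flight_42"] = [(0.0, 120.5), (1.0, 121.0)]
assert session.datasets == {}  # nothing left to exploit after the session
```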
Just like with web and mobile, developers should use end-to-end encryption for data exchange in XR. Sensitive data should be encrypted both at rest and in transit using algorithms such as AES (Advanced Encryption Standard) or RSA. In Aura, these strategies are applied when data is shared between the hub (where the processing happens) and the Aura XR users (in headsets).
For local device security, we use Trusted Execution Environments (TEEs) and hardware-backed security features (like TPMs) to ensure that critical computations and sensitive data storage are isolated from the rest of the system. This minimizes the risk of data leakage during runtime.
Looking Forward
Building spatial data visualizations isn’t just about creating visually impressive content; it’s about ensuring that complex datasets are presented in an intuitive, scalable, and secure manner.
To address all three of the problems above, Dauntless XR developed a patent-pending methodology to power Aura that centralizes data processing while maintaining a seamless user experience for multiple individuals in headsets.
From multi-aircraft mission debriefs for pilot training to space weather tracking and forecasting, the Dauntless team loves seeing how Aura cuts down on decision-making time and brings data to life in a new way. We look forward to seeing how other industries use the XR medium for data analysis. We encourage developers to apply these practices and share your experiences or challenges building for XR with complex data in the comments below.