- No Setup Required: Lasertag on Quest 3 and 3S now features continuous scene meshing, eliminating the need for time-consuming room scans.
- Real-Time Environmental Mapping: The game incrementally constructs a 3D volume of the room using the Depth API, enabling accurate laser collisions even against objects that are currently out of view but were previously observed.
- Pushing MR Limits: This approach trades battery life and performance for seamless, constantly updated scene understanding—surpassing Meta’s current static mesh system.
The game Lasertag has delivered a notable step forward in mixed reality development: its developer has implemented continuous scene meshing on the Meta Quest 3 and 3S headsets. This approach eliminates the traditional room-scan setup, a time-consuming calibration process that slows users down before they can play. Instead of relying on a static scan, Lasertag builds a real-time understanding of the physical environment from live depth data.
Meta’s current system requires users to perform an initial scan of their surroundings to generate a 3D scene mesh. That scan is a snapshot in time: it goes stale as soon as objects move or change, so virtual content stops matching the real environment until the user manually re-scans. Lasertag’s new implementation sidesteps the problem by continuously updating its model of the room during gameplay.
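To make the staleness problem concrete, here is a minimal toy model in Python (the 1D grid, cell size, and distances are purely illustrative, not from the game or Meta's SDK): a raycast against a frozen snapshot keeps reporting geometry where it used to be, while a continuously updated map reflects the change.

```python
import numpy as np

# Toy 1D "room" along a single ray: True marks an occupied cell.
# Cells are 5 cm, so index 40 = 2.0 m from the player (illustrative values).
live_room = np.zeros(100, dtype=bool)
live_room[40] = True                  # a chair currently 2.0 m away

static_mesh = live_room.copy()        # one-time scan: a frozen snapshot

live_room[40] = False                 # the chair is moved...
live_room[70] = True                  # ...to 3.5 m away

def first_hit(cells):
    """Distance to the first occupied cell along the ray, or None."""
    hits = np.flatnonzero(cells)
    return hits[0] * 0.05 if hits.size else None

print(first_hit(static_mesh))  # 2.0 -> stale: lasers "hit" empty air
print(first_hit(live_room))    # 3.5 -> what a continuously updated map sees
```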
Lasertag builds on Meta’s Depth API, which generates live depth frames from the disparity between the headset’s cameras. The game originally used these frames for real-time occlusion and for laser collisions against whatever was currently in view. The beta version expands this significantly: depth frames are now accumulated into a 3D volume texture over time, letting the game simulate realistic laser collisions even with objects no longer in view, as long as the headset has previously observed them.
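The article doesn't include Lasertag's code, but the technique it describes (fusing depth frames into a persistent volume, then marching laser rays through it) can be sketched on the CPU. Everything below is an assumption-laden illustration: the grid dimensions, the pinhole intrinsics `fx/fy/cx/cy`, the `cam_to_world` pose input, and both function names are hypothetical, and the real game presumably does this work on the GPU as a volume texture.

```python
import numpy as np

VOXEL_SIZE = 0.05                    # 5 cm voxels (illustrative)
GRID_DIM = 128                       # 128^3 cells = a ~6.4 m cube of room
GRID_ORIGIN = np.array([-3.2, -3.2, -3.2])  # world-space corner of the volume

# Persistent occupancy volume, accumulated across every depth frame seen so far.
occupancy = np.zeros((GRID_DIM, GRID_DIM, GRID_DIM), dtype=bool)

def integrate_depth_frame(depth, fx, fy, cx, cy, cam_to_world):
    """Unproject one depth image (meters) into world space and mark voxels.

    depth: (H, W) float array; fx/fy/cx/cy: pinhole intrinsics;
    cam_to_world: 4x4 camera pose. All of these inputs are assumptions,
    stand-ins for whatever the Depth API actually delivers.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)      # (N, 4)
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]

    idx = np.floor((pts_world - GRID_ORIGIN) / VOXEL_SIZE).astype(int)
    inside = np.all((idx >= 0) & (idx < GRID_DIM), axis=1)
    ix, iy, iz = idx[inside].T
    occupancy[ix, iy, iz] = True     # a surface was observed in this cell

def raymarch_laser(origin, direction, max_dist=10.0):
    """March a laser ray through the volume; return the first occupied point."""
    direction = direction / np.linalg.norm(direction)
    for t in np.arange(0.0, max_dist, VOXEL_SIZE * 0.5):
        p = origin + direction * t
        idx = np.floor((p - GRID_ORIGIN) / VOXEL_SIZE).astype(int)
        if np.any(idx < 0) or np.any(idx >= GRID_DIM):
            continue
        if occupancy[tuple(idx)]:
            return p                 # hit, even if that surface is off-screen now
    return None
```

A production version would also need some policy for forgetting voxels (decay, or re-clearing cells along each camera ray) so that moved objects don't leave stale geometry behind.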
Earlier builds went even further, experimenting with networked heightmapping, in which multiple headsets shared their individually constructed spatial maps. Although the feature isn't in the current build, it hints at shared spatial awareness in future multiplayer mixed reality experiences, a step toward a more collaborative and persistent understanding of physical environments in real time.
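The article doesn't say how the shared maps were combined. One plausible scheme (purely speculative; the data layout, the names, and the newest-observation-wins rule are all invented for this sketch) is for each headset to broadcast its local heightmap along with per-cell timestamps, and for peers to merge incoming maps cell-wise:

```python
import numpy as np

GRID = 64                      # 64x64 cells over the play space (illustrative)
NEVER = -np.inf                # timestamp for cells never observed

def merge_heightmaps(local_h, local_t, remote_h, remote_t):
    """Cell-wise merge of two players' heightmaps, keeping the newer sample.

    *_h: (GRID, GRID) surface heights in meters; *_t: matching timestamps.
    This newest-wins rule is an assumption, not Lasertag's actual protocol.
    """
    take_remote = remote_t > local_t
    merged_h = np.where(take_remote, remote_h, local_h)
    merged_t = np.where(take_remote, remote_t, local_t)
    return merged_h, merged_t

# Example: player B has seen a table (0.7 m) behind player A's back.
a_h = np.zeros((GRID, GRID))
a_t = np.full((GRID, GRID), NEVER)
b_h = np.zeros((GRID, GRID))
b_t = np.full((GRID, GRID), NEVER)
b_h[10, 10], b_t[10, 10] = 0.7, 12.5   # B observed this cell at t = 12.5 s

a_h, a_t = merge_heightmaps(a_h, a_t, b_h, b_t)
print(a_h[10, 10])  # 0.7 -> A now "knows" about geometry it never saw itself
```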
Unlike Apple Vision Pro and Pico 4 Ultra, which achieve continuous meshing with dedicated hardware depth sensors, Quest 3 derives depth computationally from its cameras, which consumes significant CPU, GPU, and battery resources. While Meta plans to eventually make its scene mesh system more adaptive, the initial room setup requirement appears set to remain. Meanwhile, Lasertag offers a compelling example of how developers can push the boundaries of what’s possible on current hardware, even at the cost of performance and battery efficiency.