QUASAR: Quad-based Adaptive Streaming And Rendering

Edward Lu
Carnegie Mellon University
Anthony Rowe
Carnegie Mellon University

ACM Transactions on Graphics (SIGGRAPH 2025)

Paper (Author's Copy)
Main Page
Docs
Video (Coming Soon)
Code

Quality Comparisons

(Use the selectors to choose the Scene, Latency (Single Trip), MeshWarp FOV, and Viewcell Size.)

Robot Lab (0.47M triangles, 635 objects, moderate depth complexity)

Each method below is shown as a FLIP error map alongside the displayed image:

ATW (~5 Mbps)
MeshWarp, 120° FOV, 4K (~246 Mbps)
QuadStream, 50 cm view box length (~1740 Mbps)
QUASAR, 50 cm view sphere diameter (~156 Mbps)
Traces above were rendered at a resolution of 1920×1080 at 60 FPS (1500 frames collected).
Each scene contains image-based lighting, along with one directional light and four point lights, all of which cast PCF shadows. View-dependent effects are disabled.

(Reload the webpage if videos appear out of sync).
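The FLIP error maps above compare each method's displayed output against a reference render of the same frame. As a rough illustration of how a per-trace score could be aggregated, the sketch below averages per-frame FLIP errors over a 1500-frame trace. It assumes NVIDIA's flip-evaluator Python package (the evaluate() call follows my reading of that package's documentation, not this paper's code), and the frame paths are hypothetical placeholders.

    # Minimal sketch: average FLIP error over one rendered trace.
    # Assumption: NVIDIA's flip-evaluator package (pip install flip-evaluator).
    import flip_evaluator as flip

    NUM_FRAMES = 1500  # frames collected per trace (60 FPS, 1920x1080)

    def mean_trace_flip(ref_dir, test_dir):
        """Average the per-frame mean FLIP error over one trace."""
        total = 0.0
        for i in range(NUM_FRAMES):
            ref = f"{ref_dir}/frame_{i:04d}.png"    # hypothetical path: reference render
            test = f"{test_dir}/frame_{i:04d}.png"  # hypothetical path: displayed image
            # evaluate() is assumed to return (error map, mean error, parameters)
            _, mean_err, _ = flip.evaluate(ref, test, "LDR")
            total += mean_err
        return total / NUM_FRAMES

    print(mean_trace_flip("traces/robot_lab/reference", "traces/robot_lab/quasar"))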

Data Rates

Average data rates of all methods for each trace in our evaluations. Results for QuadStream and our technique are shown for three different viewcell sizes to highlight the effect of viewcell size on data rate. Reported values are averaged across both tested latencies.
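The per-method bit rates quoted above follow from simple arithmetic on the compressed payload sent for each frame. The sketch below shows that calculation under the evaluation setup described earlier (60 FPS, 1500-frame traces); the function name and the example payload size are illustrative, not taken from the paper's code.

    def average_data_rate_mbps(frame_bytes, fps=60.0):
        """Total bits sent divided by trace duration, in megabits per second."""
        duration_s = len(frame_bytes) / fps
        total_bits = 8 * sum(frame_bytes)
        return total_bits / duration_s / 1e6

    # Example: ~325 KB of payload per frame at 60 FPS is roughly 156 Mbps.
    print(average_data_rate_mbps([325_000] * 1500))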


Choosing the Best Operating Point

Effect of quad merging on visual quality and data rate for the Robot Lab scene (50 cm viewcell). Trials vary the quad merging thresholds, which affect geometric resolution and compression efficiency. Parameter values between adjacent points differ by a factor of 2. The chosen operating point (𝛿_sim = 0.5, 𝛿_flatten = 0.2) is highlighted.
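The sweep behind this plot can be read as a small grid search: both merging thresholds are doubled or halved around the chosen operating point, and each setting is scored by its data rate and visual error. The sketch below illustrates that loop; the evaluate callable is a hypothetical stand-in for running the full encode-and-render pipeline.

    def factor_of_two_grid(center, steps=2):
        """Values spaced by factors of 2 around a center value."""
        return [center * 2.0 ** k for k in range(-steps, steps + 1)]

    def sweep_operating_points(evaluate, sim_center=0.5, flatten_center=0.2):
        """Score every (delta_sim, delta_flatten) pair on a factor-of-2 grid."""
        results = []
        for delta_sim in factor_of_two_grid(sim_center):
            for delta_flatten in factor_of_two_grid(flatten_center):
                mbps, flip_error = evaluate(delta_sim, delta_flatten)
                results.append((delta_sim, delta_flatten, mbps, flip_error))
        return results

    # Usage with a dummy evaluator; a real run would encode the Robot Lab viewcell
    # at each setting and measure its data rate and FLIP error.
    points = sweep_operating_points(lambda s, f: (100.0 / (s * f), s + f))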


Citation

@article{lu2025quasar,
    title={QUASAR: Quad-based Adaptive Streaming And Rendering},
    author={Lu, Edward and Rowe, Anthony},
    journal={ACM Transactions on Graphics (TOG)},
    volume={44},
    number={4},
    year={2025},
    publisher={ACM New York, NY, USA},
    url={https://doi.org/10.1145/3731213},
    doi={10.1145/3731213},
}