QUASAR is a remote rendering system that represents scene views using pixel-aligned quads, enabling temporally consistent, bandwidth-adaptive streaming for high-quality, real-time visualization on thin clients.
This repository provides baseline implementations of remote rendering systems designed to support and accelerate research in the field. For a detailed discussion of the design principles, implementation details, and benchmarks, please see our paper:
QUASAR: Quad-based Adaptive Streaming And Rendering
Edward Lu and Anthony Rowe
ACM Transactions on Graphics 44(4) (Proc. SIGGRAPH 2025)
Paper: https://quasar-gfx.github.io/assets/quasar_siggraph_2025.pdf
GitHub: https://github.com/quasar-gfx/QUASAR
To download QUASAR, either download the repository at https://github.com/quasar-gfx/QUASAR as a .zip file, or clone the repository using git:
git clone https://github.com/quasar-gfx/QUASAR
Minimum requirements: on Linux, install the build and media dependencies with
sudo apt install cmake libglew-dev libao-dev libmpg123-dev ffmpeg libavdevice-dev libavcodec-dev libavformat-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev
We recommend using Ubuntu, as some Linux distributions might not have all the required packages available.
Optional: follow the instructions at https://docs.nvidia.com/video-technologies/video-codec-sdk/12.0/ffmpeg-with-nvidia-gpu/index.html to install FFmpeg from source with CUDA hardware acceleration.
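If you build FFmpeg this way, you can sanity-check that the NVIDIA hardware codecs were compiled in (the exact codec list depends on your build):
ffmpeg -hide_banner -encoders | grep nvenc
ffmpeg -hide_banner -decoders | grep cuvid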
macOS devices can run the Scene Viewer and the ATW client only; they are not recommended for running the servers or simulators, and cannot run any other clients.
brew install cmake glew ffmpeg
We also have implementations for scene viewing and streaming clients for Meta Quest VR headsets for testing on mobile GPUs. Please refer to https://github.com/quasar-gfx/QUASAR-client.
The Sponza scene ships with the repository, but additional scenes used in our evaluations (along with some extra scenes) can be downloaded at https://drive.google.com/file/d/1zL_hsmtjyOcAbNbud92aNCxjO1kwEqlK/view?usp=drive_link.
Download and unzip them into assets/models/scenes/ (this directory is gitignored).
mkdir build; cd build
cmake ..; make -j
In the build/ directory, there will be a folder called apps/, which follows the same directory layout as <repo root>/apps/.
To build and link QUASAR as an external library (if you want to use the renderer and streaming system in another project), you can use this CMakeLists.txt template:
cmake_minimum_required(VERSION 3.22)
set(TARGET my_project)
project(${TARGET})
set(CMAKE_CXX_STANDARD 20)
set(QUASAR_DIR ${CMAKE_CURRENT_SOURCE_DIR}/QUASAR) # copy or add QUASAR as a submodule
set(QUASAR_APP_COMMON_DIR ${QUASAR_DIR}/apps/Common)
set(QUASAR_APP_COMMON_SHADERS_DIR ${QUASAR_APP_COMMON_DIR}/shaders)
# QUASAR
set(QUASAR_BUILD_APPS OFF) # disable building apps
add_subdirectory(${QUASAR_DIR})
# QUASAR App Common
add_subdirectory(${QUASAR_APP_COMMON_DIR})
# my source files
file(GLOB_RECURSE SRCS "${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp")
add_executable(${TARGET} ${SRCS})
target_include_directories(${TARGET}
PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/include
${QUASAR_APP_COMMON_DIR}/include
${QUASAR_APP_COMMON_SHADERS_DIR}/include
)
target_link_libraries(${TARGET} PRIVATE quasar quasar_common)
file(CREATE_LINK ${QUASAR_DIR}/assets ${CMAKE_CURRENT_BINARY_DIR}/assets SYMBOLIC) # link assets directory if needed
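A project based on this template is then configured and built the same way as QUASAR itself:
mkdir build; cd build
cmake ..; make -j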
All apps allow you to move through a scene using wasd+qe controls.
The Scene Viewer app loads a scene and lets you fly through it.
# in build directory
cd apps/scene_viewer
./scene_viewer --size 1920x1080 --scene ../assets/scenes/robot_lab.json
The Depth Peeling app loads a scene rendered with depth peeling and lets you fly through it using wasd+qe.
# in build directory
cd apps/depth_peeling
./depth_peeling --size 1920x1080 --scene ../assets/scenes/robot_lab.json
The ATW app warps a previously rendered frame on a plane using a homography.
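For intuition (this is an illustrative sketch, not the code in apps/atw), a rotation-only timewarp can be expressed as a homography built from the camera intrinsics and the relative rotation between the render-time and display-time poses; the names K, R_render, and R_display below are assumptions for illustration:
#include <glm/glm.hpp>

// Homography that rewarps a frame rendered at rotation R_render so it looks
// correct from the newer head rotation R_display (rotation-only timewarp).
glm::mat3 atwHomography(const glm::mat3& K,          // camera intrinsics
                        const glm::mat3& R_render,   // rotation the frame was rendered with
                        const glm::mat3& R_display)  // latest rotation at display time
{
    glm::mat3 R_rel = glm::transpose(R_display) * R_render;  // render -> display
    return K * R_rel * glm::inverse(K);  // maps rendered pixels to display pixels
}
// Apply the result to each homogeneous pixel (u, v, 1) of the frame plane and
// divide by the third component to get the warped pixel position.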
To run the simulator (which simulates streaming over a configurable network):
# in build directory
cd apps/atw/simulator
./atw_simulator --size 1920x1080 --scene ../assets/scenes/robot_lab.json
To run the streamer (which actually streams over a network):
# in build directory
cd apps/atw/streamer
./atw_streamer --size 1920x1080 --scene ../assets/scenes/robot_lab.json --pose-url 0.0.0.0:54321 --video-url 127.0.0.1:12345
In a new terminal, to run the receiver (the streaming client):
# in build directory
cd apps/atw/receiver
./atw_receiver --size 1920x1080 --pose-url 127.0.0.1:54321 --video-url 0.0.0.0:12345
Note: Replace 127.0.0.1 with the IP address of the machine running the streamer if you are running the receiver on a different machine.
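For example, if the streamer runs on a machine at 192.168.1.42 (a placeholder address), the receiver would be launched as:
./atw_receiver --size 1920x1080 --pose-url 192.168.1.42:54321 --video-url 0.0.0.0:12345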
The MeshWarp app warps a previously rendered frame by using a depth map to create a texture-mapped mesh.
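Conceptually (a simplified sketch under assumed depth and NDC conventions, not the apps/meshwarp code), each grid vertex is unprojected using the received depth so the color frame can be rendered as real geometry from a new viewpoint:
#include <vector>
#include <glm/glm.hpp>

// Unproject a regular grid of pixels into world-space vertices using the
// depth map; the color frame is then texture-mapped onto this grid.
std::vector<glm::vec3> depthToVertices(const std::vector<float>& depth,
                                       int width, int height, int step,
                                       const glm::mat4& invViewProj)
{
    std::vector<glm::vec3> vertices;
    for (int y = 0; y < height; y += step) {
        for (int x = 0; x < width; x += step) {
            float d = depth[y * width + x];            // depth in [0, 1]
            glm::vec4 ndc(2.0f * x / width - 1.0f,     // pixel -> NDC
                          2.0f * y / height - 1.0f,
                          2.0f * d - 1.0f, 1.0f);
            glm::vec4 world = invViewProj * ndc;       // unproject
            vertices.push_back(glm::vec3(world) / world.w);
        }
    }
    return vertices;  // adjacent grid vertices are connected into triangles
}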
To run the simulator:
# in build directory
cd apps/meshwarp/simulator
./mw_simulator --size 1920x1080 --scene ../assets/scenes/robot_lab.json
To run the streamer:
# in build directory
cd apps/meshwarp/streamer
./mw_streamer --size 1920x1080 --scene ../assets/scenes/robot_lab.json --pose-url 0.0.0.0:54321 --video-url 127.0.0.1:12345 --depth-url 127.0.0.1:65432
In a new terminal, to run the receiver:
# in build directory
cd apps/meshwarp/receiver
./mw_receiver --size 1920x1080 --pose-url 127.0.0.1:54321 --video-url 0.0.0.0:12345 --depth-url 0.0.0.0:65432
Note: Replace 127.0.0.1 with the IP address of the machine running the streamer if you are running the receiver on a different machine.
The Depth Codec app visualizes the differences between ground-truth depth and a compressed depth map, using a custom depth codec consisting of an 8x8-block BC4-like encoder followed by ZSTD compression. This is the same depth codec we use in MeshWarp.
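As a rough sketch of such a scheme (the real codec's exact bit layout may differ): each 8x8 block stores min/max depth endpoints plus a quantized per-pixel index, and the packed blocks are compressed losslessly with ZSTD. The 4-bit index width below is an assumption for illustration:
#include <algorithm>
#include <cstdint>
#include <vector>
#include <zstd.h>

// One encoded 8x8 depth block: two float endpoints plus 4-bit indices that
// interpolate between them (BC4-like, simplified).
struct DepthBlock {
    float minZ, maxZ;
    uint8_t indices[32];  // 64 pixels x 4 bits
};

DepthBlock encodeBlock(const float z[64]) {
    DepthBlock b = {z[0], z[0], {0}};
    for (int i = 1; i < 64; i++) {
        b.minZ = std::min(b.minZ, z[i]);
        b.maxZ = std::max(b.maxZ, z[i]);
    }
    float range = std::max(b.maxZ - b.minZ, 1e-8f);
    for (int i = 0; i < 64; i++) {
        uint8_t q = (uint8_t)(15.0f * (z[i] - b.minZ) / range + 0.5f);
        b.indices[i / 2] |= (i % 2) ? (q << 4) : q;  // pack two 4-bit indices per byte
    }
    return b;
}

// The packed blocks are then entropy-coded with ZSTD (error checking omitted):
std::vector<char> compressBlocks(const std::vector<DepthBlock>& blocks) {
    size_t srcSize = blocks.size() * sizeof(DepthBlock);
    std::vector<char> out(ZSTD_compressBound(srcSize));
    size_t n = ZSTD_compress(out.data(), out.size(), blocks.data(), srcSize, 3);
    out.resize(n);
    return out;
}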
# in build directory
cd apps/depth_codec
./depth_codec --size 1920x1080 --scene ../assets/scenes/robot_lab.json
The QuadWarp app warps a previously rendered frame by fitting a series of quads from a G-Buffer.
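As a rough illustration of the idea (not the actual shader pipeline), each screen-space tile of the G-Buffer can be approximated by a plane fitted to its surface samples, yielding one quad proxy per tile:
#include <algorithm>
#include <cmath>
#include <glm/glm.hpp>

// A quad proxy: a plane patch approximating one tile of the G-Buffer.
struct QuadProxy {
    glm::vec3 center;  // average of the tile's world-space surface samples
    glm::vec3 normal;  // averaged surface normal of the tile
    float error;       // max point-to-plane distance of the samples
};

QuadProxy fitQuad(const glm::vec3* positions, const glm::vec3* normals, int count) {
    glm::vec3 c(0.0f), n(0.0f);
    for (int i = 0; i < count; i++) { c += positions[i]; n += normals[i]; }
    c /= (float)count;
    n = glm::normalize(n);
    float err = 0.0f;
    for (int i = 0; i < count; i++)
        err = std::max(err, std::abs(glm::dot(positions[i] - c, n)));
    // Tiles whose error is too large would be subdivided in the real system.
    return {c, n, err};
}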
To run the simulator:
# in build directory
cd apps/quadwarp/simulator
./quads_simulator --size 1920x1080 --scene ../assets/scenes/robot_lab.json
You can save a frame to disk by clicking View->Mesh Capture->Save Proxies in the GUI.
To run the receiver (which loads a saved frame from disk):
# in build directory
cd apps/quadwarp/receiver
./quads_receiver --size 1920x1080
The QuadStream app fits a series of quads from multiple G-Buffers rendered from various camera views inside a headbox. The code is a best-effort implementation of QuadStream.
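Here, the headbox is the small volume the viewer's head may occupy during a frame; rendering G-Buffers from several views inside it lets the client reproject without disocclusion holes. A minimal sketch of picking such views for a cubical headbox (purely illustrative):
#include <vector>
#include <glm/glm.hpp>

// Camera positions at the center and 8 corners of a cubical headbox, from
// which G-Buffers are rendered and quads are fitted.
std::vector<glm::vec3> headboxViews(const glm::vec3& center, float halfSize) {
    std::vector<glm::vec3> views{center};
    for (int i = 0; i < 8; i++) {
        glm::vec3 corner((i & 1) ? halfSize : -halfSize,
                         (i & 2) ? halfSize : -halfSize,
                         (i & 4) ? halfSize : -halfSize);
        views.push_back(center + corner);
    }
    return views;
}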
To run the simulator:
# in build directory
cd apps/quadstream/simulator
./qs_simulator --size 1920x1080 --scene ../assets/scenes/robot_lab.json
To run the receiver (which loads a saved frame from disk):
# in build directory
cd apps/quadstream/receiver
./qs_receiver --size 1920x1080
The QUASAR app fits a series of quads from multiple G-Buffers across various depth peeling layers, with fragment discarding determined by a modified version of Effective Depth Peeling (EDP), which lets only potentially visible fragments pass.
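In spirit (a heavily simplified sketch, not the actual EDP shader logic), a fragment from a deeper peel layer is kept only if at least one view inside the viewcell could see it past the layer in front of it; frontLayerDepth here is an assumed query for illustration:
#include <functional>
#include <vector>
#include <glm/glm.hpp>

// Returns true if any candidate view inside the viewcell can see fragPos past
// the layer in front of it (frontLayerDepth returns the front layer's hit
// distance along a ray). Fragments occluded from every view are discarded.
bool potentiallyVisible(
    const glm::vec3& fragPos,
    const std::vector<glm::vec3>& viewcellViews,
    const std::function<float(const glm::vec3&, const glm::vec3&)>& frontLayerDepth)
{
    for (const glm::vec3& v : viewcellViews) {
        glm::vec3 dir = fragPos - v;
        float dist = glm::length(dir);
        if (frontLayerDepth(v, dir / dist) >= dist)  // front layer is farther,
            return true;                             // so this fragment may show
    }
    return false;
}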
To run the simulator:
# in build directory
cd apps/quasar/simulator
./qr_simulator --size 1920x1080 --scene ../assets/scenes/robot_lab.json
To run the receiver (which loads a saved frame from disk):
# in build directory
cd apps/quasar/receiver
./qr_receiver --size 1920x1080
Make sure you have the scenes downloaded and placed in assets/models/scenes/!
It is recommended you run this on a machine with ample resources (many CPU cores, high-end NVIDIA GPU w/ high VRAM). Our evaluation was run on an AMD Ryzen 9 7950X 16-Core Processor with an NVIDIA GeForce RTX 4090 (24 GB of VRAM) running Ubuntu 22.04.
Tested with Python 3.10.16.
conda create -n quasar python=3.10
conda activate quasar
pip3 install -r requirements.txt
To run the evaluation described in the paper, you should run:
python3 run_eval.py 20 10 --pose-prediction # run 20+/-10ms trace (w/ pose prediction)
python3 run_eval.py 50 20 --pose-prediction --pose-smoothing # run 50+/-20ms trace (w/ pose prediction and smoothing)
These will run traces (found in ../assets/paths/) for the Robot Lab, Sun Temple, Viking Village, and San Miguel scenes with 0.25m, 0.5m, and 1.0m viewcell sizes.
WARNING: these scripts will take a while to run and will use a lot of resources on your computer! The resulting videos are stored at very high quality.
Useful flags include:
--short-paths: run shorter traces
--view-sizes <comma-separated list>: select which viewcell sizes to run
--scenes <comma-separated list>: select which scenes to run
Example (this will run the Robot Lab scene with a shorter trace with viewcell sizes of 0.5m and 1.0m):
python3 run_eval.py 20 10 --pose-prediction --short-paths --view-sizes 0.5,1.0 --scenes robot_lab
See run_eval.py for more command-line parameters.
Results will be packed into tarballs in the results/ folder:
results/
└── results_20.0_10.0ms.tar.gz # results with 20+/-10ms of latency
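Each tarball can be extracted in place, e.g.:
tar -xzf results/results_20.0_10.0ms.tar.gz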
Untarring the files will reveal:
results_20.0_10.0ms/
├── errors.json # json file containing FLIP, SSIM, and PSNR errors for each method
└── results/
├── stats/
│ └── robot_lab/
│ ├── atw_simulator.log
│ ├── mw_simulator_120.log
│ ├── mw_simulator_60.log
│ ├── qr_simulator_0.5.log
│ ├── qr_simulator_1.0.log
│ ├── qs_simulator_0.5.log
│ ├── qs_simulator_1.0.log
│ ├── scene_viewer.log
│ └── stats.json # json file containing performance timings and data payload statistics
└── videos/
└── robot_lab/
├── color/ # color videos for ground truth (scene_viewer) and all tested methods
│ ├── atw_simulator.mp4
│ ├── mw_simulator_120.mp4
│ ├── ...
│ ├── qs_simulator_1.0.mp4
│ └── scene_viewer.mp4
└── flip/ # FLIP error map videos for all tested methods
If you find this project helpful for any research-related purposes, please consider citing our paper:
@article{lu2025quasar,
title={QUASAR: Quad-based Adaptive Streaming And Rendering},
author={Lu, Edward and Rowe, Anthony},
journal={ACM Transactions on Graphics (TOG)},
volume={44},
number={4},
year={2025},
publisher={ACM New York, NY, USA},
url={https://doi.org/10.1145/3731213},
doi={10.1145/3731213},
}
We gratefully acknowledge the authors of QuadStream and PVHV for their foundational ideas, which served as valuable inspiration for our work.
This work was supported in part by the NSF under Grant No. CNS1956095, the NSF Graduate Research Fellowship under Grant No. DGE2140739, and Bosch Research.
Special thanks to Ziyue Li and Ruiyang Dai for helping with the implementation!
This webpage is adapted from nvdiffrast. We sincerely appreciate the authors for open-sourcing their code.