This is yet another path tracer written in Rust, using nalgebra for mathematical operations, gltf for glTF scene file support, and wgpu-rs for frontend display. It is primarily written for personal education and entertainment, though it is also intended to be reasonably performant and to support several commonly used file formats.

This project borrows heavily from the book *Physically Based Rendering: From Theory to Implementation* by Matt Pharr et al., which can be read online.
- Real-time frontend preview for inspection and camera adjustments
- Remote render preview via the tev tool
- glTF file format support (also supports the `KHR_lights_punctual`, `KHR_materials_ior`, and `KHR_materials_transmission` extensions; `KHR_materials_pbrSpecularGlossiness` support forthcoming)
- Mitsuba file format support (work in progress; support is very ad hoc)
- Supported light types
- Point Light
- Directional Light
- Area Light
- Mesh Emission Map
- Environmental Map
- Supported materials
- Diffuse (Lambertian)
- Metal
- Pure Mirror
- Glass
- Substrate (Plastic in Mitsuba)
- Microfacet model based on Torrance–Sparrow for metal, glass, and substrate materials
- Disney BSDF (limited support)
```shell
cargo build --release
./target/release/pathtracer-rs --help
```
```
pathtracer_rs 1.0
Eric F. <[email protected]>
Rust path tracer

USAGE:
    pathtracer-rs [FLAGS] [OPTIONS] <SCENE> --output <output>

FLAGS:
        --default_lights    Add default lights into the scene
    -h, --help              Prints help information
        --headless          run pathtracer in headless mode
    -V, --version           Prints version information

OPTIONS:
    -c, --camera <camera_controller>    Camera movement type [default: orbit]
    -l, --log_level <log_level>         Application wide log level [default: INFO]
    -d, --max_depth <max_depth>         Maximum ray tracing depth [default: 15]
    -m, --module_log <module_log>       Module names to log, (all for every module) [default: all]
    -o, --output <output>               Sets the output directory to save renders at
    -r, --resolution <resolution>       Resolution of the window
    -s, --samples <samples>             Number of samples path tracer to take per pixel (sampler dependent) [default: 1]
        --server <server>               tev server address and port for remote rendering [default: 127.0.0.1:14158]

ARGS:
    <SCENE>    Sets the input scene to use
```
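As a sketch of a typical invocation based on the options above (the scene path is illustrative, not a file shipped with the project):

```shell
# Render a scene with 64 samples per pixel and a maximum ray depth of 10,
# saving the resulting image into ./renders (hypothetical scene path).
./target/release/pathtracer-rs \
    --samples 64 \
    --max_depth 10 \
    --output ./renders \
    scenes/cornell-box.gltf
```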
There are two camera control modes, first person and orbit, which are selected with the CLI option `-c orbit` or `-c fp`.
In orbit mode, the camera always points at the origin: click and drag the mouse to rotate the camera about the origin, and use the mouse wheel to move the camera closer to or further from the origin.
In first person mode:
- W/A/S/D: Moves the camera forward, left, back, and right
- Z/X: Moves the camera up and down
- Q/E: Adjusts camera roll
- Mouse click and drag rotates the camera pitch and yaw
- R: Renders image according to current camera and sampling settings
- C: Clears current render and returns to real time preview
- CTRL+S: Saves the current rendered image to the directory specified by `--output` with the name `render.png`
- ↑/↓: Increases or decreases sample increment
- CTRL+H: Toggles display of the mesh
- CTRL+G: Toggles display of the wireframe outline
If the `--headless` flag is set, no preview window is created; rendering proceeds automatically and the image is saved to the `--output` directory as `render.png`.
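For example, a headless render might be launched like this (the scene path is illustrative):

```shell
# Render without opening a preview window; when finished, the image is
# written to ./out/render.png (hypothetical scene path).
./target/release/pathtracer-rs --headless --samples 128 --output ./out scenes/example.gltf
```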
Remote render preview is available via the tev tool by Thomas Müller. Note that currently only the latest master of tev is supported, due to its protocol switch to TCP. The `--server` option can be set to point at the running tev instance and its listening port.
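A minimal sketch of the workflow, assuming tev is installed locally and listening on the address shown as the `--server` default above (the scene path is illustrative):

```shell
# In one terminal, start tev so it can receive remote render previews.
tev

# In another terminal, point the path tracer at the tev instance;
# 127.0.0.1:14158 matches the documented --server default.
./target/release/pathtracer-rs --server 127.0.0.1:14158 --output ./out scenes/example.gltf
```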
- Subsurface Scattering
- Volume Rendering
- Path Guiding
- Wavefront MTL support
- SIMD-based intersection routine with batched ray casting
- OptiX-based GPU acceleration structure and intersection routine
- GPU-based integrator