View Camera Scene Post-Processing Shader: Sampling RGB-D Image Camera(s)

Problem

In our multi-camera setups with known poses and RGB-D images, we want to accurately sample pixels from an Image Camera into each pixel of the View Camera via a fragment shader. The current approach of treating pixels as discrete squares leads to aliasing and perspective-distortion artifacts, especially with nearby cameras and strong parallax.

Solution

Implement a post-process fragment shader in CesiumJS that reprojects each View Camera pixel into the Image Camera and samples its RGB-D texture.

Core Concept

1. World Position Reconstruction

Each pixel in the View Camera is treated as a directional ray on a panosphere, defined by azimuth (ϕ), elevation (θ), and a Gaussian ellipse sample footprint (covariance matrix). The depth texture is used to reconstruct the world position of each pixel as follows:

P_world = M_viewProj^-1 * P_clip
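A minimal CPU-side sketch of this reconstruction in plain JavaScript, useful for unit-testing the math before porting it to GLSL (the matrix helper and function names are illustrative, not CesiumJS API; in the shader, the inverse matrix would come from a built-in such as czm_inverseViewProjection):

```javascript
// Sketch: reconstruct a pixel's world position from its screen UV and depth.
// Matrices are plain row-major 4x4 arrays.

function mulMat4Vec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let r = 0; r < 4; r++) {
    for (let c = 0; c < 4; c++) out[r] += m[r][c] * v[c];
  }
  return out;
}

// uv in [0,1]^2, depth in [0,1] (as read from the depth texture),
// invViewProj = (M_proj * M_view)^-1 of the View Camera.
function reconstructWorldPosition(uv, depth, invViewProj) {
  // P_clip: map UV and depth from [0,1] to NDC [-1,1].
  const clip = [uv[0] * 2 - 1, uv[1] * 2 - 1, depth * 2 - 1, 1];
  const p = mulMat4Vec4(invViewProj, clip);
  // Perspective divide yields the Cartesian world position.
  return [p[0] / p[3], p[1] / p[3], p[2] / p[3]];
}
```

With an identity view-projection matrix, the screen center at depth 0.5 maps back to the origin, which makes a convenient sanity check.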

2. Reprojection to Image Camera

The world position is then projected into the Image Camera's screen space using its known view and projection matrices:

P_image = M_proj_image * M_view_image * P_world

The projected coordinates are then converted to texture space (UV) for sampling:

UV_image = (P_image.x / P_image.w * 0.5 + 0.5, P_image.y / P_image.w * 0.5 + 0.5)
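The projection and UV conversion above can be sketched in plain JavaScript for testing (names are ours, not CesiumJS API; viewProjImage stands for the combined M_proj_image * M_view_image):

```javascript
// Sketch of step 2: reproject a world position into the Image Camera's
// screen space and convert to texture UV. Plain row-major 4x4 matrices.

function mulMat4Vec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let r = 0; r < 4; r++) {
    for (let c = 0; c < 4; c++) out[r] += m[r][c] * v[c];
  }
  return out;
}

// viewProjImage = M_proj_image * M_view_image.
// Returns null for points behind the Image Camera.
function worldToImageUV(worldPos, viewProjImage) {
  const p = mulMat4Vec4(viewProjImage, [worldPos[0], worldPos[1], worldPos[2], 1]);
  if (p[3] <= 0) return null; // behind the Image Camera plane
  // Perspective divide, then remap from [-1,1] NDC to [0,1] UV.
  return [p[0] / p[3] * 0.5 + 0.5, p[1] / p[3] * 0.5 + 0.5];
}
```

Points projecting outside [0,1]^2 fall outside the Image Camera's frustum and should be discarded or given a fallback color.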

3. Image Camera Sampling

Using the calculated UV coordinates, the corresponding color is sampled from the Image Camera's RGB-Depth texture:

Color_sampled = texture2D(ImageColorTexture, UV_image)

Optional depth verification compares the reprojected point's depth against the depth stored in the Image Camera's depth texture at UV_image, to confirm correct correspondence and handle occlusions.
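One way to sketch that depth check (the function name and epsilon default are our assumptions, to be tuned per scene):

```javascript
// Sketch: depth-based occlusion test for step 3. reprojectedDepth is the
// depth of P_world as seen by the Image Camera; sampledDepth is the value
// stored in its depth texture at UV_image. epsilon absorbs depth-buffer
// quantization and reprojection error (the default is an assumption).
function isVisibleToImageCamera(reprojectedDepth, sampledDepth, epsilon = 1e-3) {
  // If the stored surface is meaningfully closer than the reprojected point,
  // something occludes it and the sampled color should be rejected.
  return reprojectedDepth <= sampledDepth + epsilon;
}
```

Rejected samples can fall back to the View Camera's own scene color or to another Image Camera in the multi-camera setup.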

Rabbit Hole: Advanced Considerations

Risks

Why Now?

This implementation is crucial for accurate multi-view compositing and rendering in our current projects using CesiumJS, especially for scenarios involving nearby cameras with RGB-Depth information.

No-Gos

Next Steps

  1. Implement as a PostProcessStage in CesiumJS using GLSL for reprojection calculations.
  2. Test with various camera poses, fields of view, and depth configurations.
  3. Profile performance and optimize sampling density and texture lookups.
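A minimal sketch of step 1, wiring the reprojection up as a CesiumJS PostProcessStage. The GLSL skeleton omits depth verification and frustum checks; u_imageColor and u_imageViewProj are uniform names we define ourselves, while colorTexture, depthTexture, v_textureCoordinates, and czm_inverseViewProjection are CesiumJS post-process built-ins. The imageCamera object in the commented wiring is a placeholder for however the Image Camera's texture and matrices are exposed in the app:

```javascript
// Fragment shader skeleton for the reprojection post-process stage.
const fragmentShaderSource = `
uniform sampler2D colorTexture;   // scene color (Cesium-provided; unused here)
uniform sampler2D depthTexture;   // View Camera depth (Cesium-provided)
uniform sampler2D u_imageColor;   // Image Camera RGB (ours)
uniform mat4 u_imageViewProj;     // M_proj_image * M_view_image (ours)
varying vec2 v_textureCoordinates;

void main() {
  // Step 1: world position from UV + depth via the inverse view-projection.
  float depth = texture2D(depthTexture, v_textureCoordinates).r;
  vec4 clip = vec4(vec3(v_textureCoordinates, depth) * 2.0 - 1.0, 1.0);
  vec4 world = czm_inverseViewProjection * clip;
  world /= world.w;
  // Step 2: reproject into the Image Camera and convert to UV.
  vec4 image = u_imageViewProj * world;
  vec2 uv = image.xy / image.w * 0.5 + 0.5;
  // Step 3: sample the Image Camera color.
  gl_FragColor = texture2D(u_imageColor, uv);
}
`;

// In the app (assumes a Cesium.Viewer named "viewer"):
// viewer.scene.postProcessStages.add(new Cesium.PostProcessStage({
//   fragmentShader: fragmentShaderSource,
//   uniforms: {
//     u_imageColor: imageCamera.colorTexture,
//     u_imageViewProj: imageCamera.viewProjectionMatrix,
//   },
// }));
```

Per-pixel Gaussian ellipse sampling would replace the single texture2D lookup in step 3 with several jittered taps weighted by the covariance matrix.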