In our multi-camera setups with known poses and RGB-Depth images, we can accurately sample pixels from an Image Camera into each pixel of the View Camera via a fragment shader. The current approach of treating pixels as discrete squares causes aliasing and perspective-distortion artifacts, especially with nearby cameras and strong parallax.
Implement a post-process fragment shader in CesiumJS with the following behavior. Each pixel in the View Camera is treated as a directional ray on a panosphere, defined by azimuth (ϕ) and elevation (θ) together with a Gaussian ellipse sample (covariance matrix). The depth texture is used to reconstruct the world position of each pixel as follows:
P_world = M_viewProj^-1 * P_clip

    P_clip: Clip-space position of the pixel, reconstructed from the depth texture.
    M_viewProj^-1: Inverse view-projection matrix of the View Camera.
    P_world: Reconstructed world position.

The world position is then projected into the Image Camera's clip space using its known view and projection matrices:
P_image = M_proj_image * M_view_image * P_world

    M_view_image: View matrix of the Image Camera.
    M_proj_image: Projection matrix of the Image Camera.
    P_image: Resulting position in the Image Camera's clip space (before the perspective divide).

The projected coordinates are then perspective-divided and converted to texture space (UV) for sampling:
UV_image = (P_image.x / P_image.w * 0.5 + 0.5, P_image.y / P_image.w * 0.5 + 0.5)
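The three steps above (depth to world position, world position to Image Camera clip space, clip space to UV) can be sketched in plain Python. This is an illustrative model of the math the GLSL shader would perform, not CesiumJS API code; the row-major matrix layout, the [0, 1] depth convention, and the helper names are assumptions for the sketch.

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix (nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert4(m):
    """Gauss-Jordan inverse of a 4x4 matrix."""
    a = [list(row) + [float(i == r) for i in range(4)]
         for r, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(4):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

def world_from_depth(uv, depth, inv_view_proj):
    """P_world = M_viewProj^-1 * P_clip, assuming depth in [0, 1]."""
    # UV and depth to normalized device coordinates in [-1, 1].
    ndc = [uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0]
    p = mat_vec(inv_view_proj, ndc)
    return [p[0] / p[3], p[1] / p[3], p[2] / p[3]]  # perspective divide

def image_uv(p_world, view_image, proj_image):
    """P_image = M_proj_image * M_view_image * P_world, then NDC -> UV."""
    p_image = mat_vec(proj_image, mat_vec(view_image, list(p_world) + [1.0]))
    if p_image[3] <= 0.0:
        return None  # behind the Image Camera: no valid sample
    return (p_image[0] / p_image[3] * 0.5 + 0.5,
            p_image[1] / p_image[3] * 0.5 + 0.5)
```

With both cameras at the same pose (identity matrices) the round trip maps a pixel back onto its own UV, which makes a convenient sanity check before wiring the same math into the shader.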
Using the calculated UV coordinates, the corresponding color is sampled from the Image Camera's RGB-Depth texture:
Color_sampled = texture2D(ImageColorTexture, UV_image)
Optionally, depth verification can be performed to confirm the correspondence and handle occlusions: compare the depth stored in the Image Camera's depth texture at UV_image with the depth of the reprojected point, and reject the sample when they disagree.
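The optional depth check reduces to comparing the depth the Image Camera recorded at UV_image against the depth implied by the reprojected point. A minimal sketch follows; the tolerance value is an illustrative assumption, not a tuned constant.

```python
def depth_consistent(sampled_depth, reprojected_depth, eps=1e-3):
    """Return True when the reprojected point matches the surface the
    Image Camera actually saw at this UV.

    A sampled depth noticeably smaller than the reprojected depth means a
    nearer surface occluded this point in the Image Camera's view, so the
    color sample should be rejected (or attenuated) rather than composited.
    """
    return abs(sampled_depth - reprojected_depth) < eps
```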
This implementation is crucial for accurate multi-view compositing and rendering in our current CesiumJS projects, especially for scenarios involving nearby cameras with RGB-Depth information. The deliverable is a PostProcessStage in CesiumJS using GLSL for the reprojection calculations.