SyB3R - Synthetic Benchmark for 3D Reconstruction
The goal of SyB3R is to provide a flexible framework that emulates the effects of real-world cameras. To evaluate the impact of a specific effect on, e.g., the reconstruction, it is necessary to change the strength and/or nature of that effect. Complete rerendering with Cycles, however, requires a substantial amount of computation time. The image formation process is therefore split into two parts: the part that has to be handled in Cycles on the one hand, and the part that can be implemented as a post processing effect on the other hand. The former contains all material, lighting, and projection effects, while the latter handles all effects that can be implemented purely in image space. This makes it possible to quickly test the effect of, e.g., camera motion blur on reconstruction quality without having to rerender the images.
The full list of effects can be found in the following table:
Effect | Location | Notes |
---|---|---|
Material & Lighting | Cycles | Transparency, Glossiness, Refraction, ... |
Depth of Field | Cycles | Focus breathing must be implemented manually by animating focus distance in tandem with focal length. |
Object Motion Blur | Cycles | |
Auto Exposure | Post Processing | Currently has no effect on Sensor Noise, Motion Blur, or Depth of Field. |
Camera Motion Blur | Post Processing | Small blur due to camera rotation through linear blur. |
Radial Distortion | Post Processing | Simple polynomial model with resampling. |
Sensor Noise | Post Processing | |
Demosaicing | Post Processing | Currently only optional simple linear interpolation. |
Color Transformation | Post Processing | |
Tonemapping | Post Processing | |
Compression | Post Processing | Standard JPG compression |
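To illustrate one of the image-space effects above, the following sketch applies a simple polynomial radial distortion by resampling, in the spirit of the table entry. It is written in Python for brevity; the exact polynomial and resampling used by SyB3R may differ, and the coefficients `k1`, `k2` are placeholders.

```python
import numpy as np

def apply_radial_distortion(img, k1, k2, cx, cy):
    """Resample an image through a simple polynomial radial distortion model.

    Uses the common model r_d = r * (1 + k1*r^2 + k2*r^4); this is an
    illustrative sketch, not the actual SyB3R implementation.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Normalized coordinates relative to the projection center (cx, cy).
    x = (xs - cx) / w
    y = (ys - cy) / w
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    # Inverse warp: for each output pixel, sample the source at the
    # distorted position.
    src_x = np.clip(cx + x * scale * w, 0, w - 1)
    src_y = np.clip(cy + y * scale * w, 0, h - 1)
    # Nearest-neighbour resampling keeps the sketch short; bilinear
    # interpolation would give smoother results.
    return img[src_y.round().astype(int), src_x.round().astype(int)]
```

With `k1 = k2 = 0` the warp is the identity, which makes the model easy to sanity-check before enabling distortion.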
The central class that represents a post processing chain is the ImagePostprocessor. It contains a list of ImagePostprocessorSteps that process the raw Cycles output and finally produce a JPG image. The input image from Cycles is extracted from the .exr file and passed as a FloatImage through the post processing steps. In addition, meta information, such as the location of the projection center (for radial distortion) or jpeg compression strength, is passed alongside the image data and allows post processing steps to generate information for later steps or to react to user defined settings.
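The chain structure described above can be sketched as follows. The class names mirror the C++ `ImagePostprocessor`/`ImagePostprocessorStep` interface, but this is a minimal Python illustration of the pattern, not the real API; `GainStep` is a hypothetical example step.

```python
class ImagePostprocessorStep:
    """Illustrative mirror of the C++ step interface: each step receives
    the image plus a meta-information dict and returns both, possibly
    modified, so later steps can react to earlier ones."""
    def process(self, image, meta):
        raise NotImplementedError

class GainStep(ImagePostprocessorStep):
    """Hypothetical example step: scales all pixel values and records
    the applied gain in the meta information for later steps."""
    def __init__(self, gain):
        self.gain = gain
    def process(self, image, meta):
        meta["applied_gain"] = self.gain
        return [p * self.gain for p in image], meta

class ImagePostprocessor:
    """Runs the steps in order, threading image and meta data through."""
    def __init__(self, steps):
        self.steps = steps
    def run(self, image, meta=None):
        meta = dict(meta or {})
        for step in self.steps:
            image, meta = step.process(image, meta)
        return image, meta
```

A custom chain is then just a list of steps, e.g. `ImagePostprocessor([GainStep(2.0)]).run(pixels)`.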
When building custom post processing steps and/or chains, keep in mind that the initial input produced by Cycles is in a linear RGB color space, not the raw color space of a camera sensor. This eases tweaking of the lighting, materials, and textures in Blender. If, e.g., demosaicing is to be carried out in the color space of the camera sensor, the data has to be transformed with the inverse of the usual color transformation before the demosaicing.
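The inverse transformation mentioned above amounts to inverting the 3x3 color matrix. A minimal sketch, assuming a hypothetical sensor-to-linear-RGB matrix (the values below are invented for illustration only):

```python
import numpy as np

# Hypothetical 3x3 color matrix mapping sensor space -> linear RGB.
# A real matrix would come from the camera calibration.
COLOR_MATRIX = np.array([[ 1.8, -0.5, -0.3],
                         [-0.4,  1.6, -0.2],
                         [-0.1, -0.6,  1.7]])

def to_sensor_space(linear_rgb):
    # Cycles output is linear RGB; map it back to sensor space with the
    # inverse of the usual color transformation before demosaicing.
    return linear_rgb @ np.linalg.inv(COLOR_MATRIX).T

def to_linear_rgb(sensor):
    # After demosaicing, apply the forward transformation again.
    return sensor @ COLOR_MATRIX.T
```

Demosaicing would then operate on the pixels returned by `to_sensor_space`, and the forward transform restores linear RGB afterwards; the two functions are exact inverses of each other.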
The following is the post processing chain used in the paper; however, other configurations, including different demosaicing filters, are also possible.
Run the camera noise analysis tool to create a set of color images:
[path/to/syb3r/]buildRelease/executables/CameraNoiseAnalysis/cameraNoiseAnalysis --generate-imgs testimgs/
This will create a folder testimgs/ with 100 images of random color. Display them full screen on a monitor and take one out-of-focus shot of each with a fixed, reasonable exposure. Be sure to move the mouse cursor out of the way and to ensure that there are no reflections in the shots.
Next, feed the images from the camera into the tool alongside the tonemapping curve [we estimated earlier]{#EstimateToneMappingJpg}.
[path/to/syb3r/]buildRelease/executables/CameraNoiseAnalysis/cameraNoiseAnalysis --estimate-noise-jpg [path/to/camera/imgs.JPG]
The result will be something like:
color_to_variance =
 0.000506956 -7.32317e-05 -5.97523e-07
-2.92257e-05  0.00032779  -6.04901e-06
 1.43484e-06 -6.07628e-05  0.000488199
variance_offset = -1.08624e-05 -2.31157e-05 -3.53225e-05
These are the matrix and offset in