SyB3R - Synthetic Benchmark for 3D Reconstruction
The goal of SyB3R is to provide a flexible framework that emulates the effects of real-world cameras. In order to evaluate the impact of specific effects on, e.g., the reconstruction, it is necessary to change the strength and/or nature of those effects. Complete rerendering with Cycles, however, requires a substantial amount of computation time. Thus, the image formation process is split into two parts: the part that has to be handled in Cycles, and the part that can be implemented as a post processing effect. The former contains all material, lighting, and projection effects, while the latter handles all effects that can be implemented purely in image space. This makes it possible to quickly test the effect of, e.g., camera motion blur on the reconstruction quality without having to rerender the images.
The full list of effects can be found in the following table:
| Effect | Location | Notes |
|---|---|---|
| Material & Lighting | Cycles | Transparency, glossiness, refraction, ... |
| Depth of Field | Cycles | Focus breathing must be implemented manually by animating the focus distance in tandem with the focal length. |
| Object Motion Blur | Cycles | |
| Auto Exposure | Post Processing | Currently has no effect on sensor noise, motion blur, or depth of field. |
| Camera Motion Blur | Post Processing | Small blur due to camera rotation, approximated by a linear blur. |
| Radial Distortion | Post Processing | Simple polynomial model with resampling. |
| Sensor Noise | Post Processing | |
| Demosaicing | Post Processing | Currently only an optional, simple linear interpolation. |
| Color Transformation | Post Processing | |
| Tonemapping | Post Processing | |
| Compression | Post Processing | Standard JPEG compression. |
The central class that represents a post processing chain is the `ImagePostprocessor`. It contains a list of `ImagePostprocessorStep`s that process the raw Cycles output and finally produce a JPEG image. The input image from Cycles is extracted from the .exr file and passed as a `FloatImage` through the post processing steps. In addition, meta information, such as the location of the projection center (for radial distortion) or the JPEG compression strength, is passed alongside the image data and allows post processing steps to generate information for later steps or to react to user-defined settings.
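For illustration, a custom step might look roughly as follows. This is only a sketch: the exact `ImagePostprocessorStep` interface, the method name, and the `FloatImage` accessors are assumptions here, so consult the SyB3R headers for the real signatures.

```cpp
#include <cmath>

// Sketch of a custom step applying a fixed gamma curve. The base class,
// method name, and FloatImage accessors are assumptions, not the
// verified SyB3R interface.
class GammaStep : public syb3r::synthesis::ImagePostprocessorStep {
public:
    void process(FloatImage &image, MetaInfo &meta) override {
        // Post processing steps operate purely in image space; 'meta'
        // carries information such as the projection center between steps.
        for (unsigned y = 0; y < image.height(); y++)
            for (unsigned x = 0; x < image.width(); x++)
                for (unsigned c = 0; c < 3; c++)
                    image(x, y, c) = std::pow(image(x, y, c), 1.0f / 2.2f);
    }
};
```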
When building custom post processing steps and/or chains, keep in mind that the initial input produced by Cycles is in a linear RGB color space, not in the raw color space of a camera sensor. This eases the tweaking of lighting, materials, and textures in Blender. If, e.g., demosaicing is to be carried out in the color space of the camera sensor, the data first has to be transformed with the inverse of the usual color transformation before the demosaicing.
The following is the post processing chain used in the paper. However, other configurations, e.g., ones including demosaicing filters, are possible and will be added in the future. To build a post processing chain, instantiate the `syb3r::synthesis::ImagePostprocessor` class and add post processing steps. For example, the following sets up a very simple post processing chain with auto exposure, sensor noise, and tonemapping:
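A minimal sketch of such a chain is shown below; the step class names, constructor parameters, and the `addStep` method are assumptions for illustration, not the verified SyB3R API:

```cpp
// Sketch only: step names, parameters, and addStep() are assumptions.
syb3r::synthesis::ImagePostprocessor postprocessor;
postprocessor.addStep(new AutoExposure());                        // scale the overall intensity
postprocessor.addStep(new SensorNoise(noiseMatrix, noiseOffset)); // parameters estimated from a real camera
postprocessor.addStep(new Tonemapping(responseCurves));           // measured camera response curves
```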
You can also create post processing chains that are closer to the actual operations inside a camera by first applying the noise and then computing an actual demosaicing and color space transformation on top of that:
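The following sketch illustrates this ordering (again with assumed step names and parameters):

```cpp
// Sketch only: step names and parameters are assumptions.
syb3r::synthesis::ImagePostprocessor postprocessor;
postprocessor.addStep(new InverseColorTransform(colorMatrix));    // linear RGB -> camera sensor space
postprocessor.addStep(new SensorNoise(noiseMatrix, noiseOffset)); // noise acts in sensor space
postprocessor.addStep(new Demosaicing());                         // linear interpolation of the Bayer pattern
postprocessor.addStep(new ColorTransform(colorMatrix));           // sensor space -> linear RGB
postprocessor.addStep(new Tonemapping(responseCurves));
postprocessor.addStep(new JpgCompression(90));                    // write the final JPEG
```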
Notice how we first have to transform from the linear RGB color space of Cycles into the color space of the camera.
Some of the predefined post processing steps require parameters that can be estimated from real cameras. The following describes how these parameters can be estimated and used in the framework.
Based on the work by Paul Debevec on stacking multiple camera JPEG images into one linear HDR image, we provide a tool that computes the nonlinear mapping between JPEG colors and linear intensities. The idea is to capture multiple images with varying exposure settings and then relate the known (relative) exposures to the observed values in the JPEG images. The tool requires the library exiv2 and the CMake switch `syb3r_has_exiv2` to be set to on (`-Dsyb3r_has_exiv2=on`).
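For background, the idea follows Debevec and Malik's formulation (a recap of the underlying method, not a description of this tool's exact internals): a pixel value Z is modeled as Z = f(E · Δt), where E is the unknown irradiance and Δt the known exposure time. Writing g = ln f⁻¹ turns this into the linear relation g(Z) = ln E + ln Δt, which is solved for the sampled values of g (the response curve) and the irradiances in a least-squares sense.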
Set up your camera in the following way: mount it on a tripod in front of a static scene with constant lighting, switch to manual mode, and keep the aperture, ISO, white balance, and focus fixed so that only the shutter speed changes between shots.
Now shoot a sequence of images with increasing shutter speeds, going all the way from dark to bright. The images might look like this:
Depending on the camera, there might be scripts, apps, or built-in functions to automate this.
Usually, selecting the lowest ISO suppresses the noise sufficiently. If, however, your camera is particularly noisy, you can downsample the images to suppress the noise further.
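For example, assuming ImageMagick is installed, each image could be downsampled to half its resolution with:

```
convert input.JPG -resize 50% downsampled.JPG
```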
Finally, feed the images to the tool to estimate the curves:
```
[path/to/syb3r/]buildRelease/executables/ComputeCameraResponseCurve/computeCameraResponseCurve --generate-xml tonemappingCurves.xml --filenames [path/to/images/]*.JPG
```
The curves are stored in the XML file and can be loaded or passed to other tools. Sometimes it is convenient to hard-code the curves into a program to avoid carrying around lots of external files. In this case, you can also invoke:
```
[path/to/syb3r/]buildRelease/executables/ComputeCameraResponseCurve/computeCameraResponseCurve --generate-c++ --filenames [path/to/images/]*.JPG
```
The output will be similar to the following:
This C++ code can be copy-pasted and used in SyB3R.
If the tool is run with the `--generate-mat-plot` argument, it generates a short MATLAB script for plotting the curves. The plot should look similar to the following:
Run the camera noise analysis tool to create a set of color images:
```
[path/to/syb3r/]buildRelease/executables/CameraNoiseAnalysis/cameraNoiseAnalysis --generate-imgs testimgs/
```
This will create a folder `testimgs/` with 100 images of random color. Display them on a monitor in full screen and take one out-of-focus shot of each with a fixed, reasonable exposure. Be sure to move the mouse cursor out of the way and make sure that there are no reflections in the shots.
Next, feed the images from the camera into the tool alongside the tonemapping curves we estimated earlier:
```
[path/to/syb3r/]buildRelease/executables/CameraNoiseAnalysis/cameraNoiseAnalysis --tonemapCurve tonemappingCurves.xml --estimate-noise-jpg [path/to/camera/imgs.JPG]
```
The result will be something like:
These are the matrix and offset of the noise model; they can be fed into the noise post processor.
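To illustrate how such parameters might enter the noise synthesis, here is a sketch assuming a simple signal-dependent Gaussian model in which the per-channel variance is an affine function of the linear pixel color; the model details and names are assumptions, not the verified SyB3R internals:

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float r, g, b; };

// Sketch: add Gaussian noise whose per-channel variance is an affine
// function of the pixel color (variance = noiseMatrix * rgb + noiseOffset).
// This model is an assumption for illustration.
Vec3 addSensorNoise(const Vec3 &rgb, const float noiseMatrix[3][3],
                    const float noiseOffset[3], std::mt19937 &rng)
{
    std::normal_distribution<float> gauss(0.0f, 1.0f);
    const float in[3] = { rgb.r, rgb.g, rgb.b };
    float out[3];
    for (int c = 0; c < 3; c++) {
        float variance = noiseOffset[c];
        for (int i = 0; i < 3; i++)
            variance += noiseMatrix[c][i] * in[i];
        const float sigma = std::sqrt(std::max(0.0f, variance));
        out[c] = in[c] + sigma * gauss(rng);
    }
    return { out[0], out[1], out[2] };
}
```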
We differentiate between camera motion blur due to camera rotation on the one hand, and camera translation (only relevant in extreme cases) and object motion blur on the other hand. The former can be computed as a post process by filtering the rendered images, while the latter has to be computed in Blender/Cycles. To simulate small amounts of camera motion blur, the linear blur post processing step can perform a linear blur in a random direction, with a blur length sampled from a distribution that was fitted to measured values.
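A compact sketch of such a linear blur is shown below. It assumes a single-channel float image in row-major layout, and a log-normal length distribution stands in for whatever distribution was fitted to the measurements; none of this mirrors SyB3R's actual implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Sketch: blur a row-major grayscale float image along a random line.
// The log-normal length distribution is an assumed stand-in for a
// distribution fitted to measured camera shake.
void linearBlur(std::vector<float> &img, int width, int height, std::mt19937 &rng)
{
    const float pi = 3.14159265358979f;
    std::uniform_real_distribution<float> angleDist(0.0f, 2.0f * pi);
    std::lognormal_distribution<float> lengthDist(0.5f, 0.5f);

    const float angle  = angleDist(rng);
    const float length = lengthDist(rng);              // blur length in pixels
    const int taps = std::max(2, (int)std::ceil(length) + 1);

    std::vector<float> result(img.size(), 0.0f);
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            float sum = 0.0f;
            for (int t = 0; t < taps; t++) {
                // Sample positions along the blur line, clamped to the image.
                const float f = (float)t / (taps - 1) - 0.5f;
                const int sx = std::clamp((int)std::round(x + f * length * std::cos(angle)), 0, width  - 1);
                const int sy = std::clamp((int)std::round(y + f * length * std::sin(angle)), 0, height - 1);
                sum += img[sy * width + sx];
            }
            result[y * width + x] = sum / taps;
        }
    img.swap(result);
}
```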
The distribution over the motion blur lengths, or by extension the angular velocities, is not fixed and depends on many factors, such as the weight of the camera, the weight of the lens, active stabilization in the camera or lens, the ability and concentration of the photographer, etc. To capture and compute the distribution for a specific camera (and camera setting and photographer), we provide a small tool. The idea is to capture images of small bright dots and measure the length of the trails that those dots leave in the images. Exposure time and focal length have to be balanced such that the blur amount is in a reasonable range (a handful of pixels). More blur usually results in curved trails, which can neither be measured nor reproduced in this basic setup (though keep in mind that such strong blur is very detrimental to the all-important small-scale details in the images and should be avoided for MVS anyway).
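As a rough rule of thumb (an approximation assuming rotation-dominated shake and small angles, not a formula from the paper), the trail length s in pixels relates to the angular velocity ω, the exposure time Δt, and the focal length expressed in pixels f_px as s ≈ ω · Δt · f_px, so doubling the exposure time or the focal length roughly doubles the measured blur length. This can help pick settings that keep the trails within the measurable range.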
We found the easiest way to capture images of bright dots is to display a black image with a couple of white pixels in native resolution on a computer monitor and take pictures of that. The downside is that, depending on the screen resolution, the dots can be quite large in the images. To generate a black image with white pixels, execute:
```
[path/to/syb3r/]buildRelease/executables/CameraShakeAnalysis/cameraShakeAnalysis --generateImgs path/to/img.png
```
or simply draw one yourself in GIMP. Display the image on a computer screen in full screen at native resolution and take images (~50-100) of it. The images should be completely black except for the white dots. Take the images with the desired focal length and exposure time, and adjust the aperture and/or ISO to compensate. The images should look a bit like this (only a crop is shown):
Feed those images into the analysis tool to compute the statistics:
```
[path/to/syb3r/]buildRelease/executables/CameraShakeAnalysis/cameraShakeAnalysis --filenames path/to/images/*.JPG
```