SyB3R - Synthetic Benchmark for 3D Reconstruction
Post Processing Chain

Table of Contents

Post Processing of Rendered Images

Intro

The goal of SyB3R is to provide a flexible framework that emulates the effects of real-world cameras. In order to evaluate the impact of specific effects on, e.g., the reconstruction, it is necessary to change the strength and/or nature of those effects. Complete rerendering with Cycles, however, requires a substantial amount of computation time. Thus, the image formation process is split into two parts: the part that has to be handled in Cycles on the one hand, and the part that can be implemented as a post processing effect on the other. The former contains all material, lighting, and projection effects, while the latter handles all effects that can be implemented purely in image space. This makes it possible to quickly test the effect of, e.g., camera motion blur on reconstruction quality without having to rerender the images.

The full list of effects can be found in the following table:

Effect                Location          Notes
Material & Lighting   Cycles            Transparency, glossiness, refraction, ...
Depth of Field        Cycles            Focus breathing must be implemented manually by animating focus distance in tandem with focal length.
Object Motion Blur    Cycles
Auto Exposure         Post Processing   Currently has no effect on sensor noise, motion blur, or depth of field.
Camera Motion Blur    Post Processing   Small blur due to camera rotation, approximated by a linear blur.
Radial Distortion     Post Processing   Simple polynomial model with resampling.
Sensor Noise          Post Processing
Demosaicing           Post Processing   Currently only optional simple linear interpolation.
Color Transformation  Post Processing
Tonemapping           Post Processing
Compression           Post Processing   Standard JPG compression.

Overview

The central class that represents a post processing chain is the ImagePostprocessor. It contains a list of ImagePostprocessorSteps that process the raw Cycles output and finally produce a JPG image. The input image from Cycles is extracted from the .exr file and passed as a FloatImage through the post processing steps. In addition, meta information, such as the location of the projection center (for radial distortion) or the JPEG compression strength, is passed alongside the image data; it allows post processing steps to generate information for later steps or to react to user-defined settings.

When building custom post processing steps and/or chains, keep in mind that the initial input, produced by Cycles, is in a linear RGB color space, not the raw color space of a camera sensor. This eases tweaking of the lighting, materials, and textures in Blender. If, e.g., demosaicing is to be carried out in the color space of the camera sensor, the data has to be transformed with the inverse of the usual color transformation before the demosaicing.

Usage

#include <syb3r/tools/TmpDir.h>
#include <syb3r/synthesis/ImagePostprocessor.h>
#include <thread>
//...
// Initialize the global thread pool
syb3r::tools::TaskScheduler::Init(std::thread::hardware_concurrency());
//...
// Empty post processor
syb3r::synthesis::ImagePostprocessor postprocessor;
// Add a predefined set of post processing steps
postprocessor.setupOldEOS400();
// Set jpeg quality
postprocessor.initialProperties.set("jpegCompression", 98);
// Load a dataset (scene.xml as produced by blender)
dataset.loadFromXML(datasetFilename);
// Creates a temporary directory under "/tmp/" which automatically
// gets deleted once the tmpDir variable goes out of scope.
syb3r::tools::TmpDir tmpDir;
// Process all images in the dataset
std::vector<std::string> filenames;
postprocessor.process(dataset, tmpDir.getPath().string(), filenames);
// The vector "filenames" now contains one filename for each image in the
// dataset, in the same order. The list of filenames (images) can now be fed
// into a reconstruction pipeline, a feature detector, etc...

Building a Postprocessing Chain

The following is the postprocessing chain used in the paper:

However, other configurations, e.g. ones including demosaicing filters, are possible, and further steps will be added in the future. To build a postprocessing chain, instantiate the syb3r::synthesis::ImagePostprocessor class and add post processing steps. For example, the following sets up a very simple post processing chain with sensor noise and tonemapping:

// Empty post processor
syb3r::synthesis::ImagePostprocessor postprocessor;
// Sensor noise
postprocessor.appendStep(new syb3r::synthesis::ImagePP_SensorNoise(0.002f, 0.025f));
// Tonemapping with a measured camera response curve
postprocessor.appendStep(new syb3r::synthesis::ImagePP_TonemappingCurve(syb3r::models::curves_Eos400D_neutral_5200K_daylight));

You can also create postprocessing chains that are closer to the actual operations inside the camera by first applying noise and then computing actual demosaicing and color space transformations on top of that.

// Empty post processor
syb3r::synthesis::ImagePostprocessor postprocessor;
// Do "pre camera" distortions
// Switch from RGB to the color space of the camera
Eigen::Matrix3f eos400ToRgb = ImagePP_ColorMatrix::XYZ_to_sRGB * ImagePP_ColorMatrix::CanonEOS400_to_XYZ;
postprocessor.appendStep(new syb3r::synthesis::ImagePP_ColorMatrix(eos400ToRgb.inverse()));
// Sensor noise
postprocessor.appendStep(new syb3r::synthesis::ImagePP_SensorNoise(0.002f, 0.025f)); // add some noise, not based on actual measurements here.
// Demosaic
// Transform back to RGB
postprocessor.appendStep(new syb3r::synthesis::ImagePP_ColorMatrix(eos400ToRgb));
// Transform to sRGB
postprocessor.appendStep(new syb3r::synthesis::ImagePP_LinearToGamma()); // just a simple sRGB gamma curve, no actual tonemapping.

Notice how we first have to transform from the linear RGB color space of Cycles into the color space of the camera.

Estimating Camera Parameters

Some of the predefined post processing steps require parameters that can be estimated from real cameras. The following describes how these parameters can be estimated and used in the framework.

Tonemapping from JPG

Based on the work by Paul Debevec on stacking multiple camera JPG images into one linear HDR image, we provide a tool that computes the nonlinear mapping between JPG colors and linear intensities. The idea is to capture multiple images with varying exposure settings and then relate the known (relative) exposures to the observed values in the JPG images. The tool requires the exiv2 library and the CMake switch syb3r_has_exiv2 to be set to on (-Dsyb3r_has_exiv2=on).

Set up your camera in the following way:

Now shoot a sequence of images with increasing exposure times, going all the way from dark to bright. The images might look like this:

Depending on the camera, there might be scripts, apps, or built-in functions to automate this.

Usually, selecting the lowest ISO sufficiently suppresses noise. If, however, your camera is really bad in terms of noise, you can downsample the images to suppress the noise further.

Finally, feed the images to the tool to estimate the curves:

[path/to/syb3r/]buildRelease/executables/ComputeCameraResponseCurve/computeCameraResponseCurve --generate-xml tonemappingCurves.xml --filenames [path/to/images/]*.JPG

The curves are stored in the XML file and can be loaded or passed to other tools. Sometimes it is convenient to hard-code the curves into a program to remove the need to carry around lots of external files. In this case, you can also invoke:

[path/to/syb3r/]buildRelease/executables/ComputeCameraResponseCurve/computeCameraResponseCurve --generate-c++ --filenames [path/to/images/]*.JPG

The output will be similar to the following:

syb3r::math::CameraRGBResponseCurve<256> curves = {
{
0.0243456, 0.0272423, 0.0304837, 0.0340851, 0.0380527, 0.0423849, 0.0470745, 0.0521101, 0.0574742, 0.0631421, 0.069084, 0.0752647, 0.0816487, 0.0881996, 0.09488, 0.101652,
// snip
5.46644, 5.57984, 5.69626, 5.81575, 5.93835, 6.0641, 6.19306, 6.32525, 6.4607, 6.59943, 6.74148, 6.88686, 7.03557, 7.18761, 7.34301, 7.50177,
},
{
0.0165029, 0.0187662, 0.0213399, 0.0242585, 0.0275453, 0.031216, 0.0352792, 0.0397369, 0.044584, 0.0498077, 0.0553888, 0.0613012, 0.0675129, 0.0739869, 0.0806837, 0.0875612,
// snip
6.10365, 6.2596, 6.42056, 6.58664, 6.75797, 6.93465, 7.11677, 7.30443, 7.49766, 7.69653, 7.90109, 8.11141, 8.32756, 8.54963, 8.77771, 9.01187,
},
{
0.0273939, 0.0305568, 0.0340849, 0.0379927, 0.0422857, 0.0469634, 0.0520214, 0.0574494, 0.0632307, 0.0693439, 0.0757623, 0.0824543, 0.0893851, 0.0965183, 0.103818, 0.111246,
// snip
6.76145, 7.04144, 7.33815, 7.65223, 7.98431, 8.33507, 8.70515, 9.09521, 9.50583, 9.93762, 10.3912, 10.867, 11.3657, 11.8879, 12.4344, 13.0061,
}
};

This C++ code can be copied and pasted into programs using SyB3R.

If the tool is run with the --generate-mat-plot argument, it generates a short MATLAB script for plotting the curves. The plot should look similar to the following:

Image Noise from JPG

Run the camera noise analysis tool to create a set of color images:

[path/to/syb3r/]buildRelease/executables/CameraNoiseAnalysis/cameraNoiseAnalysis --generate-imgs testimgs/

This will create a folder testimgs/ with 100 images of random color. Display them on a monitor in full screen and take one out-of-focus shot of each with fixed, reasonable exposure. Be sure to move the mouse cursor out of the way and that there are no reflections in the shots.

Next, feed the images from the camera into the tool alongside the tonemapping curve we estimated earlier:

[path/to/syb3r/]buildRelease/executables/CameraNoiseAnalysis/cameraNoiseAnalysis --tonemapCurve tonemappingCurves.xml --estimate-noise-jpg [path/to/camera/imgs.JPG]

The result will be something like:

Eigen::Matrix3f color_to_variance;
color_to_variance <<
0.00532287, -0.000395085, -8.36541e-05,
-0.000161487, 0.00249263, 0.000129661,
2.44104e-05, -7.88616e-05, 0.00377036;
Eigen::Vector3f variance_offset(0.000210778, -0.000130843, 0.000184041);

These are the matrix $A$ and offset $b$ in $\sigma^2_{r,g,b} = A \cdot I_{r,g,b} + b$, i.e. the per-channel noise variance as a function of the linear color, and can be fed into the noise post processor.

Motion Blur

We differentiate between camera motion blur (primarily due to rotation of the camera) on the one hand and camera translation (only relevant in extreme cases) and object motion blur on the other hand. The former can be computed as a post process by filtering the rendered images while the latter has to be computed in Blender/Cycles. To simulate small amounts of camera motion blur, the Linear Blur post processing step can perform a linear blur of random direction with a blur amount/length sampled from a distribution that was fitted to measured values.

The distribution over the motion blur lengths, or by extension angular velocities, is not fixed and depends on many factors, such as the weight of the camera, the weight of the lens, active stabilization in the camera or lens, the ability and concentration of the photographer, etc. To capture and compute the distribution for a specific camera (and camera setting and photographer), we provide a small tool. The idea is to capture images of small bright dots and measure the length of the trails that those dots leave in the images. Exposure time and focal length have to be balanced such that the blur amount is in a reasonable range (a handful of pixels). More blur usually results in curved trails, which can neither be measured nor reproduced in this basic setup (though keep in mind that such strong blur is very detrimental to the all-important small scale details in the images and should be avoided for MVS anyway).

We found the easiest way to capture images of bright dots is to display a black image with a couple of white pixels in native resolution on a computer monitor and take pictures of that. The downside is that, depending on the screen resolution, the dots can be quite large in the images. To generate a black image with white pixels, execute

[path/to/syb3r/]buildRelease/executables/CameraShakeAnalysis/cameraShakeAnalysis --generateImgs path/to/img.png

or simply draw one yourself in Gimp. Display the image on a computer screen in full screen at native resolution and take images (~50-100) of it. The images should be completely black except for the white dots. Take the images with the desired focal length and exposure time and adjust aperture and/or ISO to compensate. The images should look a bit like this (only a crop is shown):

Feed those images into the analysis tool to compute the statistics:

[path/to/syb3r/]buildRelease/executables/CameraShakeAnalysis/cameraShakeAnalysis --filenames path/to/images/*.JPG