SyB3R - Synthetic Benchmark for 3D Reconstruction
Generating your own image sequences using SyB3R

Introduction

SyB3R comes with a small set of 3D models, image data, and ground truth. These datasets have been generated to illustrate the potential use of SyB3R and are not meant for a rigorous evaluation of any part of the 3D reconstruction pipeline. Nevertheless, the 3D models contain several challenging aspects (such as reflective or homogeneous surfaces, areas of low texture, self-occlusion, etc.), while the rendered images are degraded with realistic noise, camera motion blur, depth of field, etc.

You can use these models and images to run tests of your 3D reconstruction pipeline or its various components. This section describes how to generate your own image sequences and reference data in case you prefer to use your own benchmarks. If you create an interesting dataset, please consider sharing it, e.g., by sending it to us so we can upload it to the SyB3R webpage.

We will first describe how to set up a scene in Blender and then show how to render it. If you use one of our models and are fine with CPU-only rendering, you can skip the first part and go directly to the rendering part.

Preparing Blender

SyB3R should work with all recent versions of Blender. The version distributed through your distribution's package manager is probably fine. If you want the latest and greatest, or if you are rendering on a server to which you do not have root access, you can download a compiled binary from the Blender website, which can simply be unpacked and run.

If you are running on an Optimus system (GUI rendering through the Intel GPU with the option to offload heavy-duty rendering to the NVidia GPU), remember to run Blender on the NVidia GPU.

Installing SyB3R Add-on

For setting up the scene, SyB3R provides a set of tools that can be installed as an add-on. This is only necessary for the Blender instance that is used to create the scene. The Blender instances on the render nodes can be vanilla Blender versions.

To install and activate the add-on, start Blender and open the user preferences.

Switch to the "Add-ons" tab and click "Install from File".

Now select the [path/to/syb3r/]blenderUtils/Syb3rBlenderTools.py file from the SyB3R directory.

The add-on is now installed but not yet activated. To activate an add-on, check the small box to the left of its name. To keep the add-on active permanently, click on "Save User Settings".
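
If you prefer to script this step (for example when setting up several workstations), the add-on can also be installed and enabled from Blender's Python console. The following is only a minimal sketch: it assumes a recent Blender (2.8x or newer, where these operators live under bpy.ops.preferences; older 2.7x releases use bpy.ops.wm.addon_install and bpy.ops.wm.addon_enable instead), and it assumes the module name is simply the file name without the .py extension.

import bpy

# Install the add-on from the SyB3R checkout (adjust the path to your setup).
bpy.ops.preferences.addon_install(filepath="/path/to/syb3r/blenderUtils/Syb3rBlenderTools.py")

# Enable it; the module name is the file name without the .py extension.
bpy.ops.preferences.addon_enable(module="Syb3rBlenderTools")

# Persist the setting, equivalent to clicking "Save User Settings".
bpy.ops.wm.save_userpref()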

Besides the SyB3R add-on, there are a couple of other add-ons that might be useful to you. As a rule of thumb, if you want to import/export from/to a specific format, there is a good chance that a corresponding add-on is available.

Enable CUDA based Rendering

If you want to render on your GPU, you have to activate CUDA in the user preferences. In the user preferences switch to the System tab...

... and then select "CUDA" as the Compute Device. If you have multiple NVidia GPUs, select your preferred GPU for rendering.
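
On headless render nodes without a GUI, the same setting can be applied from a Python script passed to Blender with -P. This is a rough sketch, assuming Blender 2.8x or newer (older 2.7x versions expose the preferences as bpy.context.user_preferences instead of bpy.context.preferences):

import bpy

# Select CUDA as the compute device type in the Cycles preferences.
cycles_prefs = bpy.context.preferences.addons["cycles"].preferences
cycles_prefs.compute_device_type = "CUDA"

# Refresh the device list and enable all CUDA devices.
cycles_prefs.get_devices()
for device in cycles_prefs.devices:
    device.use = (device.type == "CUDA")

# Tell the current scene to render on the GPU.
bpy.context.scene.cycles.device = "GPU"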

Scene Preparation

Digital 3D Models

Create scene in Blender

Load model in Blender

To open an existing scene in Blender, click on "File" and then "Open" (or Ctrl+O) ...

... and select the corresponding model.

The shading of the viewport can be changed by the button at the bottom of the viewer.
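
The same can be done without the GUI, which is convenient when you only want to inspect a scene on a render node. A minimal sketch (the file path is just an example):

import bpy

# Open an existing scene; adjust the path to your model.
bpy.ops.wm.open_mainfile(filepath="/path/to/model.blend")

# List the objects in the scene to verify that the model was loaded.
for obj in bpy.data.objects:
    print(obj.name, obj.type)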

Add camera(s)

To add a camera, click on "Add" (or Shift-a in Object Mode) and then "Camera".

Note that we don't support all camera settings yet. To set up a newly added camera, hit "Space" to open the operator search menu, type "Syb3r" and select "Syb3r -> Initialize Camera".
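
Adding the camera can also be scripted. The sketch below only covers the plain Blender part; the SyB3R-specific initialization ("Syb3r -> Initialize Camera") still has to be run through the operator search menu as described above, since its internal operator name is not documented here. Location and rotation values are arbitrary examples.

import bpy

# Add a camera and make it the active scene camera.
bpy.ops.object.camera_add(location=(0.0, -5.0, 1.5), rotation=(1.4, 0.0, 0.0))
cam = bpy.context.object
cam.name = "Camera"
bpy.context.scene.camera = cam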

Init scene

The provided Python script can set up the scene automatically. Hit "Space" to open the operator search menu, type "Syb3r" and select "Syb3r -> Init Scene".

To set the scale of the scene, go to the scene properties and select the appropriate Blender-unit-to-meter scale.

This is easier to set/verify with a reference object: select an object of known real-world size and press N to open the sidebar, which shows the object dimensions (in meters). The scene scale can now be tweaked accordingly.
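
The same check can be done from the Python console. The sketch below assumes the metric unit system and a reference object named "ReferenceObject" (a hypothetical name, replace it with an object from your scene); with a length scale of s, one Blender unit corresponds to s meters.

import bpy

scene = bpy.context.scene

# Use the metric system; scale_length maps Blender units to meters
# (e.g. 0.01 means 1 Blender unit = 1 cm).
scene.unit_settings.system = "METRIC"
scene.unit_settings.scale_length = 0.01

# Check the real-world size of a reference object of known dimensions.
ref = bpy.data.objects["ReferenceObject"]   # placeholder name
size_m = [d * scene.unit_settings.scale_length for d in ref.dimensions]
print("reference object size in meters:", size_m)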

Set camera properties

The camera resolution is set under the Render settings.

To set camera focal length and sensor size, select the camera and go to camera data settings.

The depth of field, the focus point, and the aperture size and shape can optionally be defined.
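
These properties map directly to Python attributes, which is handy if you want to reproduce a real camera. A rough sketch, assuming a Blender 2.7x series where the Cycles depth-of-field settings live under camera.data.cycles (newer versions moved them to camera.data.dof); all values are example numbers:

import bpy

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]

# Image resolution (render settings).
scene.render.resolution_x = 1920
scene.render.resolution_y = 1280
scene.render.resolution_percentage = 100

# Focal length and sensor size in millimeters (camera data settings).
cam.data.lens = 50.0
cam.data.sensor_width = 36.0

# Optional depth of field: focus distance plus aperture size and shape.
cam.data.dof_distance = 2.5              # focus distance in Blender units
cam.data.cycles.aperture_type = "FSTOP"
cam.data.cycles.aperture_fstop = 2.8
cam.data.cycles.aperture_blades = 6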

Animate camera (if wanted)

Camera animation is the recommended way to handle image sequences (even unordered ones).

In a first step, set up the frame range, for example from 0 to 40 with a step of 10, to obtain five images in total.

Go to frame 0, select the camera and move/rotate it into the desired position. Then press I to insert a keyframe for the selected position and rotation.

Move to the next keyframe position by increasing the frame number by the frame step (i.e., go to frame 10 in this example). Move and rotate the camera to its new position and press I to insert the keyframe. Repeat this procedure for all keyframes.
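
The whole animation can also be set up from a script, which makes it easy to generate many viewpoints reproducibly. A minimal sketch using the example frame range from above; the camera poses are arbitrary placeholders:

import bpy
from math import radians

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]

# Frame range 0..40 with a step of 10 -> five rendered images.
scene.frame_start = 0
scene.frame_end = 40
scene.frame_step = 10

# Example poses: (location, rotation in degrees) per keyframe.
poses = [
    ((0.0, -5.0, 1.5), (80.0, 0.0, 0.0)),
    ((2.0, -4.5, 1.5), (80.0, 0.0, 25.0)),
    ((3.5, -3.0, 1.5), (80.0, 0.0, 50.0)),
    ((4.5, -1.0, 1.5), (80.0, 0.0, 75.0)),
    ((5.0,  1.0, 1.5), (80.0, 0.0, 100.0)),
]

for i, (location, rotation) in enumerate(poses):
    frame = scene.frame_start + i * scene.frame_step
    scene.frame_set(frame)
    cam.location = location
    cam.rotation_euler = [radians(a) for a in rotation]
    # Equivalent to pressing I and keyframing position and rotation.
    cam.keyframe_insert(data_path="location", frame=frame)
    cam.keyframe_insert(data_path="rotation_euler", frame=frame)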

Set render properties

There is a lot of information on the internet on how the path tracing parameters can be tweaked. A simple approach is the following (a scripted sketch of these settings follows the list):

  1. Set the number of bounces to zero
  2. Set all sample counts to one
  3. Tweak the number of anti-aliasing (AA) samples while paying attention to out-of-focus contours, edges, and downsampled textures. Make sure they are not noisy. Shadow penumbras and shading can still be noisy.
    • Switch to camera view, select a rect with CTRL+B, and press F12 to render that rect. Press CTRL+ALT+B to clear the rect.
  4. Set the number of diffuse bounces (usually 1-2 are sufficient)
  5. Set the number of glossy bounces (usually 1-2 are sufficient)
  6. Set the number of transmission "bounces": If you have a lot of transparency (e.g., from foliage), you might have to set this quite high, up to the maximum number of leaf quads that a view ray can pass through.
  7. Increase the number of diffuse samples until the dark corners of the image, which are only reached by ambient light, are sufficiently smooth.
  8. Look at the shadow penumbras of lights. Increase the number of samples for the corresponding lights until the shadow penumbras are smooth.
  9. Look at the shadow penumbras of mesh-lights (meshes with strong emissive material). Increase the global "Mesh Light" sample count until those penumbras are smooth.
  10. Increase the number of glossy samples until the reflections and highlights are sufficiently smooth.
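
For reference, the parameters mentioned above correspond to the following Cycles properties. This is only a hedged sketch with example values; it assumes a Blender version whose Cycles still offers branched path tracing with per-type sample counts (as in the 2.7x series), and uses a lamp object named "Lamp" as a placeholder.

import bpy

scene = bpy.context.scene
scene.render.engine = "CYCLES"

# Per-type sample counts are exposed by Cycles' branched path tracing mode.
scene.cycles.progressive = "BRANCHED_PATH"

# Anti-aliasing samples (step 3).
scene.cycles.aa_samples = 16

# Bounce counts (steps 1, 4-6); max_bounces caps all bounce types.
scene.cycles.max_bounces = 8
scene.cycles.diffuse_bounces = 2
scene.cycles.glossy_bounces = 2
scene.cycles.transmission_bounces = 8
# If the foliage uses alpha-mapped transparency, the separate
# scene.cycles.transparent_max_bounces setting may be the relevant knob.

# Per-type sample counts (steps 7 and 10).
scene.cycles.diffuse_samples = 4
scene.cycles.glossy_samples = 4

# Samples for individual lamps (step 8, placeholder name) and mesh lights (step 9).
bpy.data.objects["Lamp"].data.cycles.samples = 4
scene.cycles.mesh_light_samples = 4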

Rendering

Once all settings have been made, the scene can be finalized for rendering. Reduce the risk of unsatisfying rendering results by test-rendering sample areas beforehand (as described above). SyB3R provides a simple executable to create a rendering script, which needs the Blender scene as input. Thus, save the current scene by clicking on "File" and selecting "Save as...", for example as "awesomeScene.blend".

Open a console, change into the directory of this scene, call the SyB3R executable to create the Python script, and start the rendering:

cd [path/to/blend/file]
[path/to/syb3r/]buildRelease/executables/SynthesizeDataset/synthesizeDataset --filename renderAwesomeScene.py --outputDir awesomeSceneOutput/
[path/to/blender/]blender -b awesomeScene.blend -P renderAwesomeScene.py

The last line creates an .xml file containing all necessary scene settings (e.g., camera positions) as well as the corresponding data files, including HDR images, ground truth depth maps, and object ID files - all stored in two .exr files for each frame of the animated camera. To inspect the result, run

[path/to/syb3r/]buildRelease/executables/ImageBeautyConverter/imageBeautyConverter --dataset awesomeSceneOutput/scene.xml --outputDir awesomeSceneOutput/humanViewable/

which will convert the .exr files to common .jpg files.