News

Nebula benchmarks the ray tracing competitors

Knowing how you stand up to the competition is critical

Jon Peddie

 

Last December, we told you about Nebula, a ray tracing startup in Montreal, and its founder, Yann Clotioloman Yéo. The company has developed a physically based, unbiased stand-alone renderer, written in C++, with a real-time DirectX 12 preview. It runs on 64-bit Windows 10 and requires SSE4. The program uses AMD Radeon Rays, and since version 2.0 it also uses Intel's Embree.

In the process of releasing the latest version of Nebula Render, the company created a benchmark to see how its software performed compared with the competition. Nebula built a test scene using a bathroom model by artist Nacimus (see above). The scene features glossy reflections, mirror reflections, and soft shadows from a single spherical light.

Nebula tested both CPU and GPU solutions. GPUs are generally faster than CPUs on simple scenes, while the opposite can be true for complex ones. The test model has average complexity (986K polygons), so both types of hardware can perform well. Also, only unbiased solutions were considered: the biased solutions tried were very fast (real-time), but the quality difference compared with pure ray tracers was too noticeable.

Nebula ran two test scenarios: renders stopped at 4 minutes and at 20 minutes.

The scene was rendered at 1080 × 720. If a program did not support that resolution, perhaps because of licensing, the render time was scaled linearly from the resolution that was available.
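The linear adjustment described here presumably normalizes render time by pixel count. A minimal sketch of that idea, assuming time scales with the number of pixels (the function name and example numbers are illustrative, not Nebula's actual tooling):

```python
def scale_render_time(measured_seconds, measured_res, target_res=(1080, 720)):
    """Estimate render time at target_res from a time measured at measured_res.

    Assumes render time is roughly proportional to pixel count, which is
    the linear scaling the benchmark describes.
    """
    measured_pixels = measured_res[0] * measured_res[1]
    target_pixels = target_res[0] * target_res[1]
    return measured_seconds * target_pixels / measured_pixels

# Example: a render measured at 960x640 taking 300 s is estimated
# at 1080 x 720 as 300 * (1080*720) / (960*640) = 379.6875 s.
print(scale_render_time(300, (960, 640)))
```

This is only a first-order estimate; in practice per-frame overhead (scene build, BVH construction) does not scale with resolution.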

The maximum ray tracing depth was set to 4 bounces when the program offered such a setting.

Where a program offered both CPU and GPU modes, the faster one on the test hardware was chosen.

Soft shadows and glossy reflections were produced with a single area light. When a program did not support spherical lights, an emitter material was used instead.

Materials and lighting were also adjusted. The company tweaked materials, lights, and the camera so the different renders would have a similar look. When possible, the light falloff was edited.

No ambient occlusion was employed because the settings varied too much between programs.

All tests were run unbiased and at maximum quality; light caches were not used, and Russian roulette was disabled when possible.
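For context, Russian roulette probabilistically terminates low-contribution light paths while keeping the estimator unbiased by reweighting the survivors; disabling it forces every path to full depth, trading render time for lower variance. A minimal sketch of the technique, not taken from any of the renderers tested (the survival-probability heuristic is an assumption for illustration):

```python
import random

def maybe_terminate(throughput, min_depth, depth):
    """Russian roulette path termination.

    After min_depth bounces, kill the path with probability 1 - p and
    divide the survivor's throughput by p, so the expected contribution
    is unchanged (p * throughput / p == throughput) and the estimator
    stays unbiased.
    """
    if depth < min_depth:
        return throughput, False          # always continue early bounces
    p = min(max(throughput, 0.05), 1.0)   # survival probability heuristic
    if random.random() >= p:
        return 0.0, True                  # path terminated
    return throughput / p, False          # reweight the survivor
```

Because termination is random, it adds noise; that is why a benchmark chasing maximum quality at a fixed time budget might prefer it off.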

The ray-tracing programs tested were:

  • Arnold 3.2.65 (CPU, 3ds Max)
  • Corona Renderer 5 (CPU, 3ds Max)
  • Cycles (CPU, Blender 2.8)
  • Nebula Render 2.1 (CPU)
  • Maverick Studio 400.420 (GPU)
  • Maxwell Render 5 (GPU, 3ds Max)
  • Octane 4 (GPU, 3ds Max)
  • Owlet 1.7.1 (CPU)
  • ProRender 2.0 (GPU, Blender 2.8)

 

Some notable programs are missing because of licensing issues or outdated plugins.

The hardware platform used for the testing consisted of an Intel Core i7-6700HQ at 2.60 GHz (4 cores), 16 GB RAM, and either an Nvidia GTX 960M (1,096 MHz, 640 CUDA cores, 4 GB RAM) or the Intel HD Graphics 530 (unused, since the renderers supporting multi-GPU were CUDA based).

Nebula has a library of 18 images of the resulting renders after 4 minutes and 20 minutes. The overall results are summarized below.

Each engine was rated on bright area convergence, shadow convergence, and glossy reflections (judged on the highlights of the sink compared with the whole render):

  • Arnold: bright areas Fast, shadows Slow, glossy reflections Great.
  • Corona: bright areas Fast, shadows Very fast, glossy reflections show too many visible artifacts.
  • Cycles: bright areas Fast, shadows Average, glossy reflections Great, though small artifacts are visible across the surface.
  • Maverick: bright areas Slow, shadows Average, glossy reflections show artifacts around the main highlight.
  • Maxwell: bright areas Average, shadows Slow, glossy reflections N/A (Nebula was not able to get the same highlight as the other software).
  • Nebula: bright areas Average, shadows Fast, glossy reflections Great, though small artifacts are visible across the surface.
  • Octane: bright areas Very fast, shadows Very fast, glossy reflections N/A (the light had to be set very low due to the high default falloff, and no highlight peak could be observed in that case).
  • Owlet: bright areas Very slow, shadows Slow, glossy reflections Correct.
  • ProRender: bright areas Very fast, shadows Average, glossy reflections Correct.

 

Ranking (best to worst):

  • Bright area convergence: ProRender, Octane, Cycles, Corona, Arnold, Nebula, Maxwell, Maverick, Owlet
  • Shadow convergence: Corona, Octane, Nebula, ProRender, Cycles, Maverick, Maxwell, Arnold, Owlet
  • Glossy reflections: Cycles, Arnold, Nebula, ProRender, Owlet, Maverick, Corona

 

Nebula did not make a quality/beauty ranking public because it can be very subjective. However, perhaps unsurprisingly, the company is confident that image quality is one of Nebula's strengths.

Conclusion

The results are quite interesting and give valuable information about Nebula's strengths and weaknesses, as well as those of some of the competition. Nebula has made the files available, and readers are invited to send updated settings for a particular engine (until December 30) if they think its results can be improved; just make sure the render still has a common look with the other engines and respects the metrics used. One could also send renders for all the software, but make sure the hardware is not customized to favor either the CPU or the GPU.