Sunday, May 21, 2017

Practical light field rendering tutorial with Cycles

This week Google announced "Seurat", a novel surface light field rendering technology developed in collaboration with ILMxLab, which promises to enable "real-time cinema-quality, photorealistic graphics" on mobile VR devices:


The technology captures the light rays in a scene by pre-rendering it from many different viewpoints. At runtime, entirely new viewpoints are created by interpolating between those pre-rendered views on the fly, resulting in photoreal reflections and lighting in real time (http://www.roadtovr.com/googles-seurat-surface-light-field-tech-graphical-breakthrough-mobile-vr/).

At almost the same time, Disney released a paper called "Real-time rendering with compressed animated light fields", demonstrating the feasibility of rendering a Pixar-quality 3D movie in real time, in which the viewer can actually be part of the scene and walk between scene elements or characters (along a predetermined camera path):


Light field rendering itself is not new: the technique has been around for more than 20 years, but has only recently become a viable rendering approach. The first paper, "Light field rendering" by Marc Levoy and Pat Hanrahan, was published at SIGGRAPH 1996, and the method has since been incrementally improved by others. Stanford University compiled an entire archive of light fields to accompany the 1996 SIGGRAPH paper, which can be found at http://graphics.stanford.edu/software/lightpack/lifs.html. A more up-to-date archive of photography-based light fields can be found at http://lightfield.stanford.edu/lfs.html

One of the first movies that showed a practical use for light fields is The Matrix from 1999, where an array of cameras firing at the same time (or in rapid succession) made it possible to pan around an actor to create a super slow motion effect ("bullet time"):

Bullet time in The Matrix (1999)

Rendering the light field

Instead of attempting to explain the theory behind light fields (for which there are plenty of excellent online sources), the main focus of this post is to show how to quickly get started with rendering a synthetic light field using Blender Cycles and some open-source plug-ins. If you're interested in a crash course on light fields, check out Joan Charmant's video tutorial below, which explains the basics of implementing a light field renderer:


The following video demonstrates light fields rendered with Cycles:



Rendering a light field is actually surprisingly easy with Blender's Cycles and doesn't require much technical expertise (besides knowing how to build the plugins). For this tutorial, we'll use a couple of open source plug-ins:

1) The first one is the light field camera grid add-on for Blender, made by Katrin Honauer and Ole Johanssen from Heidelberg University in Germany: 


This plug-in sets up a camera grid in Blender and renders the scene from each camera using the Cycles path tracing engine. Good results can be obtained with a grid of 17 by 17 cameras with a distance of 10 cm between neighbouring cameras. For high quality, a 33-by-33 camera grid with an inter-camera distance of 5 cm is recommended.

3-by-3 camera grid with their overlapping frustums
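For readers who prefer to see the idea in code, the sketch below builds a similar camera grid and renders one image per camera using the Blender 2.7x Python API. This is a minimal illustration, not the add-on's actual code; the camera names and output path are made up for the example.

    # Minimal sketch (not the add-on's code), Blender 2.7x Python API:
    # build an N-by-N grid of cameras and render the scene once per camera.
    import math
    import bpy

    GRID = 17        # cameras per side (17x17 = 289 viewpoints)
    SPACING = 0.05   # 5 cm between neighbouring cameras (assuming 1 Blender unit = 1 m)

    scene = bpy.context.scene
    offset = (GRID - 1) * SPACING / 2.0

    for row in range(GRID):
        for col in range(GRID):
            name = "lf_cam_%02d_%02d" % (row, col)
            cam = bpy.data.objects.new(name, bpy.data.cameras.new(name))
            # place the cameras on a regular grid in the XZ plane
            cam.location = (col * SPACING - offset, 0.0, row * SPACING - offset)
            # rotate so every camera looks along +Y, towards the scene
            cam.rotation_euler = (math.radians(90.0), 0.0, 0.0)
            scene.objects.link(cam)

    # render one 512x512 image per camera
    scene.render.resolution_x = scene.render.resolution_y = 512
    for cam in [o for o in scene.objects if o.name.startswith("lf_cam")]:
        scene.camera = cam
        scene.render.filepath = "//lightfield/%s.png" % cam.name
        bpy.ops.render.render(write_still=True)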

2) The second tool is the light field encoder and WebGL based light field viewer, created by Michal Polko, found at https://github.com/mpk/lightfield (build instructions are included in the readme file).

This plug-in takes all the images generated by the first plug-in and compresses them by keeping a number of key frames and encoding only the differences for the remaining intermediate frames. The viewer is WebGL-based and makes use of virtual texturing (similar to Carmack's megatextures) for fast, on-the-fly reconstruction of new viewpoints from the pre-rendered ones (via hardware-accelerated bilinear interpolation on the GPU).
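As a rough illustration of the reconstruction step (this is not the viewer's actual code, which does the equivalent work per tile in a shader on the GPU), a new viewpoint that falls between grid cameras can be approximated by bilinearly blending the four nearest pre-rendered views:

    # views: NumPy array of shape (rows, cols, H, W, 3) holding the pre-rendered images
    # (u, v): fractional camera coordinates inside the grid, e.g. u = 3.25, v = 7.5
    import numpy as np

    def reconstruct_view(views, u, v):
        u0, v0 = int(np.floor(u)), int(np.floor(v))
        u1 = min(u0 + 1, views.shape[1] - 1)
        v1 = min(v0 + 1, views.shape[0] - 1)
        fu, fv = u - u0, v - v0
        # blend horizontally, then vertically (bilinear interpolation)
        top    = (1.0 - fu) * views[v0, u0] + fu * views[v0, u1]
        bottom = (1.0 - fu) * views[v1, u0] + fu * views[v1, u1]
        return (1.0 - fv) * top + fv * bottom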


Results and Live Demo

A live online demo of the light field with the dragon can be seen here: 


You can change the viewpoint (within the limits of the original camera grid) and refocus the image in real time by clicking on it.




I rendered the Stanford dragon using a 17-by-17 camera grid with a distance of 5 cm between adjacent cameras. The light field was created by rendering the scene from 289 (17x17) different camera viewpoints, which took about 6 minutes in total (about 1 to 2 seconds of render time per 512x512 image on a good GPU). The 289 renders are then highly compressed (for this scene, the 107 MB batch of 289 images was compressed down to only 3 MB!). 

A depth map is also created at the same time, which enables on-the-fly refocusing of the image by interpolating information from several images.
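The refocusing itself boils down to a shift-and-average over the camera grid: each view is shifted in proportion to its offset from the grid centre and the chosen focal depth, so points at that depth line up while everything else blurs. Below is a minimal sketch of the idea (not the viewer's actual code; the real implementation uses the depth map to choose the shift):

    import numpy as np

    def refocus(views, focus_disparity):
        """views: array of shape (rows, cols, H, W, 3);
        focus_disparity: pixel shift per camera step for the desired focal depth."""
        rows, cols = views.shape[:2]
        cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
        acc = np.zeros(views.shape[2:], dtype=np.float64)
        for r in range(rows):
            for c in range(cols):
                dy = int(round((r - cy) * focus_disparity))
                dx = int(round((c - cx) * focus_disparity))
                # shift this view so the chosen focal plane aligns across all cameras
                acc += np.roll(views[r, c], shift=(dy, dx), axis=(0, 1))
        return acc / (rows * cols)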

A later tutorial will add a bit more freedom to the camera, allowing for rotation and zooming.

Friday, April 28, 2017

Cycles, the fastest GPU renderer thanks to new denoising algorithms

Cycles is Blender's native CPU/GPU renderer, originally created in early 2011 by Brecht van Lommel (who left the Blender Institute in 2014 to work on Solid Angle's Arnold, which was acquired last year by the innovation-crushing Autodesk Corp.). In the past six years, it has slowly but steadily become a fully featured, production-ready renderer, with motion blur, hair/fur rendering, OpenVDB volume rendering, Pixar's OpenSubdiv, Disney's Principled PBR shader, the GGX microfacet distribution, AOVs (arbitrary output variables, i.e. render passes), filmic tonemapping and support for Alembic scene importing.

A video showing the stunning realism that can be achieved with Cycles:



Even though Cycles has been open source since the beginning, the Blender Institute decided in August 2013 to change the license for the Cycles source code from a restrictive GPL license to a permissive Apache 2.0 license, which allows Cycles to be integrated into commercial projects.

Although Cycles started out as an unbiased renderer, it quickly adopted many biased tricks to drastically cut down render times, such as limiting the number of bounces for different ray types, blurring glossy reflections ("filter glossy"), and switching over to shooting ambient occlusion rays after a certain number of bounces is reached.
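For reference, most of these shortcuts are exposed as simple scene settings. The sketch below shows roughly where they live in the Cycles Python API (Blender 2.7x-era property names, which may differ slightly between versions; ao_bounces_render in particular is an assumption on my part):

    import bpy

    cycles = bpy.context.scene.cycles

    # limit the number of bounces per ray type
    cycles.max_bounces = 8
    cycles.diffuse_bounces = 2
    cycles.glossy_bounces = 2

    # "Filter Glossy": blur sharp glossy paths to suppress fireflies
    cycles.blur_glossy = 1.0

    # replace light transport with ambient occlusion after a few bounces
    bpy.context.scene.render.use_simplify = True
    cycles.ao_bounces_render = 3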

In recent months, Lukas Stockner, one of Cycles' developers (who was also responsible for adding light portals and IES light profile support), implemented a few remarkable noise reduction algorithms based on very recent research, which will certainly turn many rendering heads. Two features in particular have been added that reduce render times by 8x on average: scramble distance (which takes the randomness out of sampling and traces rays in a fully coherent way) and a noise filtering algorithm based on weighted local regression. The noise filter has been in development for over a year and has been available in experimental Cycles builds for beta testing. It's currently under final review and is ready to be merged into the Cycles master branch any day. The Blender community is going wild, and for good reason: the new denoiser delivers exceptional results, preserving texture detail at very low sample counts and render times:
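In the experimental builds the denoiser is switched on per render layer; here is a minimal sketch from Python, assuming the property names exposed by those builds (they could still change before the feature reaches master):

    import bpy

    layer = bpy.context.scene.render.layers.active
    layer.cycles.use_denoising = True
    layer.cycles.denoising_radius = 8        # half-size of the filter window, in pixels
    layer.cycles.denoising_strength = 0.5    # how aggressively noise is smoothed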

Full HD render (1920x1080), fully denoised at 50 samples on a single GTX 1070. Render time: 1m24s (image from the Blender Artists forum)
Final denoised and colour-corrected render, 1m25s (from the Blender Artists forum)
Some of my own tests using one GPU:

20 samples, no denoising, render time 3m28s

20 samples, denoised, render time 4m09s

200 samples, no denoising, render time 31m58s

The new version of Cycles with built-in denoising will run on the CPU as well as on GPUs from Nvidia and AMD. Experimental builds for CUDA and OpenCL are available here.

Experimental OpenCL/CUDA build release notes:
  • OpenCL & CUDA GPU denoising (Lukas' latest denoising code)
  • Both CUDA and OpenCL supported
  • Multi-GPU denoising support (even in the viewport; confirmed working for CUDA, not yet tested with multiple OpenCL GPUs)
  • Scramble distance added for Sobol and multi-jitter sampling (works on CPU & GPU); also added to the supported features render tab
  • Blue-noise dithered Sobol with scramble distance
  • Thread divergence sort reduction patch (gives a 30% speedup in the Classroom scene and 8% in the Barcelona scene)
More information on the denoising algorithm can be found in this thread on the Blender Artists forum and Lukas Stockner's Wiki page:

Experimental Cycles denoising build thread

https://wiki.blender.org/index.php/User:Lukasstockner97/GSoC_2016/User_Documentation

With this groundbreaking denoiser, Cycles leapfrogs all other GPU renderers, and will soon make the dream of ultrafast photoreal rendering a reality for anyone.

Sunday, March 5, 2017

Web developer wanted

Our project is making great strides and we're currently looking for a top notch web developer to join our team.

Candidates for this role should have:

- a Bachelor's degree in Computer Science
- a minimum of 4 years of working experience with front-end and back-end web development (e.g. Node.js/npm, Rails, Go, Django, Ember.js, Angular.js, React.js, Bootstrap, jQuery)
- UI design skills (a plus)
- an unbounded passion for and hands-on experience with real-time and offline 3D graphics
- creative and original problem-solving skills
- a relentless hunger to learn more and become an expert in your field
- the ability to work independently
- a highly efficient, motivated, perfectionist and driven attitude, with heaps of initiative
- New Zealand residency, or a keenness to move to NZ (we will consider remote contractor work if you are one of a kind)

Send your cover letter and CV, with a link to your portfolio or GitHub page, to sam.lapere@live.be
Applications will close on 21 March.

Wednesday, January 11, 2017

OpenCL path tracing tutorial 3: OpenGL viewport, interactive camera and defocus blur

Just a link to the source code on GitHub for now; I'll update this post with a more detailed description when I find a bit more time:



Part 1: Setting up an OpenGL window

https://github.com/straaljager/OpenCL-path-tracing-tutorial-3-Part-1




Part 2: Adding an interactive camera, depth of field and progressive rendering

https://github.com/straaljager/OpenCL-path-tracing-tutorial-3-Part-2



Thanks to Erich Loftis and Brandon Miles for useful tips on improving the generation of random numbers in OpenCL to avoid the distracting artefacts (showing up as a sawtooth pattern) when using defocus blur (still not perfect but much better than before).
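The gist of the fix is to decorrelate the random sequences of neighbouring pixels, for instance by hashing the pixel index together with the frame number so every pixel/frame pair gets an independent seed instead of a shifted copy of the same sequence. Sketched here in Python for clarity (the tutorial code itself is OpenCL, and the exact hash it uses may differ):

    def wang_hash(seed):
        # 32-bit integer hash (Wang hash), used to turn a pixel/frame index into a seed
        seed = seed & 0xFFFFFFFF
        seed = (seed ^ 61) ^ (seed >> 16)
        seed = (seed * 9) & 0xFFFFFFFF
        seed = seed ^ (seed >> 4)
        seed = (seed * 0x27d4eb2d) & 0xFFFFFFFF
        seed = seed ^ (seed >> 15)
        return seed & 0xFFFFFFFF

    def pixel_seed(pixel_index, frame):
        # independent seed per pixel and per frame -> no sawtooth correlation
        return wang_hash(pixel_index * 9781 + frame * 6271)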

The next tutorial will cover rendering of triangles and triangle meshes.