This is done using the existing Emission node and closure (we may add a dedicated
volume emission node; it is not yet clear whether one will be needed).
Volume emission only supports indirect light sampling, which means it is not very
efficient for small or faraway bright light sources. Using direct light
sampling and MIS would be tricky and probably won't be added anytime soon. As far
as I know other renderers don't support this either; lamps and ray visibility
tricks can be used instead.
This works pretty much as you would expect: overlapping volume objects give
a denser volume. What has changed is that world volume shaders are now
active everywhere; they are no longer excluded inside objects.
This may not always be desirable, and we need to think about better control over it.
In some cases you clearly want it to happen, for example when rendering
a fire in a foggy environment. In other cases, like the inside of a house, you
may not want any fog, but it doesn't seem possible in general for the renderer
to automatically determine what is inside or outside of the house.
This is implemented using a simple fixed-size array of shader/object ID pairs,
limited to a maximum of 15 overlapping objects. The closures from all shaders are
put into a single closure array, exactly as if an Add Shader node had been used to
combine them.
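As a rough illustration, such a stack could look like the following sketch (names
are illustrative, not the actual Cycles declarations; it assumes a shader value
of -1 marks the end of the stack):

    #include <stdbool.h>

    #define VOLUME_STACK_SIZE 16  /* 15 overlapping objects + terminator */

    typedef struct VolumeStackEntry {
        int object;  /* object ID */
        int shader;  /* shader ID, -1 terminates the stack */
    } VolumeStackEntry;

    /* On transmission through a surface: push an entry when entering a
     * volume object, remove it when exiting. */
    void volume_stack_enter_exit(VolumeStackEntry stack[VOLUME_STACK_SIZE],
                                 int object, int shader, bool entering)
    {
        int i;

        if (entering) {
            /* find the terminator, bail out if the stack is full */
            for (i = 0; stack[i].shader != -1; i++)
                if (i >= VOLUME_STACK_SIZE - 2)
                    return;

            stack[i].object = object;
            stack[i].shader = shader;
            stack[i + 1].shader = -1;
        }
        else {
            /* find the matching entry and shift the rest down */
            for (i = 0; stack[i].shader != -1; i++) {
                if (stack[i].object == object) {
                    do {
                        stack[i] = stack[i + 1];
                        i++;
                    } while (stack[i - 1].shader != -1);
                    return;
                }
            }
        }
    }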
This is the simplest possible volume rendering case: constant density inside
the volume and no scattering or emission. My plan is to tweak, verify and commit
more volume rendering effects one by one; doing it all at once makes it
difficult to verify correctness and track down bugs.
Documentation is here:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Materials/Volume
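With constant density and no scattering or emission, the attenuation over a ray
segment reduces to the Beer-Lambert law. A minimal sketch with illustrative
names (the real kernel uses its own float3 type):

    #include <math.h>

    typedef struct { float x, y, z; } float3_t;

    /* Beer-Lambert transmittance of a homogeneous, absorption-only volume:
     * sigma_a is the absorption coefficient per color channel, t is the
     * length of the ray segment inside the volume. */
    float3_t volume_transmittance(float3_t sigma_a, float t)
    {
        float3_t tr;
        tr.x = expf(-sigma_a.x * t);
        tr.y = expf(-sigma_a.y * t);
        tr.z = expf(-sigma_a.z * t);
        return tr;  /* multiplied into the path throughput */
    }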
Currently this hooks into path tracing in 3 ways, which should get us pretty
far until we add more advanced light sampling. These 3 hooks are repeated in
the path tracing, branched path tracing and transparent shadow code:
* Determine active volume shader at start of the path
* Change active volume shader on transmission through a surface
* Light attenuation over line segments between camera, surfaces and background
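As an illustration of the last hook, attenuating the throughput over a segment
could look roughly like this, reusing the sketches above (shader_volume_absorption
is an assumed helper for this example, not a real Cycles function):

    /* assumed helper: evaluate the constant absorption coefficient of a
     * volume shader (hypothetical, for illustration only) */
    float3_t shader_volume_absorption(int shader);

    /* Attenuate the path throughput over a segment of length t by the
     * transmittance of every volume currently on the stack. */
    void volume_attenuate_segment(const VolumeStackEntry *stack, float t,
                                  float3_t *throughput)
    {
        for (int i = 0; stack[i].shader != -1; i++) {
            float3_t sigma_a = shader_volume_absorption(stack[i].shader);
            float3_t tr = volume_transmittance(sigma_a, t);
            throughput->x *= tr.x;
            throughput->y *= tr.y;
            throughput->z *= tr.z;
        }
    }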
This is work by "storm", Stuart Broadfoot, Thomas Dinges and myself.
This avoids build conflicts with libc++ on FreeBSD; identifiers with a __ prefix
are reserved for the compiler. I apologize to anyone who has patches or branches
and now has to go through the pain of merging this change; it may be easiest to do
these same replacements in your code and then apply/merge the patch.
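For illustration, the pattern of the change looks like this (the exact macros
differ per kernel target; this example is from memory and may not match the code
exactly):

    /* before: __ prefixed names are reserved for the compiler and the
     * standard library, and clashed with libc++ on FreeBSD */
    #define __device static inline

    /* after: a project-specific prefix stays out of the reserved namespace */
    #define ccl_device static inline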
Ref T37477.
* Remove the Compatible falloff SSS implementation. We shouldn't support two implementations in the long term, and 2.7x is a good release number to break some compatibility as well.
* Version patch added, so files with Compatible falloff will automatically use Cubic now.
The manual already mentions that Compatible is deprecated:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#BSSRDF
* Avoid special-case code when Subsurface is enabled.
Ideally we would use only the one function and get rid of the extra duplicate, but that is slower on CUDA.
New features:
* Bump mapping now works with SSS
* Texture Blur factor for SSS, see the documentation for details:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#Subsurface_Scattering
Work in progress for feedback:
Initial implementation of the "BSSRDF Importance Sampling" paper, which uses
a different importance sampling method. It gives better quality results in
many ways, with the availability of both Cubic and Gaussian falloff functions,
but it also tends to be noisier when using the progressive integrator and does
not give great results with some geometry. It works quite well with the
non-progressive integrator and is often less noisy there.
This code may still change a lot, so unless you're testing it may be best to
stick to the Compatible falloff function.
Skin test render and file that take advantage of the Gaussian falloff:
http://www.pasteall.org/pic/show.php?id=57661
http://www.pasteall.org/pic/show.php?id=57662
http://www.pasteall.org/blend/23501
* GPU kernel can now be compiled without __NON_PROGRESSIVE__ again; this was broken after my last commit. Also add a check for have_error(), to avoid a crash in case the GPU kernel is built without the Non-Progressive integrator.
* Don't compile the progressive kernel twice on the CPU if __NON_PROGRESSIVE__ is disabled there.
* Non-Progressive integrator is now available on the GPU (CUDA, sm_20 and above).
Implementation details:
* kernel_path_trace() has been split up into two functions:
kernel_path_trace_non_progressive() and kernel_path_trace_progressive().
* We compile two CUDA kernel entry functions (in kernel.cu) for the two integrators; they are still inside one .cubin file, but due to the kernel separation there should be no performance problem. I tested with the BMW file on my Geforce 540M and the render times were the same for 100 samples (1.57 min in my case).
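The entry point split can be sketched like this (simplified; the argument lists
of the real kernel.cu entry points are omitted and the signatures here are
illustrative):

    /* two separate CUDA entry points, one per integrator; both still end
     * up in the same .cubin */
    extern "C" __global__ void
    kernel_cuda_path_trace_progressive(float *buffer, int sample,
                                       int sx, int sy, int sw, int sh)
    {
        int x = sx + blockIdx.x * blockDim.x + threadIdx.x;
        int y = sy + blockIdx.y * blockDim.y + threadIdx.y;

        if (x < sx + sw && y < sy + sh)
            kernel_path_trace_progressive(buffer, sample, x, y);
    }

    extern "C" __global__ void
    kernel_cuda_path_trace_non_progressive(float *buffer, int sample,
                                           int sx, int sy, int sw, int sh)
    {
        int x = sx + blockIdx.x * blockDim.x + threadIdx.x;
        int y = sy + blockIdx.y * blockDim.y + threadIdx.y;

        if (x < sx + sw && y < sy + sh)
            kernel_path_trace_non_progressive(buffer, sample, x, y);
    }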
This is part of my GSoC project, SVN merge of r59032 + manual merge of UI changes for this from my branch.
* Render Passes are now available for Subsurface Scattering (Direct, Indirect and Color pass).
This is part of my GSoC project, SVN merge of r58587, r58828 and r58835.
* Added a Ray Depth output to the Light Path node, which gives the user access to the current bounce.
This can be used to limit the maximum ray bounce on a per-shader basis. Another use case is to restrict light influence, for example to have a lamp contribute only to direct lighting.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/More#Light_Path
This is part of my GSoC 2013 project. SVN merge of r58091 and r58772 from soc-2013-dingto.
* Avoid check for !LABEL_TRANSPARENT in "kernel_path_non_progressive_lighting"; transparency is either handled in the outer loop or in the "kernel_path_indirect" function, but not here.
and sm_30 cards, so hopefully it should all work now.
Also includes some warnings fixes related to nvcc compiler arguments, should make
no difference otherwise.
This adds an option to use a correlated multi-jittered sampling pattern instead of
Sobol. So far neither seems to be consistently better or worse than
the other for the same number of samples, but more testing is needed.
The random number generator itself is slower than Sobol for most sample counts,
except 16, 64, 256, ..., because those can be computed faster. This can probably be
optimized, but we can do that when/if this actually turns out to be useful.
Paper this implementation is based on:
http://graphics.pixar.com/library/MultiJitteredSampling/
Also includes some refactoring of RNG code, fixing a Sobol correlation issue with
the first BSDF and < 16 samples, skipping some unneeded RNG calls and using a
simpler unit square to unit disk function.
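The "simpler unit square to unit disk function" is presumably the standard polar
mapping, sketched below (simpler than the concentric mapping, at the cost of
slightly worse stratification; this is my reading, not necessarily the committed
code):

    #include <math.h>

    /* polar mapping from the unit square to the unit disk */
    void to_unit_disk(float u, float v, float *dx, float *dy)
    {
        float phi = 2.0f * (float)M_PI * u;
        float r = sqrtf(v);

        *dx = r * cosf(phi);
        *dy = r * sinf(phi);
    }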
* Revert r57203 (len() renaming)
There seems to be a problem with nVidia OpenCL after this and I haven't figured out the real cause yet.
Better to selectively enable native length() later, after figuring out what's wrong.
This fixes [#35612].
* Rename some math functions:
len -> length
len_squared -> length_squared
normalize_len -> normalize_length
* This way OpenCL uses its built-in length() function rather than our own. The other two functions have been renamed for consistency.
* Tested CPU, CUDA and OpenCL compile, should be no functional changes.
Now there is a single BVH traversal implementation with #ifdefs for the various
features. At runtime it selects the appropriate variation to use, depending on
whether instancing, hair or motion blur is in use.
This makes scenes without hair render a bit faster, especially after the
minimum width feature was added. It's not the most beautiful code, but we can't
use C++ templates, and there were already 4 copies; adding 4 more to handle the
hair case separately would have been too much.
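The pattern is the usual include-the-body-multiple-times trick; a sketch with
illustrative file, macro and function names:

    #include <stdbool.h>

    /* bvh_traversal_impl.h: the single shared traversal body, compiled
     * once per feature combination */
    static bool BVH_FUNCTION_NAME(void)
    {
        bool hit = false;
        /* ... shared BVH traversal loop ... */
    #ifdef BVH_INSTANCING
        /* ... push/pop object transform when entering an instance ... */
    #endif
    #ifdef BVH_HAIR
        /* ... curve intersection and minimum width handling ... */
    #endif
    #ifdef BVH_MOTION
        /* ... interpolate node bounds for motion blur ... */
    #endif
        return hit;
    }

    /* bvh.c: instantiate the variants ... */
    #define BVH_FUNCTION_NAME bvh_intersect
    #include "bvh_traversal_impl.h"
    #undef BVH_FUNCTION_NAME

    #define BVH_FUNCTION_NAME bvh_intersect_hair
    #define BVH_HAIR
    #include "bvh_traversal_impl.h"
    #undef BVH_HAIR
    #undef BVH_FUNCTION_NAME

    /* ... and select the variant matching the scene at runtime */
    static bool (*scene_bvh_intersect(bool use_hair))(void)
    {
        return use_hair ? bvh_intersect_hair : bvh_intersect;
    }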
Code is added to restrict the pixel size of strands in Cycles. It works best with ribbon primitives, and a preset for these is included. It uses distance-dependent expansion of the strands and then stochastic strand removal to give a fade-out. To prevent a slowdown for triangle mesh objects in the BVH, an extra visibility flag has been added. It is also only applied to camera rays.
The strand width settings have also changed, so that the particle size is no longer included in the width calculation. Instead there is a separate particle system parameter for width scaling.
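A sketch of the idea; the parameter names and exactly where this runs in the
kernel are assumptions on my part:

    #include <stdbool.h>
    #include <stdlib.h>

    /* Enforce a minimum on-screen strand width, for camera rays only.
     * pixel_size is the world-space footprint of one pixel at the strand's
     * distance from the camera. */
    float strand_min_width(float width, float pixel_size, float min_pixels,
                           bool *keep)
    {
        float min_width = min_pixels * pixel_size;

        *keep = true;
        if (width < min_width) {
            /* expand the strand to the minimum width, then remove strands
             * stochastically so the expected coverage stays the same,
             * which gives the distance-dependent fading */
            float keep_probability = width / min_width;

            if ((float)rand() / (float)RAND_MAX >= keep_probability)
                *keep = false;
            width = min_width;
        }
        return width;
    }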
This does not work as well as I would like, but it works: just add a subsurface
scattering node and you can use it like any other BSDF.
It uses fully raytraced sampling compatible with progressive rendering
and other more advanced rendering algorithms we might use in the future, and
it uses no extra memory, so it's suitable for complex scenes.
The disadvantage is that it can be quite noisy and slow. Two limitations that will
be solved are that it does not work with bump mapping yet, and that the falloff
function used is a simple cubic function; it's not using the real BSSRDF
falloff function yet.
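For reference, a cubic falloff of this kind, normalized over a disk of radius R
(my reading of the description, not necessarily the exact constant used in the
code), is:

    pdf(r) = 10 (R - r)^3 / (pi R^5),  0 <= r <= R

which follows from int_0^R 2 pi r (R - r)^3 dr = pi R^5 / 10.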
The node has a color input, a scattering radius for each RGB color channel, and
an overall scale factor for the radii.
There is also no GPU support yet; I will test if I can get that working later.
Node Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#BSSRDF
Implementation notes:
http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/Subsurface_Scattering