* When Baking wasn't used, we got an error.
* On top of Volume Nodes (NODES_FEATURE_VOLUME), we now also check whether we
need volume sampling code, so we can disable that as well and save some
further compilation time.
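Roughly, the check looks like this (a minimal sketch with stand-in types;
has_volume and the feature-collection helper are illustrative, not the exact
Cycles code):

    #include <vector>

    /* Minimal stand-in type for illustration. */
    struct Shader {
      bool has_volume; /* set while building the shader graph */
    };

    enum { NODES_FEATURE_VOLUME = 1 << 0 };

    /* Collect the features the scene really uses; anything not returned
     * here can be compiled out of the kernel. */
    int required_features(const std::vector<Shader> &shaders)
    {
      int features = 0;
      for (const Shader &shader : shaders) {
        if (shader.has_volume) {
          features |= NODES_FEATURE_VOLUME; /* volume sampling code needed */
        }
      }
      return features;
    }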
We no longer have vector re-allocation happening multiple times from inside
a loop, so we can safely switch to a memory-guarded allocator for vectors
and keep track of memory usage at various stages of rendering.
Additionally, when building from inside the Blender repository, Cycles will
use Blender's guarded allocator, so actual memory usage will be displayed
in the Info space header.
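A minimal sketch of such an allocator, assuming a plain malloc/free backend
(the real version would hook into the guarded allocation instead):

    #include <atomic>
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    /* Global byte counter that rendering stages can query. */
    static std::atomic<std::size_t> total_allocated{0};

    template<typename T> struct GuardedAllocator {
      using value_type = T;

      GuardedAllocator() = default;
      template<typename U> GuardedAllocator(const GuardedAllocator<U> &) {}

      T *allocate(std::size_t n)
      {
        total_allocated += n * sizeof(T);
        return static_cast<T *>(malloc(n * sizeof(T)));
      }

      void deallocate(T *p, std::size_t n)
      {
        total_allocated -= n * sizeof(T);
        free(p);
      }
    };

    template<typename T, typename U>
    bool operator==(const GuardedAllocator<T> &, const GuardedAllocator<U> &)
    {
      return true;
    }
    template<typename T, typename U>
    bool operator!=(const GuardedAllocator<T> &, const GuardedAllocator<U> &)
    {
      return false;
    }

    /* Cycles-side vectors then simply use the guarded allocator. */
    template<typename T> using vector = std::vector<T, GuardedAllocator<T>>;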
There are a couple of tricky aspects to the patch:
- TaskScheduler::exit() now explicitly frees the memory used by `threads`.
This is needed because `threads` is a static member whose destructor
isn't called on Blender's exit, which caused a memory-leak print.
This shouldn't cause any measurable slowdown; reallocation of that
vector is only one of countless other allocations happening during
synchronization.
- Use the regular guarded malloc (not the aligned one). It's not clear
why it was made aligned in the first place, perhaps for some corner-case
tests. Vectors were never expected to be aligned anyway. Let's see
whether we get actual bugs from this.
Reviewers: dingto, lukasstockner97, juicyfruit, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1774
When World MIS is enabled by the user, we now check whether we actually need it.
In the case of a simple node setup (no procedural textures, no HDRs...) we
auto-disable MIS internally to save render time.
This change is important for upcoming default changes.
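A sketch of the decision (stand-in types; the real check walks the
background shader graph):

    #include <vector>

    /* Stand-in node: whether it makes the background vary over direction
     * (image/environment textures, sky, procedurals, ...). */
    struct ShaderNode {
      bool spatially_varying;
    };

    bool world_mis_needed(const std::vector<ShaderNode> &background_nodes)
    {
      for (const ShaderNode &node : background_nodes) {
        if (node.spatially_varying) {
          return true; /* importance sampling the map pays off */
        }
      }
      return false; /* simple/constant background: disable MIS internally */
    }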
The patch from be28706 made the integrator use the last shader's transparent-
shadow flag, which is wrong, since the last shader might not have transparent
shadows while shaders prior to it might.
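The fix, in sketch form (stand-in types): the flag has to be accumulated over
all shaders instead of taking whatever the last iterated shader reports.

    #include <vector>

    struct Shader {
      bool has_surface_transparent; /* stand-in for the per-shader flag */
    };

    bool use_transparent_shadows(const std::vector<Shader> &shaders)
    {
      bool transparent = false;
      for (const Shader &shader : shaders) {
        transparent |= shader.has_surface_transparent; /* OR, not assign */
      }
      return transparent;
    }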
Basically, we cannot use a sharp closure as a substitute when filter glossy
is used, because we cannot blur a sharp reflection/refraction.
This is a quick and not really clean implementation. Not really happy with
the manual handling of the original settings, but this is as good as we can
do in a quick patch. It's a good reminder, and we can now reconsider some
aspects of graph simplification to make such cases more natively supported.
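In sketch form, the guard this adds (threshold and names are illustrative):

    /* A near-sharp glossy closure may only be folded into a truly sharp
     * one when Filter Glossy is off, since sharp reflections cannot be
     * blurred afterwards. */
    bool can_substitute_sharp_closure(float roughness, float filter_glossy)
    {
      const float roughness_threshold = 1e-4f;
      return roughness <= roughness_threshold && filter_glossy == 0.0f;
    }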
P.S. This failure would have been caught by our regression tests, so please
take the time to run Cycles' test sweep before doing such optimizations.
The goal is to be able to compile the kernel with only the nodes which are
actually needed to render the current scene, hence improving kernel
performance.
The idea is:
- Have a few node groups, starting with a group which contains nodes that are
used really often, and then a couple of groups which extend this one.
- Have feature-based node disabling, so it's possible to disable nodes related
to features which are not used with the currently used node group.
This commit only lays down the routines needed for this approach; the actual
split will happen later, after gathering statistics from a bunch of
production scenes.
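A rough sketch of what the kernel-side disabling can look like (macro names
modeled after NODES_FEATURE_VOLUME above; the actual groups will follow from
the statistics):

    /* Each optional feature gets a bit. */
    #define NODES_FEATURE_VOLUME (1 << 0)
    #define NODES_FEATURE_HAIR (1 << 1)

    /* Kernels compiled for a specific scene define __NODES_FEATURES__ to
     * the bitmask of features the scene uses; the full kernel keeps all. */
    #ifdef __NODES_FEATURES__
    #  define NODES_FEATURE(feature) ((__NODES_FEATURES__ & (feature)) != 0)
    #else
    #  define NODES_FEATURE(feature) 1
    #endif

    /* Node evaluation can then be compiled out entirely:
     * if (NODES_FEATURE(NODES_FEATURE_VOLUME)) { ...volume nodes... } */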
Now we calculate the color in the range 800..12000 K using the approximation
a/t + b*t + c for R and G, and the cubic ((a*t + b)*t + c)*t + d for B.
The maximum absolute RGB error of the non-LUT function is less than 0.0001,
which is enough to get the same 8-bit/channel color as for OSL, with a
noticeable performance difference.
However, there is a slight visible difference from the previous non-OSL
implementation, because of its lookup-table interpolation and an off-by-one
mistake.
The previous implementation gave black outside of the soft range (t > 12000);
now it gives the same color as for 12000.
Also, a blackbody node without its input connected is converted to a value
input at shader compile time.
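For reference, the two forms written out (coefficients are placeholders; the
kernel fits separate ones per sub-range of 800..12000 K):

    /* R and G channels: rational form a/t + b*t + c. */
    float channel_rational(float a, float b, float c, float t)
    {
      return a / t + b * t + c;
    }

    /* B channel: cubic evaluated in Horner form ((a*t + b)*t + c)*t + d. */
    float channel_cubic(float a, float b, float c, float d, float t)
    {
      return ((a * t + b) * t + c) * t + d;
    }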
Reviewers: dingto, sergey
Reviewed By: dingto
Subscribers: nutel, brecht, juicyfruit
Differential Revision: https://developer.blender.org/D1280
There were two major problems with the interactivity of material previews:
- Beckmann tables were re-generated on every material tweak.
This is because the preview scene is not set to be persistent, so
re-triggering the render leads to a full scene re-sync.
- Images could take a rather noticeable time to load from disk with OIIO
on every tweak.
This patch addresses these two issues in the following way:
- Beckmann tables are now static in CPU memory.
They're only a couple of hundred kilobytes, so this is not expected to be
an issue, and they're needed for almost every render anyway.
This actually also makes the blackbody table static, but it's even smaller
than the Beckmann table.
Not totally happy with this approach, but the alternatives seem to complicate
things quite a bit with all the render-engine lifetime handling and so on.
- For preview rendering, all images are considered built-in. This means that
instead of going through OIIO, which re-loads images on every re-render,
they come from the ImBuf cache, which is fully manageable from Blender's
side, and unused images get freed later.
This makes it impossible to have mipmapping with OSL for now, but we'll be
working on that later anyway, and mipmaps don't seem really crucial for
the material preview.
This seems to be a better alternative to making the preview scene persistent,
because of the much better memory control from Blender's side.
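A minimal sketch of the static-table approach, assuming a build function that
does the expensive precomputation (names illustrative):

    #include <mutex>
    #include <vector>

    /* Built once per process, shared by all render sessions. */
    static std::vector<float> beckmann_table;
    static std::mutex beckmann_table_mutex;

    /* Stand-in for the expensive precomputation. */
    static void beckmann_table_build(std::vector<float> &table)
    {
      table.resize(1024, 0.0f); /* placeholder contents */
    }

    const std::vector<float> &beckmann_table_get()
    {
      std::lock_guard<std::mutex> lock(beckmann_table_mutex);
      if (beckmann_table.empty()) {
        beckmann_table_build(beckmann_table); /* only on the first render */
      }
      return beckmann_table;
    }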
Reviewers: brecht, juicyfruit, campbellbarton, dingto
Subscribers: eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D1132
This is the same as Blender Internal's texture mapping from another object,
so this way it's possible to control the texture space of one object with
another.
It's quite a straightforward change, apart from the workaround for the
stupidity of the dependency graph. The shader now has a flag telling that it
depends on an object transform; this is the simplest way to know which
shaders need to be tagged for update when an object changes. This might give
some false-positive tags for now, but reducing them should not be a priority
for Cycles; the priority should rather be bringing in the new dependency
graph.
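In sketch form (stand-in types), the tagging is simply:

    #include <vector>

    struct Shader {
      bool has_object_dependency; /* graph samples another object's transform */
      bool need_update;
    };

    /* On an object transform change, re-tag only shaders that can be
     * affected. This may over-tag, but never misses an update. */
    void tag_shaders_for_object_update(std::vector<Shader> &shaders)
    {
      for (Shader &shader : shaders) {
        if (shader.has_object_dependency) {
          shader.need_update = true;
        }
      }
    }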
Also, the GLSL preview does not support using another object for mapping.
This is actually the case for BI shading as well, and it is to be addressed
as part of general GLSL viewport improvements, since it's not really clear
how to support this in GLSL.
Reviewers: brecht, juicyfruit
Subscribers: eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D1021
This commit makes the blackbody and Beckmann lookup tables stay in CPU memory
after being generated, to be copied to the device only when needed.
This solves the lag in viewport updates when tweaking a shader tree, at the
cost of 266KB of CPU memory.
The issue was caused by a wrong order of scene device updates, which could
lead to missing object flags in the shader kernel.
This patch solves a bit more than that, making sure object flags are always
properly updated, so adding/removing a volume BSDF is properly reflected in
the viewport, where the camera might now find itself inside a volume.
Create a unique flag for output shaders with displacement data and use it
to calculate the transformed normal. Implementation suggested by Brecht Van
Lommel.
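A sketch of the flag propagation (the flag name here is illustrative):

    enum ShaderFlag { SD_HAS_DISPLACEMENT = 1 << 0 };

    /* Set during shader sync when the output node has displacement
     * connected; the kernel checks it before recomputing the normal. */
    int shader_flags(bool has_displacement)
    {
      return has_displacement ? SD_HAS_DISPLACEMENT : 0;
    }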
Reviewers: brecht
Differential Revision: https://developer.blender.org/D890
It is a per-material setting which can be found under the Volume settings
in the material and world context buttons.
There could still be some code-wise improvements, like using a variadic
macro for interp3d instead of having interp3d_ex, to which you can pass the
interpolation method.
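A sketch of the two entry points as described (signatures and bodies are
stand-ins):

    enum InterpolationType { INTERPOLATION_LINEAR, INTERPOLATION_CUBIC };

    /* The _ex variant takes the interpolation method explicitly. */
    static float interp3d_ex(float x, float y, float z, InterpolationType interp)
    {
      (void)x; (void)y; (void)z; (void)interp;
      return 0.0f; /* real code samples the volume with the given method */
    }

    /* The plain variant keeps the old behavior of a default method. */
    static float interp3d(float x, float y, float z)
    {
      return interp3d_ex(x, y, z, INTERPOLATION_LINEAR);
    }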
Fix T41013: OSL and Crash
Fix T40989: Intermittent crash clicking material color selector
The issue was caused by insufficient precision of the inversion threshold.
We now use double precision for this threshold. We might want to investigate
this code a bit further: the stock implementation uses doubles for all
computation, and using floats might, in theory, be the cause of the bad row
distribution.
It turns out that the new Beckmann sampling function doesn't work well with
Quasi Monte Carlo sampling, mainly near normal incidence where it can be worse
than the previous sampler. In the new sampler the random number pattern gets
split in two, warped and overlapped, which hurts the stratification, see the
visualization in the differential revision.
Now we use a precomputed table, which is much better behaved. GGX does not seem
to benefit from using a precomputed table.
The disadvantage is that this table adds 1MB of memory usage and 0.03s
startup time to every render (on my quad-core CPU).
Differential Revision: https://developer.blender.org/D614
This gives you "Multiple Importance", "Distance" and "Equiangular" choices.
What multiple importance sampling does is make things more robust to certain
types of noise at the cost of a bit more noise in cases where the individual
strategies are always better.
So if you've got a pretty dense volume that's lit from far away then distance
sampling is usually more efficient. If you've got a light inside or near the
volume then equiangular sampling is better. If you have a combination of both,
then the multiple importance sampling will be better.
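For reference, a two-strategy combination like this typically uses the
balance heuristic to weight the estimates (general formula, not the exact
Cycles code):

    /* Weight for the distance-sampling estimate; the equiangular estimate
     * gets the complementary weight 1 - w. */
    float mis_balance_weight(float pdf_distance, float pdf_equiangular)
    {
      return pdf_distance / (pdf_distance + pdf_equiangular);
    }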
In practice this means that if you don't connect a texture to your volume
nodes, Cycles will figure that out and render the volume faster, rather than
you having to specify it manually.
The main weakness is custom OSL nodes, where we have to assume the volume is
heterogeneous because we don't know what kind of data the node accesses.
Volumes can now have textured colors and density. There is a Volume Sampling
panel in the Render properties with these settings:
* Step size: distance between volume shader samples when rendering the volume.
Lower values give more accurate and detailed results, but also increase render
time.
* Max steps: maximum number of steps through the volume before giving up, to
protect against extremely long render times with big objects or small step
sizes. The two settings interact as in the sketch below.
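How the two settings interact, as a sketch:

    #include <algorithm>
    #include <cmath>

    /* The ray segment through the volume is divided by the step size and
     * capped at the maximum step count. */
    int volume_step_count(float t_near, float t_far, float step_size,
                          int max_steps)
    {
      int steps = (int)std::ceil((t_far - t_near) / step_size);
      return std::min(steps, max_steps);
    }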
This is much more compute-intensive than a homogeneous volume, so when you are
not using a texture you should enable the Homogeneous Volume option in the
material or world for faster rendering.
One important missing feature is that Generated texture coordinates are not yet
working in volumes, and they are the default coordinates for nearly all texture
nodes. So until that works you need to plug in object texture coordinates or a
world space position.
This is work by "storm", Stuart Broadfoot, Thomas Dinges and myself.
New features:
* Bump mapping now works with SSS
* Texture Blur factor for SSS, see the documentation for details:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#Subsurface_Scattering
Work in progress for feedback:
Initial implementation of the "BSSRDF Importance Sampling" paper, which uses
a different importance sampling method. It gives better quality results in
many ways, with the availability of both Cubic and Gaussian falloff functions,
but also tends to be more noisy when using the progressive integrator and does
not give great results with some geometry. It works quite well for the
non-progressive integrator and is often less noisy there.
This code may still change a lot, so unless you're testing it may be best to
stick to the Compatible falloff function.
Skin test render and file that takes advantage of the gaussian falloff:
http://www.pasteall.org/pic/show.php?id=57661
http://www.pasteall.org/pic/show.php?id=57662
http://www.pasteall.org/blend/23501
* Added a node to convert a temperature in Kelvin to an RGB color. This can be used e.g. for lights, to easily find the right color temperature.
= Some common temperatures =
Candle light: 1500 Kelvin
Sunset/Sunrise: 1850 Kelvin
Studio lamps: 3200 Kelvin
Horizon daylight: 5000 Kelvin
Documentation: http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/More#Blackbody
Thanks to Philipp Oeser (lichtwerk), who essentially contributed to this with a patch! :)
This is part of my GSoC 2013 project. SVN merge of r57424, r57487, r57507, r57525, r58253 and r58774