emitting objects or world lighting do not contribute to the shadow pass.
Consider this more as a pass useful for some compositing tricks; unlike
the other lighting passes, this pass can't be used to exactly reconstruct the
combined pass.
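For example, one compositing trick (a hypothetical sketch, not taken from this entry; the exact pass semantics and image loading are left to the reader) is to multiply a photographic backplate by the shadow pass so the rendered shadows darken the plate:

    import numpy as np

    # Stand-in images; in practice load the backplate and the shadow pass
    # from files (e.g. with OpenImageIO) as float RGB arrays in [0, 1].
    plate = np.ones((1080, 1920, 3), dtype=np.float32)
    shadow = np.ones((1080, 1920, 3), dtype=np.float32)

    shadow_strength = 0.8  # how strongly the shadows darken the plate
    composite = plate * (1.0 - shadow_strength * (1.0 - shadow))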
existing "Equirectangular". This projection is useful to create light probes
from a chrome ball placed in a real scene. It expects as input a photograph of
the chrome ball, cropped so the ball just fits inside the image boundaries.
Example setup with panorama camera and mixing two (poor quality) photographs
from different viewpoints to avoid stretching and hide the photographer:
http://www.pasteall.org/pic/28036
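For reference, a minimal sketch of the usual mirror-ball mapping (the axis conventions used by Cycles are an assumption here): the pixel position gives the ball normal, and reflecting the viewing direction around that normal gives the environment direction stored at that pixel.

    import math

    def mirror_ball_to_direction(u, v):
        # (u, v) in [0, 1] on the cropped chrome-ball photo; returns a world
        # direction, or None outside the ball. Assumes an orthographic view
        # straight onto the ball along -Z; Cycles' conventions may differ.
        x = 2.0 * u - 1.0
        y = 2.0 * v - 1.0
        r2 = x * x + y * y
        if r2 > 1.0:
            return None
        nz = math.sqrt(1.0 - r2)          # ball normal faces the camera
        view = (0.0, 0.0, -1.0)           # camera looks along -Z
        d = view[2] * nz                  # view . normal
        # Reflect: r = view - 2 (view . n) n
        return (view[0] - 2.0 * d * x,
                view[1] - 2.0 * d * y,
                view[2] - 2.0 * d * nz)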
and 5 float image textures. For CPU render this limit will be lifted later
on with image cache support. Patch by Mike Farnsworth.
Also changed the color space option in the image/environment texture node to show
the options Color and Non-Color Data instead of sRGB and Linear. This is more
descriptive, and it was not really correct to equate Non-Color Data with
Linear.
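In practice the distinction matters because images marked as Color are converted from sRGB to linear before shading, while Non-Color Data (normal maps, masks, etc.) is used as-is. A minimal sketch of the standard sRGB-to-linear conversion that the Color setting implies (the exact conversion Cycles applies is an assumption here):

    def srgb_to_linear(c):
        # Standard sRGB decoding for a single channel value in [0, 1].
        if c <= 0.04045:
            return c / 12.92
        return ((c + 0.055) / 1.055) ** 2.4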
environment map, by enabling the Panorama option in the camera.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Camera#Panorama
The focal length and sensor settings are not used; the UI could still be tweaked to
communicate this. Panorama should probably also become a proper camera type like
perspective or ortho.
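As a rough sketch of what the panorama camera does for the equirectangular case (angle and axis conventions here are illustrative assumptions, not taken from the Cycles source), each image-plane position maps directly to a ray direction rather than going through focal length and sensor size:

    import math

    def equirectangular_direction(u, v):
        # (u, v) in [0, 1] across the image -> unit ray direction.
        phi = 2.0 * math.pi * (u - 0.5)    # longitude, -pi..pi
        theta = math.pi * (v - 0.5)        # latitude, -pi/2..pi/2
        return (math.cos(theta) * math.cos(phi),
                math.cos(theta) * math.sin(phi),
                math.sin(theta))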
Currently supported passes:
* Combined, Z, Normal, Object Index, Material Index, Emission, Environment,
Diffuse/Glossy/Transmission x Direct/Indirect/Color
Not supported yet:
* UV, Vector, Mist
Only enabled for CPU devices at the moment; GPU tweaks will be done tomorrow,
also for environment importance sampling.
Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Passes
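For scripted setups the passes can also be toggled per render layer from Python; the property names below follow the 2.6-era RNA and are an assumption, so verify them against your build:

    import bpy

    layer = bpy.context.scene.render.layers.active
    layer.use_pass_z = True
    layer.use_pass_normal = True
    layer.use_pass_object_index = True
    layer.use_pass_emit = True
    layer.use_pass_environment = True
    layer.use_pass_diffuse_direct = True
    layer.use_pass_glossy_indirect = True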
Contrast helps to adjust IBL (HDR images used for background lighting).
Note: in the UI we are calling it Bright instead of Brightness. This copies what the Blender compositor does.
Note 2: the algorithm we are using produces pure black when contrast is 100. I'm not a fan of that, but at that value the formula is a division by zero. I would like to look at other algorithms (what GIMP does, for example), but that would only be after 2.62.
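To make the contrast = 100 case concrete, here is a hypothetical sketch of a compositor-style remap whose gain term divides by zero at that value; it illustrates the behaviour described above and is not the actual Cycles code:

    def bright_contrast(value, brightness, contrast):
        # Hypothetical remap for one channel; contrast in -100..100.
        if contrast >= 100.0:
            return 0.0  # gain would divide by zero, so clamp to pure black
        gain = 1.0 / (1.0 - contrast / 100.0)
        return max(gain * (value - 0.5) + 0.5 + brightness / 100.0, 0.0)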
By default lighting from the world is computed solely with indirect light
sampling. However for more complex environment maps this can be too noisy, as
sampling the BSDF may not easily find the highlights in the environment map
image. By enabling this option, the world background will be sampled as a lamp,
with lighter parts automatically given more samples.
Map Resolution specifies the size of the importance map (res x res). Before
rendering starts, an importance map is generated by "baking" a grayscale image
from the world shader. This will then be used to determine which parts of the
background are light and so should receive more samples than darker parts.
Higher resolutions will result in more accurate sampling but take more setup
time and memory.
Patch by Mike Farnsworth, thanks!
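Conceptually, the baked map drives a discrete importance distribution: brighter texels receive a proportionally larger share of the samples. A minimal sketch of that idea (not the Cycles implementation, which builds conditional/marginal CDFs over the 2D map and accounts for solid angle):

    import bisect
    import random

    def build_cdf(importance):
        # Cumulative distribution over a flat list of texel importances.
        total = sum(importance)
        cdf, acc = [], 0.0
        for w in importance:
            acc += w / total
            cdf.append(acc)
        return cdf

    def sample_texel(cdf):
        # Pick a texel index with probability proportional to its importance.
        return min(bisect.bisect_left(cdf, random.random()), len(cdf) - 1)

    # A mostly dark map with one bright texel gets most of the samples there.
    cdf = build_cdf([0.1, 0.1, 5.0, 0.1])
    counts = [0, 0, 0, 0]
    for _ in range(10000):
        counts[sample_texel(cdf)] += 1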
disk to be reused by the next render.
This is useful for rendering animations where only the camera or materials change.
Note that if this is not the case and the meshes do actually change, saving the BVH
to disk only to remove it again for the next frame is slower.
For a render, it will save BVH files to the user cache directory and remove all
cache files from other renders. The files are named using an MD5 hash based on the
mesh, to verify that the meshes are still the same.
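A minimal sketch of that caching scheme (hypothetical paths and helpers, not the Cycles code): hash the mesh data, and reuse the cached BVH file if one with that hash already exists.

    import hashlib
    import os
    import pickle

    CACHE_DIR = os.path.expanduser("~/.cache/cycles_bvh")  # hypothetical location

    def mesh_hash(vertices, triangles):
        # MD5 over the raw mesh data, so an unchanged mesh maps to the same file.
        h = hashlib.md5()
        h.update(repr(vertices).encode())
        h.update(repr(triangles).encode())
        return h.hexdigest()

    def load_or_build_bvh(vertices, triangles, build_bvh):
        # build_bvh is a user-supplied builder function.
        os.makedirs(CACHE_DIR, exist_ok=True)
        path = os.path.join(CACHE_DIR, mesh_hash(vertices, triangles) + ".bvh")
        if os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f)      # reuse the BVH from a previous render
        bvh = build_bvh(vertices, triangles)
        with open(path, "wb") as f:
            pickle.dump(bvh, f)
        return bvh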
The rendering device is now set in User Preferences > System, where you can
choose between OpenCL/CUDA and the available devices. Per scene you can then still
choose to use CPU or GPU rendering.
Load balancing still needs to be improved; for now it just splits the entire
render in two. That will be done in a separate commit.
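For scripted setups this roughly corresponds to the following Python; the property names are from the 2.6-era API and are an assumption, so check them against your build:

    import bpy

    # The compute device type is a user preference...
    bpy.context.user_preferences.system.compute_device_type = 'CUDA'
    # ...while each scene still chooses between CPU and GPU rendering.
    bpy.context.scene.cycles.device = 'GPU'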
This allows group nodes inside other group nodes in cycles and makes the
code more generic for all possible cases, like direct group
input-to-output links and unused group sockets.
Previous code tried to connect external nodes and internal group sockets
by following links until a "real" node input/output. This quickly
becomes complicated in corner cases as described above and can lead to
unexpected behavior when the group socket is of a different type than
the internal/external sockets, because that type conversion then gets skipped.
The new code uses the concept of "proxy nodes" similar to what the new
compositor does. Each group socket is replaced with a proxy node with a
single input and output, to which other nodes in the same tree and
internal nodes can link. After all groups have been expanded in the
graph, these proxy nodes are removed again, adding converter nodes if
necessary.
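A simplified sketch of the proxy-node idea (data structures and helper callbacks here are hypothetical, not the Cycles C++ graph code): every group socket becomes a proxy with one source and many targets, and bypassing a proxy afterwards inserts a converter when the socket types differ.

    class ProxyNode:
        # Stands in for one group socket during group expansion.
        def __init__(self, socket_type):
            self.socket_type = socket_type
            self.source = None     # upstream (node, socket) feeding the group socket
            self.targets = []      # (target socket, target type) pairs reading from it

    def remove_proxy(proxy, connect, add_converter):
        # connect(src, dst) links two sockets; add_converter(src, to_type)
        # returns a converted source. Both are hypothetical callbacks supplied
        # by the surrounding graph code.
        if proxy.source is None:
            return  # unused group socket: its links are simply dropped
        for target, target_type in proxy.targets:
            source = proxy.source
            if target_type != proxy.socket_type:
                # Socket types differ across the group boundary: add a converter.
                source = add_converter(source, target_type)
            connect(source, target)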
Node especially useful for texture correction.
This is also a nice example of a simple node made from scratch, in case someone wants to create their own custom nodes.
Review by Brecht.
reviewed by Brecht, with help from Lukas.
Note: dot is reversed compared to Blender.
In Blender Normals point outside, while in Cycles they point inside.
If you use your own custom vector with the Normal Node you will see a difference.
If you feed it with object normals it should work just as well.
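Concretely, flipping the normal flips the sign of the dot product, so with a custom vector (a trivial made-up illustration):

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    custom = (0.0, 0.0, 1.0)
    normal_blender = (0.0, 0.6, 0.8)                    # points outside
    normal_cycles = tuple(-c for c in normal_blender)   # same normal, points inside

    print(dot(normal_blender, custom))   #  0.8
    print(dot(normal_cycles, custom))    # -0.8: reversed sign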
lower than 1.3, since we're not officially supporting these. We're already not
providing CUDA binaries for these, so it's better to make that clear when compiling from
source too.
As with the HSV node, the OSL code relies on the (yet to be implemented) autorename.
Also, the SVM code could use mix (svm_lerp) instead:
  float3 color_inv = make_float3(1.0f, 1.0f, 1.0f) - color;
  stack_store_float3(stack, out_color, svm_lerp(color_inv, color, factor));
I have a feeling that each node 'program' should have as small a program as possible. I'll see with Brecht later.
But overall I don't know if that's any faster. And apart from that, I think we will need this kind of function to move to a library, if having multiple functions linked in is not a problem.
----------------------------
reviewed and approved by Brecht
Important note:
the camera Z is reversed compared to Blender render.
It now goes from zero (at the camera) to positive (in front of the camera)
.........................
Note: the OSL code has a problem.
In the original node the input and output sockets have the same name (Color).
So this will be fixed here once Brecht comes up with a nice autorenaming (or we do a doversion patch) for that.
Some drivers don't support passing include paths with spaces in them, nor does
the OpenCL spec say anything about how to quote/escape such paths, so for
now we just resolve #includes ourselves. An alternative would have been to use the C
preprocessor, but that also resolves all #ifdefs, which we do not want.
* Reduce kernel arguments size, helps compile for apple nvidia.
* Fix use of uninitialized variable in displace kernel.
* Use build flags in opencl kernel md5 hash.
* Reorganize code for kernel feature #defines a bit.
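Regarding the #include handling above, a minimal sketch of such a resolver (a hypothetical helper, not the actual Cycles OpenCL device code): quoted includes are inlined recursively, while #ifdef/#define lines are passed through untouched for the OpenCL compiler to evaluate.

    import os
    import re

    INCLUDE_RE = re.compile(r'^\s*#include\s+"([^"]+)"')

    def resolve_includes(path, seen=None):
        # Return the source at 'path' with local #include "..." lines inlined.
        seen = seen if seen is not None else set()
        out = []
        with open(path) as f:
            for line in f:
                m = INCLUDE_RE.match(line)
                if m:
                    inc = os.path.join(os.path.dirname(path), m.group(1))
                    if inc not in seen:      # guard against double inclusion
                        seen.add(inc)
                        out.append(resolve_includes(inc, seen))
                else:
                    out.append(line)
        return "".join(out)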
using them, but rather do it now while I still have the chance. Highlights:
* Wood and Marble merged into a single Wave texture
* Clouds + Distorted Noise merged into new Noise node
* Blend renamed to Gradient
* Stucci removed, was mostly useful for old bump
* Noise removed, will come back later, didn't actually work yet
* Depth setting is now Detail socket, which accepts float values
* Scale socket instead of Size socket
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Textures