This does not actually work: The context must not be shared between threads, but using the same context between different samples actually seems to prevent OSL from switching between shaders. The proper solution would be to ensure memory pooling works correctly.
This reverts commit 69f87e69258d6266dcb20f09f7e3d4021e663432.
Regular rendering now works tiled, and supports Save Buffers to reduce memory usage
during render and to cache render results.
Brick texture node by Thomas.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Textures#Brick_Texture
Image texture Blended Box Mapping.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Textures#Image_Texture
http://mango.blender.org/production/blended_box/
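The general idea behind blended box mapping is to project the image along all three axes and blend the projections by the surface normal, so that no single projection's stretching or seams dominate. Below is a minimal illustrative sketch, not the node's actual implementation; sample_image() and the weight-sharpening curve are assumptions.

```cpp
/* Illustrative sketch only; sample_image() is a hypothetical 2D lookup and the
 * blend curve is an assumption, not the node's exact behavior. */

#include <cmath>

struct float3 { float x, y, z; };

/* hypothetical 2D texture lookup; stubbed with a simple UV pattern here */
static float3 sample_image(float u, float v)
{
	return {u - std::floor(u), v - std::floor(v), 0.0f};
}

static float3 blended_box_sample(float3 p, /* texture space position */
                                 float3 n, /* texture space normal */
                                 float blend) /* 0 = hard seams, 1 = soft */
{
	/* weights from the normal: each axis projection contributes where the
	 * surface faces that axis; a higher exponent sharpens the transition */
	float k = 1.0f + (1.0f - blend) * 16.0f;
	float wx = std::pow(std::fabs(n.x), k);
	float wy = std::pow(std::fabs(n.y), k);
	float wz = std::pow(std::fabs(n.z), k);

	float sum = wx + wy + wz;
	if (sum == 0.0f)
		return sample_image(p.x, p.y); /* degenerate normal fallback */
	wx /= sum; wy /= sum; wz /= sum;

	/* one planar projection per axis, blended together */
	float3 cx = sample_image(p.y, p.z);
	float3 cy = sample_image(p.x, p.z);
	float3 cz = sample_image(p.x, p.y);

	return {wx * cx.x + wy * cy.x + wz * cz.x,
	        wx * cx.y + wy * cy.y + wz * cz.y,
	        wx * cx.z + wy * cy.z + wz * cz.z};
}
```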
Various bug fixes by Sergey and Campbell.
* Fix for reading freed memory in some node setups.
* Fix incorrect memory read when synchronizing mesh motion.
* Fix crash when direct light usage is different on different render layers.
* Fix for vector pass giving wrong results in some circumstances.
* Fix for wrong resolution used for rendering Render Layer node.
* Option to cancel rendering when doing initial synchronization.
* No more texture limit when using CPU render.
* Many fixes for new tiled rendering.
There was a read of uninitialized memory around holdout_weight in cases where a
holdout material is used. It seems it should be assigned to the result
of shader_holdout_eval here.
If Brecht could double-check this it'd be great.
This could potentially fix #32224: Holdout Error with CUDA Cycles Render
This samples direct and indirect lighting differently. Rather than picking one light
for each point on the path, it now loops over all lights for direct lighting. For
indirect lighting it still picks a random light each time.
It gives control over the number of AA samples, and the number of Diffuse, Glossy,
Transmission, AO, Mesh Light, Background and Lamp samples for each AA sample.
This helps tuning render performance/noise and tends to give less noise for renders
dominated by direct lighting.
This sampling mode only works on the CPU, and still needs proper tile rendering
to show progress (will follow tomorrow or so), because each AA sample can be quite
slow now and so the delay between each update will be too long.
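To illustrate the difference, here is a minimal sketch under assumed names (Light, ShadePoint, sample_light_eval()); this is not the kernel code, just the shape of the two strategies. Indirect bounces still pick one random light, as described above.

```cpp
#include <cstdlib>
#include <vector>

struct Light { /* position, size, emission, ... */ };
struct ShadePoint { /* hit position, normal, BSDF, ... */ };
struct Color { float r, g, b; };

/* stand-in for evaluating one light sample: shadow ray, BSDF, emission, pdf */
static Color sample_light_eval(const Light & /*light*/, const ShadePoint & /*sp*/)
{
	return {1.0f, 1.0f, 1.0f};
}

/* Progressive mode: pick a single random light per bounce and compensate by
 * the picking probability, so many lights mean more variance per sample. */
static Color direct_light_one(const std::vector<Light> &lights, const ShadePoint &sp)
{
	int i = std::rand() % (int)lights.size();
	Color c = sample_light_eval(lights[i], sp);
	float w = (float)lights.size();
	return {c.r * w, c.g * w, c.b * w};
}

/* Non-progressive mode: loop over every light for direct lighting, so the
 * direct component is sampled consistently within each AA sample. */
static Color direct_light_all(const std::vector<Light> &lights, const ShadePoint &sp)
{
	Color sum = {0.0f, 0.0f, 0.0f};
	for (const Light &light : lights) {
		Color c = sample_light_eval(light, sp);
		sum.r += c.r; sum.g += c.g; sum.b += c.b;
	}
	return sum;
}
```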
For sample images see:
http://www.dalaifelinto.com/?p=399 (equisolid)
http://www.dalaifelinto.com/?p=389 (equidistant)
The 'use_panorama' option is now part of a new Camera type: 'Panorama'.
Created two other panorama cameras:
- Equisolid: most fisheye lenses on the market follow this model (e.g. Nikon, Canon, ...).
This works as a real lens up to an extent. The final result also takes the
sensor dimensions into account.
e.g. to simulate a Nikon DX2S with a 10.5mm lens, use:
sensor: 23.7 x 15.7
fisheye lens: 10.5
fisheye fov: 180
render dimensions: 4288 x 2848
- Equidistant: this is not a real lens model, although the old equidistant lens simulated
it. The result is always a circular fisheye that takes up the whole sensor
(in other words, it doesn't take the sensor dimensions into consideration).
This is perfect for fulldomes ;)
For the UI we have 10 to 360 as soft values and 10 to 3600 as hard values (because we can).
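For reference, the two mappings correspond to the standard fisheye models r = f·θ (equidistant) and r = 2·f·sin(θ/2) (equisolid), where r is the distance from the image center and θ the angle from the optical axis. Below is a minimal sketch of inverting them to get a ray direction; the axis convention and normalization are assumptions, not the actual Cycles camera code.

```cpp
#include <cmath>

struct Dir { float x, y, z; }; /* camera looks down +x in this sketch */

/* Equidistant: theta grows linearly with the distance from the image center;
 * expressed directly via the fisheye field of view (radians), u/v in [-1, 1]. */
static Dir fisheye_equidistant_dir(float u, float v, float fov)
{
	float r = std::sqrt(u * u + v * v);
	if (r > 1.0f)
		return {0.0f, 0.0f, 0.0f}; /* outside the image circle */
	float theta = r * fov * 0.5f;
	float phi = std::atan2(v, u);
	return {std::cos(theta), std::cos(phi) * std::sin(theta), std::sin(phi) * std::sin(theta)};
}

/* Equisolid: r = 2 * f * sin(theta / 2), so the focal length and the physical
 * sensor size matter; u/v in [-0.5, 0.5], lens and sensor sizes in mm. */
static Dir fisheye_equisolid_dir(float u, float v, float lens,
                                 float sensor_w, float sensor_h)
{
	float x = u * sensor_w;
	float y = v * sensor_h;
	float r = std::sqrt(x * x + y * y);
	if (r > 2.0f * lens)
		return {0.0f, 0.0f, 0.0f}; /* outside what the equisolid model can map */
	float theta = 2.0f * std::asin(r / (2.0f * lens));
	float phi = std::atan2(y, x);
	return {std::cos(theta), std::cos(phi) * std::sin(theta), std::sin(phi) * std::sin(theta)};
}
```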
Reference material:
http://www.hdrlabs.com/tutorials/downloads_files/HDRI%20for%20CGI.pdf
http://www.bobatkins.com/photography/technical/field_of_view.html
Note, this is not a real simulation of the light path through the lens.
The ideal solution would be this:
https://graphics.stanford.edu/wikis/cs348b-11/Assignment3
http://www.graphics.stanford.edu/papers/camera/
Thanks Brecht for the fix, suggestions and code review.
Kudos to the dome community for keeping me stimulated on the topic since 2009 ;)
Patch partly implemented during lab time at VisGraf, IMPA - Rio de Janeiro.
Most of the changes are related to adding support for motion data throughout
the code. There's some code for actual camera/object motion blur raytracing
but it's unfinished (it badly slows down the raytracing kernel even when the
option is turned off), so that code is still disabled.
Motion vector export from Blender tries to avoid computing derived meshes
when the mesh does not have a deforming modifier, and it also won't store
motion vectors for every vertex if only the object or camera is moving.
=== BVH build time optimizations ===
* BVH building is now multithreaded. Not all of the build is multithreaded: packing
and the initial bounding/splitting are still single threaded, but the recursive
splitting is, which was the main bottleneck.
* Object splitting now uses binning rather than sorting of all elements, using
code from the Embree raytracer from Intel (see the simplified binning sketch
after this section).
http://software.intel.com/en-us/articles/embree-photo-realistic-ray-tracing-kernels/
* Other small changes to avoid allocations, pack memory more tightly, avoid
some unnecessary operations, ...
These optimizations do not work yet when Spatial Splits are enabled, for that
more work is needed. There are also other optimizations still needed, in
particular for the case of many low poly objects, the packing step and node
memory allocation.
BVH raytracing time should remain about the same, but BVH build time should be
significantly reduced; tests here show a speedup of about 5x to 10x on a dual core
and 5x to 25x on an 8-core machine, depending on the scene.
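As an illustration of the binning approach mentioned above, here is a simplified single-axis sketch; this is neither the Embree nor the Cycles implementation, just the usual textbook binned SAH split with an arbitrary bin count.

```cpp
#include <algorithm>
#include <cfloat>
#include <vector>

struct BBox {
	float bmin[3], bmax[3];
	BBox() { for (int i = 0; i < 3; i++) { bmin[i] = FLT_MAX; bmax[i] = -FLT_MAX; } }
	void grow(const BBox &b) {
		for (int i = 0; i < 3; i++) {
			bmin[i] = std::min(bmin[i], b.bmin[i]);
			bmax[i] = std::max(bmax[i], b.bmax[i]);
		}
	}
	float area() const {
		float dx = bmax[0] - bmin[0], dy = bmax[1] - bmin[1], dz = bmax[2] - bmin[2];
		return (dx < 0.0f) ? 0.0f : 2.0f * (dx * dy + dy * dz + dz * dx);
	}
	float center(int axis) const { return 0.5f * (bmin[axis] + bmax[axis]); }
};

/* Pick the best split plane along 'axis' for the given primitive bounds, whose
 * centers lie in [cent_min, cent_max]. Instead of sorting, primitives are
 * dropped into a small number of bins and only bin boundaries are tested as
 * candidate planes. Returns the SAH cost; best_bin is the chosen boundary. */
static float binned_sah_split(const std::vector<BBox> &prims, int axis,
                              float cent_min, float cent_max, int &best_bin)
{
	const int num_bins = 16;
	int counts[num_bins] = {0};
	BBox bin_bounds[num_bins];

	/* bin each primitive by the center of its bounding box */
	float scale = (cent_max > cent_min) ? num_bins / (cent_max - cent_min) : 0.0f;
	for (const BBox &b : prims) {
		int bin = (int)((b.center(axis) - cent_min) * scale);
		bin = std::max(0, std::min(bin, num_bins - 1));
		counts[bin]++;
		bin_bounds[bin].grow(b);
	}

	/* sweep right-to-left, accumulating bounds and counts of the right side */
	BBox right_bounds[num_bins];
	int right_counts[num_bins] = {0};
	BBox acc;
	int acc_count = 0;
	for (int i = num_bins - 1; i > 0; i--) {
		acc.grow(bin_bounds[i]);
		acc_count += counts[i];
		right_bounds[i] = acc;
		right_counts[i] = acc_count;
	}

	/* sweep left-to-right, evaluating the SAH cost at every bin boundary */
	float best_cost = FLT_MAX;
	best_bin = -1;
	BBox left;
	int left_count = 0;
	for (int i = 1; i < num_bins; i++) {
		left.grow(bin_bounds[i - 1]);
		left_count += counts[i - 1];
		float cost = left.area() * left_count + right_bounds[i].area() * right_counts[i];
		if (cost < best_cost) {
			best_cost = cost;
			best_bin = i;
		}
	}
	return best_cost;
}
```

The two sides can then be partitioned in place by bin index and recursed into, which is also where the multithreaded recursive splitting pays off.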
=== Threads ===
Centralized task scheduler for multithreading, which is basically the
CPU device threading code wrapped into something reusable.
Basic idea is that there is a single TaskScheduler that keeps a pool of threads,
one for each core. Other places in the code can then create a TaskPool that they
can drop Tasks in to be executed by the scheduler, and wait for them to complete
or cancel them early.
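A minimal sketch of this shape of API follows; the class and method names are assumptions for illustration, not the actual Cycles task interface, and cancellation is omitted for brevity.

```cpp
#include <algorithm>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

/* One global scheduler owns a worker thread per core and a shared task queue. */
class TaskScheduler {
public:
	static TaskScheduler &get() { static TaskScheduler scheduler; return scheduler; }

	void push(std::function<void()> task)
	{
		{
			std::lock_guard<std::mutex> lock(mutex_);
			queue_.push(std::move(task));
		}
		cond_.notify_one();
	}

private:
	TaskScheduler()
	{
		unsigned num = std::max(1u, std::thread::hardware_concurrency());
		for (unsigned i = 0; i < num; i++)
			threads_.emplace_back([this] { worker(); });
	}
	~TaskScheduler()
	{
		{
			std::lock_guard<std::mutex> lock(mutex_);
			exit_ = true;
		}
		cond_.notify_all();
		for (std::thread &t : threads_)
			t.join();
	}
	void worker()
	{
		for (;;) {
			std::function<void()> task;
			{
				std::unique_lock<std::mutex> lock(mutex_);
				cond_.wait(lock, [this] { return exit_ || !queue_.empty(); });
				if (exit_ && queue_.empty())
					return;
				task = std::move(queue_.front());
				queue_.pop();
			}
			task();
		}
	}

	std::vector<std::thread> threads_;
	std::queue<std::function<void()>> queue_;
	std::mutex mutex_;
	std::condition_variable cond_;
	bool exit_ = false;
};

/* A pool only tracks its own tasks, so a caller can wait for just its work;
 * callers must wait() before destroying the pool in this sketch. */
class TaskPool {
public:
	void push(std::function<void()> task)
	{
		{
			std::lock_guard<std::mutex> lock(mutex_);
			pending_++;
		}
		TaskScheduler::get().push([this, task] {
			task();
			std::lock_guard<std::mutex> lock(mutex_);
			if (--pending_ == 0)
				done_.notify_all();
		});
	}
	void wait()
	{
		std::unique_lock<std::mutex> lock(mutex_);
		done_.wait(lock, [this] { return pending_ == 0; });
	}

private:
	std::mutex mutex_;
	std::condition_variable done_;
	int pending_ = 0;
};
```

Typical usage would be to create a TaskPool, push one task per tile or per BVH subtree, then wait() for the pool before continuing.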
=== Normal ===
Added a Normal output to the texture coordinate node. This currently
gives the object space normal, which is the same under object animation.
In the future this might become a "generated" normal so it's also stable for
deforming objects, but for now it's already useful for non-deforming objects.
=== Render Layers ===
Per render layer Samples control; leaving it at 0 will use the common scene
setting.
Environment pass will now render environment even if film is set to transparent.
Exclude Layers" added. Scene layers (all object that influence the render,
directly or indirectly) are shared between all render layers. However sometimes
it's useful to leave out some object influence for a particular render layer.
That's what this option allows you to do.
=== Filter Glossy ===
When using a value higher than 0.0, this will blur glossy reflections after
blurry bounces, to reduce noise at the cost of accuracy. 1.0 is a good
starting value to tweak.
Some light paths have a low probability of being found while contributing much
light to the pixel. As a result these light paths will be found in some pixels
and not in others, causing fireflies. An example of such a difficult path might
be a small light that is causing a small specular highlight on a sharp glossy
material, which we are seeing through a rough glossy material. With path tracing
it is difficult to find the specular highlight, but if we increase the roughness
on the material the highlight gets bigger and softer, and so easier to find.
Often this blurring will be hardly noticeable, because we are seeing it through
a blurry material anyway, but there are also cases where this will lead to a
loss of detail in lighting.
Still, for now this trade-off makes rendering more reliable.
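A rough sketch of the mechanism: after a blurry bounce the path keeps track of how diffuse it already is, and sharp glossy closures encountered later get a minimum roughness imposed on them. The names and the exact blur heuristic below are assumptions for illustration, not the kernel's actual formula.

```cpp
#include <algorithm>

/* filter_glossy:  the user setting; 0.0 disables the blur entirely.
 * min_bounce_pdf: smallest BSDF sampling pdf seen along the path so far;
 *                 a low value means we already passed through a very blurry
 *                 (diffuse or rough glossy) bounce.
 * roughness:      roughness of the glossy closure about to be sampled. */
static float filtered_roughness(float roughness, float min_bounce_pdf, float filter_glossy)
{
	if (filter_glossy == 0.0f || min_bounce_pdf <= 0.0f)
		return roughness;

	/* the blurrier the preceding bounces, the stronger the blur (assumed curve) */
	float blur = filter_glossy / min_bounce_pdf;
	if (blur <= 1.0f)
		return roughness; /* path is still sharp enough, keep full detail */

	/* only ever increase roughness, up to fully blurred */
	float min_roughness = std::min(1.0f, 1.0f - 1.0f / blur);
	return std::max(roughness, min_roughness);
}
```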
Also add an integrator "Clamp" option, to clamp very light samples to a maximum
value. This will reduce accuracy but may help reduce noise and speed up
convergence.
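The clamp itself is essentially a rescale of overly bright samples; a minimal sketch (float3 and the brightness measure here are placeholders, not the kernel code):

```cpp
struct float3 { float x, y, z; };

/* If a sample's summed brightness exceeds the clamp value, scale it down so
 * rare, extremely bright samples (fireflies) cannot dominate the pixel mean;
 * this biases the result toward darker values, hence the accuracy loss. */
static float3 clamp_sample(float3 L, float clamp_value)
{
	float sum = L.x + L.y + L.z;
	if (clamp_value > 0.0f && sum > clamp_value) {
		float scale = clamp_value / sum;
		L.x *= scale;
		L.y *= scale;
		L.z *= scale;
	}
	return L;
}
```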
Emitting objects and world lighting do not contribute to the shadow pass.
Consider this more as a pass useful for some compositing tricks; unlike
other lighting passes, this pass can't be used to exactly reconstruct the
combined pass.
Currently supported passes:
* Combined, Z, Normal, Object Index, Material Index, Emission, Environment,
Diffuse/Glossy/Transmission x Direct/Indirect/Color
Not supported yet:
* UV, Vector, Mist
Only enabled for CPU devices at the moment, will do GPU tweaks tomorrow,
also for environment importance sampling.
Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Passes
By default lighting from the world is computed solely with indirect light
sampling. However for more complex environment maps this can be too noisy, as
sampling the BSDF may not easily find the highlights in the environment map
image. By enabling this option, the world background will be sampled as a lamp,
with lighter parts automatically given more samples.
Map Resolution specifies the size of the importance map (res x res). Before
rendering starts, an importance map is generated by "baking" a grayscale image
from the world shader. This will then be used to determine which parts of the
background are light and so should receive more samples than darker parts.
Higher resolutions will result in more accurate sampling but take more setup
time and memory.
Patch by Mike Farnsworth, thanks!
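A minimal sketch of the setup described above (illustrative only, not the actual Cycles code; among other simplifications, it ignores the latitude/solid-angle weighting a real environment map pdf needs):

```cpp
#include <algorithm>
#include <vector>

/* Build a discrete distribution over the baked map's pixels so that bright
 * regions of the background are picked proportionally more often. */
struct ImportanceMap {
	int res = 0;
	std::vector<float> cdf; /* res*res + 1 entries, normalized to [0, 1] */

	/* luminance: the res x res grayscale image baked from the world shader */
	void build(const std::vector<float> &luminance, int res_)
	{
		res = res_;
		cdf.assign(res * res + 1, 0.0f);
		for (int i = 0; i < res * res; i++)
			cdf[i + 1] = cdf[i] + std::max(luminance[i], 0.0f);
		float total = cdf.back();
		if (total > 0.0f)
			for (float &c : cdf)
				c /= total;
	}

	/* map a uniform random number u in [0, 1) to pixel coordinates (x, y);
	 * the probability of a pixel is proportional to its baked luminance */
	void sample(float u, int &x, int &y) const
	{
		int i = (int)(std::upper_bound(cdf.begin() + 1, cdf.end(), u) - cdf.begin()) - 1;
		i = std::min(std::max(i, 0), res * res - 1);
		x = i % res;
		y = i / res;
	}
};
```

A higher Map Resolution simply means more entries in this table, which is why it trades setup time and memory for sampling accuracy.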
* Passes renamed to samples
* Camera lens radius renamed to aperture size/blades/rotation
* Glass and fresnel nodes input is now index of refraction
* Glossy and velvet fresnel socket removed
* Mix/add closure node renamed to mix/add shader node
* Blend weight node added for shader mixing weights
There is some version patching code for reading existing files, but it's not
perfect, so shaders may work a bit differently.
* Fix missing update when editing objects with emission materials.
* Fix preview pass rendering set to 1 not showing full resolution.
* Fix CUDA runtime compiling failing due to missing cache directory.
* Use settings from first render layer for visibility and material override.
And a bunch of incomplete and still disabled code mostly related to closure
sampling.
* Compute MD5 hash to deal with the NVIDIA OpenCL compiler cache not recognizing
changes in #included files; makes it possible to do the kernel compile only
once and remember it for the next time Blender is started (see the sketch
after this list).
* Kernel tweak to compile with ATI/Linux. Enabling any more functionality than
a simple clay render still chokes the compiler though, without a specific
error message.
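A sketch of the caching idea (hypothetical helper names; std::hash stands in for the MD5 digest here, and the binary file naming is an assumption rather than the actual Cycles OpenCL device code):

```cpp
#include <fstream>
#include <functional>
#include <sstream>
#include <string>
#include <vector>

/* read a whole text file into a string */
static std::string path_read_text(const std::string &path)
{
	std::ifstream f(path.c_str());
	std::stringstream ss;
	ss << f.rdbuf();
	return ss.str();
}

/* stand-in for the MD5 digest; any stable content hash illustrates the idea */
static std::string content_hash(const std::string &data)
{
	return std::to_string(std::hash<std::string>()(data));
}

/* Hash the kernel source together with every #included file, so that a change
 * in an include also changes the key, which the driver's own compile cache
 * fails to notice. The key can then name the cached binary, e.g.
 * cache/kernel_<hash>.clbin, which is loaded instead of recompiling when it
 * already exists. */
static std::string kernel_cache_key(const std::string &kernel_path,
                                    const std::vector<std::string> &includes)
{
	std::string source = path_read_text(kernel_path);
	for (const std::string &inc : includes)
		source += path_read_text(inc);
	return content_hash(source);
}
```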
* Add max diffuse/glossy/transmission bounces
* Add separate min/max for transparent depth
* Updated/added some presets that use these options
* Add ray visibility options for objects, to hide them from
camera/diffuse/glossy/transmission/shadow rays
* Add "Is Singular Ray" output to the Light Path node
Details here:
http://wiki.blender.org/index.php/Dev:2.5/Source/Render/Cycles/LightPaths
* Add alpha pass output; to use it, set the Transparent option in the Film panel.
* Add Holdout closure (OSL terminology), this is like the Sky option in the
internal renderer, objects with this closure show the background / zero
alpha.
* Add option to use Gaussian instead of Box pixel filter in the UI.
* Remove camera response curves for now, they don't really belong here in
the pipeline, should be moved to compositor.
* Output full float values for rendering now; previously this was only byte precision.
* Add a patch from Thomas to get a preview passes option, but still disabled
because it isn't quite working right yet.
* CUDA: don't compile shader graph evaluation inline.
* Convert tabs to spaces in python files.