Instead of treating the Fermi GPU limits as the default
and overriding them for other devices,
we now set them explicitly for each platform.
* Since the values are now set for all platforms,
we don't have to offset the slot id for OpenCL anymore,
as the image manager won't add float images for OpenCL now.
* Bugfix: TEX_NUM_FLOAT_IMAGES was always 5, even for CPU,
so the code in svm_image.h clamped float textures with alpha on CPU after the 5th slot.
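A rough sketch of the kind of slot split this implies (the helper names and the non-GPU limit below are illustrative assumptions, not the actual Cycles code):

  /* Per-platform float image limit; 5 is the Fermi-era GPU value mentioned
   * above, the other value is just a placeholder for "plenty of slots". */
  #ifdef __KERNEL_GPU__
  #  define TEX_NUM_FLOAT_IMAGES 5
  #else
  #  define TEX_NUM_FLOAT_IMAGES 1024
  #endif

  /* svm_image.h style dispatch: float images occupy the first
   * TEX_NUM_FLOAT_IMAGES slots, byte images follow after them. */
  float4 r;
  if (slot < TEX_NUM_FLOAT_IMAGES)
    r = fetch_float4_image(slot, x, y);                        /* hypothetical helper */
  else
    r = fetch_byte4_image(slot - TEX_NUM_FLOAT_IMAGES, x, y);  /* hypothetical helper */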
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: brecht
Differential Revision: https://developer.blender.org/D1925
It seems particular CUDA implementations have some precision issues,
which caused an integer coordinate (which was expected to always be
positive) to go negative.
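A rough sketch of the kind of guard this implies (whether the actual fix clamps at exactly this point is an assumption):

  /* Guard against an imprecise float-to-int conversion producing a slightly
   * negative texel index on some CUDA implementations. */
  int clamp_coord(float u, int width)
  {
    int ix = (int)floorf(u * (float)width);
    return ix < 0 ? 0 : ix;
  }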
There are a couple of reasons:
- Volume shaders on hair behave really weirdly anyway, and that is
not really considered a bug.
- Volume BVH traversal was only used by the camera-in-volume check,
for which it doesn't really make sense to take hair into account since
it'll be rendered wrong anyway.
Such a removal makes the code both easier to extend further (as in,
no need to worry about this traversal for the hair BVH) and also
reduces stress on GPU compilers.
The improved Hosek / Wilkie model was added during my GSoC 2013 and has been the default since then.
The older model was kinda kept for compatibility, but after more than 2 years it's time to remove it.
The Hosek / Wilkie model is more realistic anyway, and people who really want a day / night transition can mix the Sky Shader with another one (e.g. color) and fade between the two.
This seems quite useful for development, since you don't need to wait
for all the kernels to be re-compiled when working on a new feature, which
speeds up iteration.
Marked as an advanced option, so if it doesn't work so well in practice
it's safe to revert anyway.
This commit makes it so casting subsurface rays completely ignores all
the BVH nodes and primitives which do not belong to the current object,
making the traversal code much simpler and reducing the number of
intersection tests.
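Conceptually the dedicated traversal now contains an early-out along these lines (a sketch; the helper and variable names are illustrative):

  /* Primitives of other objects can never yield a valid self-intersection
   * for the subsurface ray, so skip them outright. */
  if (prim_object(prim_addr) != subsurface_object)  /* hypothetical helper/variable */
    continue;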
Reviewers: brecht, juicyfruit, dingto, lukasstockner97
Differential Revision: https://developer.blender.org/D1823
We've got pixel-wide world-space derivatives which we can use in the
perspective camera sampling. This allows us to get rid of two calls to
the transform_direction() function.
In theory we could save two transform_perspective() calls if we also
stored the pre-calculated camera-space dx/dy.
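A sketch of the idea, assuming the derivatives are already stored in world space (variable names are illustrative, not the exact kernel code):

  /* Direction differentials built directly from the pixel-wide world-space
   * derivatives, with no extra transform_direction() per derivative. */
  float3 Dn = normalize(D);
  ray->dD.dx = normalize(D + world_dx) - Dn;
  ray->dD.dy = normalize(D + world_dy) - Dn;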
Previously each call of this function was followed by a normalization; now it
is done in the function itself, with a corresponding note next to the function.
At this point it's totally unclear why we now ignore aperture and rolling shutter
for the derivatives calculation but do not ignore the direction change caused by stereo.
Was introduced by a recent optimization. Not really sure derivatives are
intended to work like this, but it's better to stick to what Dalai had
originally for now.
This is a new option for panorama cameras to render
stereo that can be used in virtual reality devices.
The option is available in the camera panel when Multi-View is enabled (Views option in the Render Layers panel).
Known limitations:
------------------
* Parallel convergence is not supported (you need to set a convergence distance really high to simulate this effect).
* Pivot was not supposed to affect the render, but it does; this has to be looked at, but for now set it to CENTER.
* Derivatives in the perspective camera need to be pre-computed, or we should get rid of kcam->dx/dy (Sergey's words, I don't fully grasp the implications here).
* This works in perspective mode and in panorama mode. However, to fully benefit from this effect in perspective mode you need to render a cube map (there is an addon for this, developed separately; perhaps we could include it in master).
* We have no support for "neck distance" at the moment. This is supposed to help with objects at short distances.
* We have no support for rotating the "Up Axis" of the stereo plane. Meaning, we hardcode 0,0,1 as UP and create the stereo pair relative to that (although we could take the camera's local UP when rendering panoramas, this wouldn't work for perspective cameras); a minimal sketch of this per-ray offset follows the list.
* We have no support for interocular distance attenuation based on the proximity of the poles (which helps to reduce the pole rotation effect/artifact).
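For reference, a minimal sketch of the per-ray eye offset implied by the hardcoded up axis (an illustration of the approach, not the exact Cycles code):

  /* Offset the ray origin sideways relative to the ray direction, with the
   * hardcoded world up axis (0, 0, 1) defining what "sideways" means;
   * eye_offset is +/- half the interocular distance depending on the eye. */
  float3 up = make_float3(0.0f, 0.0f, 1.0f);
  float3 side = normalize(cross(ray_D, up));
  ray_P += side * eye_offset;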
THIS NEEDS DOCS - both in 2.78 release log and the Blender manual.
Meanwhile you can read about it here: http://code.blender.org/2015/03/1451
This patch specifically dates from March 2015, as you can see in the code.blender.org post. Many thanks to all the reviewers, testers and minor sponsors who helped me maintain spherical-stereo for 1 year.
All that said, have fun with this. This feature was what got me started with Multi-View development (at the time what I was looking for was Fulldome stereo support, but the implementation is the same). In order to get this into Blender I had to aim it at a less specific use case; thus Multi-View started. (This was December 2012, during SIGGRAPH Asia and a chat I had with Paul Bourke during the conference.) I don't have the original patch anymore, but you can find a re-based version of it from March 2013, right before I started with the Multi-View project: https://developer.blender.org/P332
Reviewers: sergey, dingto
Subscribers: #cycles
Differential Revision: https://developer.blender.org/D1223
This re-enables the AA jittering, but with proper clamping so that u >= 0,
v >= 0 and u+v <= 1.
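One way to express that clamping (a sketch; the patch may do it slightly differently):

  /* Keep the jittered barycentric coordinates inside the triangle. */
  u = fmaxf(u, 0.0f);
  v = fmaxf(v, 0.0f);
  if (u + v > 1.0f) {
    float inv = 1.0f / (u + v);
    u *= inv;
    v *= inv;
  }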
Differential Revision: https://developer.blender.org/D1254
Seems to be some compiler fault which leads to a wrong flag being used,
making it so the wrong number of samples is used for the background.
This should in theory fix the issue reported in T47213.
The issue here was actually somewhere else - the attached scene from the report used a light falloff node in a sunlamp (aka distant light).
However, since distant lamps set the ray length to FLT_MAX and the light falloff node squares this value, it overflows and produces a NaN
weight, which propagates and leads to a NaN intensity, which is then clamped to zero and produces the black pixels.
To fix that issue, the smoothing part of the light falloff is just ignored if the smoothing term isn't finite (which makes sense since
the term should converge to 1 as the distance increases).
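A sketch of the guard, assuming the smoothing factor has the usual t^2 / (smooth + t^2) form (illustrative, not the literal kernel code):

  /* Falloff smoothing with a guard against the FLT_MAX ray length of distant
   * lamps: t*t overflows to inf and inf/inf gives NaN, while the factor
   * mathematically converges to 1, so only apply it when it is finite. */
  float light_falloff_smooth(float strength, float t, float smooth)
  {
    if (smooth > 0.0f) {
      float squared = t * t;
      float factor = squared / (smooth + squared);
      if (isfinite(factor))
        strength *= factor;
    }
    return strength;
  }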
The reason for the different results on CPUs and GPUs is not perfectly clear, but probably can be explained with different handling of
Inf/NaN edge cases.
Also, to notice issues like these faster in the future, kernel_asserts were added that evaluate as false as soon as a non-finite intensity is produced.
This was a hard decision, because moving to a newer CUDA toolkit makes
rendering up to 5% slower. But on the other hand, it solves major
speed regressions (up to 30%) with branched path tracing on
top level cards.
Neither of those regressions has a meaningful and sane workaround
on the code side.
Toolkit 6.5 could still be used, but it's no longer the recommended one.
When using multiple portals, scene areas behind one of the portals were rendered darker than they should be.
The reason for that is a pretty stupid mistake: Since portals are only used at positions that aren't behind them,
only portals that are used should be accounted for in the PDF calculation. That was actually the case, but the final
divide incorrectly divided by the total number of portals, not the number of visible ones.
Another issue with areas behind portals was the PDF evaluation function.
The new evaluation code is shorter, simpler and fixes this issue.
Also, the threshold for the distance check was increased to avoid artifacts where portals touch a surface.
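The corrected averaging looks roughly like this (a sketch with hypothetical helpers, not the literal kernel code):

  /* Average the per-portal PDFs over the portals the shading point is
   * actually in front of, and divide by that count only. */
  float pdf = 0.0f;
  int num_possible = 0;
  for (int p = 0; p < num_portals; p++) {
    if (portal_is_visible(P, p)) {          /* hypothetical helper */
      pdf += portal_pdf(P, direction, p);   /* hypothetical helper */
      num_possible++;
    }
  }
  pdf = (num_possible > 0) ? pdf / (float)num_possible : 0.0f;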
Supports both smoke/fire and point density textures now.
Reduces the number of textures available for sm_20 and sm_21, but you have
to compromise somewhere on such limited hardware.
Currently limited to linear interpolation only, and decoupled ray
marching is not supported yet. I think those can be considered just a
further improvement.
Some quick example:
https://developer.blender.org/F282934
The code is minimal and we can fully consider it a fix for the missing
support of 3D textures with CUDA.
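For reference, a 3D texture lookup with the legacy CUDA texture reference API looks like this (a sketch of the mechanism, not the actual Cycles binding code):

  /* The 3D texture is bound on the host with cudaFilterModeLinear, so the
   * hardware does the (only supported) trilinear interpolation for us. */
  texture<float4, cudaTextureType3D, cudaReadModeElementType> volume_tex;

  __device__ float4 sample_volume(float x, float y, float z)
  {
    return tex3D(volume_tex, x, y, z);
  }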
Reviewers: lukasstockner97, brecht, juicyfruit, dingto
Reviewed By: brecht, juicyfruit, dingto
Subscribers: mib2berlin
Differential Revision: https://developer.blender.org/D1806
There are several fixes in here, which will hopefully make the shader
work correctly without too much magic in there.
First of all, this commit brings BURLEY_TRUNCATE down from 30 to 16,
which reduces noise a lot. It's still higher than Brecht's original
truncate, but this reduces the PDF value at the cutoff distance by an
order of magnitude (now it's 0.008387, previously it was 0.063521
for an albedo of 0.8 and a radius of 1.0). This should converge to a
proper result faster and not have artifacts.
This kind of reverts the fix for T47356, but after additional thinking
I came to the conclusion that Burley is not about being totally smooth,
it is about giving less waxy results, which it is kind of doing in that file.
Second of all, this commit fixes burley_eval() to use the normalized
diffusion reflectance. This matches the way we calculate the CDF and
solves numeric instability close to 0, making the PDF profile look
closer to other SSS profiles:
https://developer.blender.org/F282355
https://developer.blender.org/F282356
https://developer.blender.org/F282357
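For reference, a minimal sketch of the normalized diffusion profile and its CDF referred to above (the mapping from albedo and radius to the shaping parameter d is not reproduced here; these are not the literal kernel functions):

  /* R(r) = (exp(-r/d) + exp(-r/(3d))) / (8*pi*d*r), normalized so that
   * integrating 2*pi*r*R(r) over [0, inf) gives exactly 1. */
  float burley_profile(float r, float d)
  {
    return (expf(-r / d) + expf(-r / (3.0f * d))) / (8.0f * (float)M_PI * d * r);
  }

  /* CDF(r) = 1 - exp(-r/d)/4 - 3*exp(-r/(3d))/4, used for importance sampling;
   * truncating the sampled radius cuts off the remaining tail mass. */
  float burley_cdf(float r, float d)
  {
    return 1.0f - 0.25f * expf(-r / d) - 0.75f * expf(-r / (3.0f * d));
  }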
Reviewers: brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1792