This patch changes the huge list of projects in Visual Studio into a nice tree matching the source folder structure. See D2823 for details.
Differential Revision: http://developer.blender.org/D2823
nvcc is very picky regarding host compiler versions, severely limiting the compilers we can use. This commit adds an NVRTC-based compilation path that allows us to build the cubins even if the host compiler is unsupported. For details see D2913.
Differential Revision: http://developer.blender.org/D2913
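As a rough illustration of the runtime-compilation approach (a sketch under assumptions, not the code from this commit): NVRTC can compile the CUDA kernel source at runtime to PTX, which can then be assembled into a cubin with ptxas. The source string and architecture flag below are placeholders.

  /* Minimal NVRTC sketch: compile CUDA source at runtime to PTX, independent
   * of the host compiler. The resulting PTX can then be fed to ptxas (or
   * cuModuleLoadData) afterwards. kernel_source and the arch flag are
   * placeholders. */
  #include <nvrtc.h>
  #include <cstdio>
  #include <string>
  #include <vector>

  static std::string compile_to_ptx(const char *kernel_source)
  {
    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, kernel_source, "kernel.cu", 0, nullptr, nullptr);

    const char *options[] = {"--gpu-architecture=compute_61", "--use_fast_math"};
    nvrtcResult result = nvrtcCompileProgram(prog, 2, options);

    /* Always fetch the log, it contains warnings/errors from the compile. */
    size_t log_size;
    nvrtcGetProgramLogSize(prog, &log_size);
    std::vector<char> log(log_size + 1);
    nvrtcGetProgramLog(prog, log.data());

    if (result != NVRTC_SUCCESS) {
      fprintf(stderr, "NVRTC compile failed:\n%s\n", log.data());
      nvrtcDestroyProgram(&prog);
      return "";
    }

    size_t ptx_size;
    nvrtcGetPTXSize(prog, &ptx_size);
    std::string ptx(ptx_size, '\0');
    nvrtcGetPTX(prog, &ptx[0]);
    nvrtcDestroyProgram(&prog);
    return ptx;
  }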
This adds midlevel and object/world space for displacement, and a
vector displacement node with tangent/object/world space, midlevel
and scale.
Note that tangent space vector displacement is still not exactly
compatible with maps created by other software; this will require
changes to the tangent computation.
Differential Revision: https://developer.blender.org/D1734
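A minimal sketch of how midlevel, scale and the space options could combine for vector displacement; the tangent-space convention (X along the tangent, Y along the normal, Z along the bitangent) and the componentwise midlevel subtraction are assumptions for illustration, not taken verbatim from the patch.

  /* Hypothetical sketch of a vector displacement evaluation; not the actual
   * node code. */
  struct float3 { float x, y, z; };

  static float3 vector_displacement(float3 v,        /* map value */
                                    float midlevel,
                                    float scale,
                                    float3 tangent,
                                    float3 normal,
                                    float3 bitangent)
  {
    /* Shift by midlevel, then scale. */
    float3 offset = {(v.x - midlevel) * scale,
                     (v.y - midlevel) * scale,
                     (v.z - midlevel) * scale};

    /* Tangent-space case: build the displacement from the local frame.
     * The object/world space options would instead transform `offset` by
     * the object matrix or use it directly. */
    float3 d = {offset.x * tangent.x + offset.y * normal.x + offset.z * bitangent.x,
                offset.x * tangent.y + offset.y * normal.y + offset.z * bitangent.y,
                offset.x * tangent.z + offset.y * normal.z + offset.z * bitangent.z};
    return d;
  }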
There was a check for volume bounces at every surface intersection. That could lead to a volume-scattered path being terminated
when passing through a transparent surface. This check was superfluous, as the volume shader evaluation already checks the
number of volume bounces, and once it passes the max, volume shaders no longer return scatter events.
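A hedged sketch of the logic being relied on here (the names are illustrative, not the actual kernel code):

  /* Illustrative only: the volume shader evaluation already refuses to
   * scatter once the volume bounce limit is reached, so paths crossing
   * transparent surfaces do not need a second check there. */
  struct PathState {
    int volume_bounce;
  };

  static bool volume_shader_can_scatter(const PathState &state, int max_volume_bounce)
  {
    /* Once past the limit, no further scatter events are returned. */
    return state.volume_bounce < max_volume_bounce;
  }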
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: brecht, #cycles
Tags: #cycles
Maniphest Tasks: T53914
Differential Revision: https://developer.blender.org/D3024
Previously we stored each color channel in a single closure, which was
convenient for sampling a closure and channel together. But this doesn't
work so well for algorithms where we want to render multiple color
channels together.
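As a rough illustration of the layout change (the structs below are simplified stand-ins, not the real closure structs):

  struct float3 { float x, y, z; };

  /* Before: one closure per color channel, convenient for sampling a
   * (closure, channel) pair together. */
  struct ClosurePerChannel {
    float weight;  /* single channel */
    int channel;   /* 0 = R, 1 = G, 2 = B */
  };

  /* After: one closure carrying the full RGB weight, so algorithms that
   * operate on all channels at once can access them together. */
  struct ClosureRGB {
    float3 weight;  /* all three channels */
  };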
Previously only scalar displacement along the normal was supported,
now displacement can go in any direction. For backwards compatibility,
a Displacement node will be automatically inserted in existing files.
This will make it possible to support vector displacement maps in the
future. It's already possible to use them to some extent, but it requires
a manual shader node setup. For tangent space maps the right tangent
may also not be available yet, depending on the map.
Differential Revision: https://developer.blender.org/D3015
This converts object space height to world space displacement, to be
linked to the new vector displacement material output.
Differential Revision: https://developer.blender.org/D3015
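A minimal sketch of the conversion described above, assuming the displacement is (height - midlevel) * scale along the object-space normal and then transformed to world space; the matrix and names are illustrative.

  /* Illustrative sketch: scalar height in object space -> world-space
   * displacement vector. The 3x3 matrix stands in for the object-to-world
   * rotation/scale part of the transform. */
  struct float3 { float x, y, z; };

  static float3 displacement_to_world(float height,
                                      float midlevel,
                                      float scale,
                                      float3 normal_object,  /* object-space normal */
                                      const float m[3][3])   /* object-to-world */
  {
    float offset = (height - midlevel) * scale;
    float3 d = {normal_object.x * offset, normal_object.y * offset, normal_object.z * offset};

    /* Transform the direction into world space. */
    float3 d_world = {m[0][0] * d.x + m[0][1] * d.y + m[0][2] * d.z,
                      m[1][0] * d.x + m[1][1] * d.y + m[1][2] * d.z,
                      m[2][0] * d.x + m[2][1] * d.y + m[2][2] * d.z};
    return d_world;
  }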
This way we can introduce other types of BVH, for example wider ones, without
causing too much of a mess with boolean flags.
Thoughts:
- Ideally the device info should probably return a bitflag of which BVH types it
supports (a rough sketch of that idea follows this list).
This can be implemented with simple logic in device/ and mesh.cpp;
the rest of the changes would stay the same.
- Not happy with the workarounds in util_debug and the duplicated enum in the kernel.
Maybe the enum should be stored in the kernel, but then it's kind of weird to include
kernel types from utils. Sounds like a cyclic dependency.
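A hypothetical sketch of that bitflag idea; the names below are made up for illustration, not the committed API.

  /* Hypothetical sketch: a BVH layout enum whose values double as bitmask
   * entries, so a device could advertise every layout it supports. */
  enum BVHLayout {
    BVH_LAYOUT_NONE = 0,
    BVH_LAYOUT_BVH2 = (1 << 0),
    BVH_LAYOUT_BVH4 = (1 << 1),  /* wider BVH, for example */
  };

  typedef int BVHLayoutMask;

  struct DeviceInfo {
    /* Bitflag of the layouts this device can traverse. */
    BVHLayoutMask bvh_layout_mask = BVH_LAYOUT_BVH2;
  };

  static BVHLayout choose_bvh_layout(BVHLayout requested, BVHLayoutMask supported)
  {
    /* Fall back to BVH2 when the requested layout is not supported. */
    return (requested & supported) ? requested : BVH_LAYOUT_BVH2;
  }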
Reviewers: brecht, maxim_d33
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D3011
Adds the code to get screen size of a point in world space, which is
used for subdividing geometry to the correct level. The approximate
method of treating the point as if it were directly in front of the
camera is used, as panoramic projections can become very distorted
near the edges of an image. This should be fine for most uses.
There is also no support yet for offscreen dicing scale, though
panorama cameras are often used for 360° renders anyway.
Fixes T49254.
Differential Revision: https://developer.blender.org/D2468
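A minimal sketch of the approximation described above (the math and names are illustrative): the point is treated as if it were straight ahead of the camera at its actual distance, and the world-space width covered by one pixel at that distance gives the dicing size.

  #include <cmath>

  /* Illustrative sketch: approximate world-space size of one pixel at the
   * distance of a point, treating the point as if it were directly in front
   * of the camera. fov is the full horizontal field of view in radians. */
  static float approx_world_size_per_pixel(float distance, float fov, int resolution_x)
  {
    /* Width of the view frustum at this distance, divided by pixel count. */
    float frustum_width = 2.0f * distance * std::tan(0.5f * fov);
    return frustum_width / (float)resolution_x;
  }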
This can be enabled in the Film panel, with an option to control the
transmission roughness below which glass becomes transparent.
Differential Revision: https://developer.blender.org/D2904
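A hedged sketch of the kind of test this option implies (the names are illustrative, not the actual closure code):

  /* Illustrative only: treat glass as transparent when its transmission
   * roughness is below the user threshold from the Film panel. */
  static bool glass_counts_as_transparent(float transmission_roughness,
                                          float transparent_roughness_threshold)
  {
    return transmission_roughness <= transparent_roughness_threshold;
  }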
SVM nodes need to read all their data to get the right offset for the following node.
This is quite weak; a more generic solution would be good in the future.
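A rough sketch of why this is needed; the packed-node layout here is hypothetical. The SVM program is a flat array of packed entries, and the read offset must advance past all of a node's entries even if some are unused, otherwise the next node decodes from the wrong position.

  #include <cstdint>

  /* Hypothetical packed-node layout for illustration. */
  struct uint4_t { uint32_t x, y, z, w; };

  static uint4_t read_node(const uint4_t *svm_nodes, int *offset)
  {
    /* Every read advances the shared offset, so skipping a read would
     * desynchronize all following nodes. */
    return svm_nodes[(*offset)++];
  }

  static void svm_eval_example_node(const uint4_t *svm_nodes, int *offset)
  {
    uint4_t data0 = read_node(svm_nodes, offset);
    /* Even if this feature is disabled and data1 is unused, it must still be
     * read so that *offset points at the next node afterwards. */
    uint4_t data1 = read_node(svm_nodes, offset);
    (void)data0;
    (void)data1;
  }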
Our own implementation was behaving differently compared to OSL and CUDA:
on the border pixels OSL and CUDA interpolate with black, while we were
clamping the coordinate.
This partially fixes the issue reported in T53452.
A similar change should perhaps also be done for 3D interpolation, but this
is to be investigated separately.
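A small illustrative sketch of the difference (this is not the kernel code): with a black border, texels outside the image contribute zero to the interpolation, whereas clamping would reuse the edge texel.

  #include <cmath>

  /* Illustrative bilinear fetch with a black border: out-of-range texels
   * return 0 instead of the clamped edge value, matching how OSL/CUDA
   * interpolate at the image border. */
  static float texel_or_black(const float *image, int width, int height, int x, int y)
  {
    if (x < 0 || y < 0 || x >= width || y >= height)
      return 0.0f; /* black outside the image */
    return image[y * width + x];
  }

  static float bilinear_black_border(const float *image, int width, int height, float u, float v)
  {
    /* Map normalized coordinates to texel space. */
    float px = u * width - 0.5f;
    float py = v * height - 0.5f;
    int ix = (int)std::floor(px), iy = (int)std::floor(py);
    float fx = px - ix, fy = py - iy;

    float c00 = texel_or_black(image, width, height, ix, iy);
    float c10 = texel_or_black(image, width, height, ix + 1, iy);
    float c01 = texel_or_black(image, width, height, ix, iy + 1);
    float c11 = texel_or_black(image, width, height, ix + 1, iy + 1);

    return (1.0f - fy) * ((1.0f - fx) * c00 + fx * c10) +
           fy * ((1.0f - fx) * c01 + fx * c11);
  }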
Previously, the NLM kernels would be launched once per offset with one thread per pixel.
However, with the smaller tile sizes that are now feasible, there wasn't enough work to fully occupy GPUs, which resulted in a significant slowdown.
Therefore, the kernels are now launched in a single call that handles all offsets at once.
This has two downsides: memory accesses to the accumulation buffers are now atomic, and more importantly, the temporary memory now has to be allocated for every shift at once, increasing the required memory.
On the other hand, of course, the smaller tiles significantly reduce the amount of memory required.
The main bottleneck right now is the construction of the transformation: there is nothing to parallelize there, so one thread per pixel is the maximum.
I tried to parallelize the SVD implementation by storing the matrix in shared memory and launching one block per pixel, but that wasn't really going anywhere.
To make the new code somewhat readable, the handling of rectangular regions was cleaned up a bit and commented; it should be easier to understand what's going on now.
Also, some variables have been renamed to make the difference between buffer width and stride more apparent, in addition to some general style cleanup.
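A conceptual CPU-side sketch of the restructuring (not the actual CUDA kernels): a single pass covers every (offset, pixel) pair instead of one launch per offset, which is why the accumulation has to be atomic on the GPU.

  #include <vector>

  /* Conceptual sketch only: iterate over all (offset, pixel) pairs in one
   * pass instead of one launch per offset. On the GPU each iteration would
   * be a thread and the += below would be an atomicAdd into the accumulation
   * buffer; the temporary storage likewise has to exist for every offset at
   * once. */
  static void nlm_accumulate_all_offsets(const std::vector<float> &weights, /* one per (offset, pixel) */
                                         std::vector<float> &accum,
                                         int num_offsets,
                                         int num_pixels)
  {
    for (int i = 0; i < num_offsets * num_pixels; i++) {
      int pixel = i % num_pixels;
      accum[pixel] += weights[i]; /* atomicAdd on the GPU */
    }
  }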
CUDA 9.0.176 apparently caused some slowdown on high-end Pascal cards, which can be mitigated by increasing the number of registers. See https://developer.blender.org/F1142667 for a detailed comparison.
It was giving different results for sharpness values of 1.0 and 0.999, even though
the results were expected to be very close to each other.
This SSS profile will probably be removed in the future in favor of the more
physically based Burley profile, but for the time being there is nothing wrong
with fixing the existing code.
No color pass because it's hard to define what to use as color in a volume.
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D2903
Previously we picked one of the RGB channels with equal probability, but this
works poorly in a dense volume after many bounces. Now we take into account
the throughput and single scattering albedo.
This makes it a little more practical to do brute force SSS with volumes, but
is still very inefficient because we do direct light sampling at every volume
bounce even when inside an opaque mesh. In theory there could be a light inside
the mesh so we can't automatically disable direct lighting.
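A hedged sketch of the channel selection idea (the exact weighting in the commit may differ): the probability of picking a channel is made proportional to throughput times single-scattering albedo instead of a uniform 1/3.

  /* Illustrative sketch: pick an RGB channel with probability proportional to
   * throughput * single-scattering albedo, rather than uniformly. Returns the
   * channel index, or -1 if all weights are zero. */
  static int sample_volume_channel(const float throughput[3],
                                   const float albedo[3],
                                   float rand /* in [0, 1) */,
                                   float *pdf)
  {
    float w[3], sum = 0.0f;
    for (int i = 0; i < 3; i++) {
      w[i] = throughput[i] * albedo[i];
      sum += w[i];
    }
    if (sum == 0.0f) {
      *pdf = 0.0f;
      return -1;
    }

    float cdf = 0.0f;
    for (int i = 0; i < 3; i++) {
      cdf += w[i] / sum;
      if (rand < cdf || i == 2) {
        *pdf = w[i] / sum;
        return i;
      }
    }
    return -1; /* unreachable */
  }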
In fact this was an existing issue when exceeding the number of available
closures, but it's more common now that we set the number to 0 for shadows
and emission.