It will discard the whole tile, but that is still somewhat friendlier than
a fully locked interface (sort of) until the tile is fully sampled.
Sorry if this causes a PITA to merge for the OpenCL split work, but the issue
was bothering me a lot when collecting benchmarks.
Previously it was falling back to just a path after the #include
statement was finished. Now we fall back to the proper current
file name after dealing with the preprocessor statement.
Some of the files were wrongly attributing code to other
organizations, and in a few places proper attribution was missing.
This is mainly either a copy-paste error (when a new file was
created from an existing one and the header wasn't updated) or due
to a refactor which split non-original-BF code from purely
BF code.
Should solve some of the confusion around this.
On Windows, the event dispatching code throws away the wheel scroll count value.
No matter how fast you move the wheel, it only produces a single one-notch scroll event.
This patch converts the wheel event into multiple 1-notch wheel events.
It also corrects the handling of smooth-scroll mouse wheels (which can report less than 1-notch of wheel movement) by accumulating the small wheel delta values.
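A minimal sketch of the accumulation, assuming a hypothetical processOneNotchScroll() callback and the standard Windows WHEEL_DELTA of 120 (not the actual GHOST code):

    /* Leftover delta carried over between wheel messages. */
    static int accumulated_delta = 0;

    void handle_wheel(int raw_delta)  /* raw delta from WM_MOUSEWHEEL */
    {
        accumulated_delta += raw_delta;
        /* Emit one event per full notch, keep the remainder for later. */
        while (accumulated_delta >= WHEEL_DELTA || accumulated_delta <= -WHEEL_DELTA) {
            const int direction = (accumulated_delta > 0) ? 1 : -1;
            processOneNotchScroll(direction);  /* hypothetical callback */
            accumulated_delta -= direction * WHEEL_DELTA;
        }
    }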
Reviewers: djnz, shadowrom, elubie, #platform:_windows, sergey, juicyfruit, brecht
Reviewed By: shadowrom, elubie, #platform:_windows, brecht
Subscribers: dingto, elubie, brachi, brecht
Differential Revision: https://developer.blender.org/D143
The light sampling functions calculate the light sampling PDF for the case that the light has been randomly selected out of all lights.
However, since BPT handles lamps and meshlights separately, this isn't the case there. So, to avoid a wrong result, the code just included the 0.5 factor in the throughput.
In theory, however, the correction should be made to the sampling probability, which needs to be doubled. For the regular calculation that makes no real difference, since the throughput is divided by the pdf.
However, it does matter for the MIS calculation: it's unbiased both ways, but including the factor in the PDF instead of the throughput should give slightly better results.
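Roughly, the change amounts to the following; the names (power_heuristic with weight a*a/(a*a + b*b), light_pdf, bsdf_pdf) are illustrative rather than the exact Cycles identifiers:

    float3 lamp_contribution(float3 throughput, float3 light_eval,
                             float light_pdf, float bsdf_pdf)
    {
        /* Before: throughput *= 0.5f and light_pdf stayed untouched.
         * After: double the pdf instead, so the 0.5 lamp/meshlight
         * selection factor also enters the MIS weight. */
        light_pdf *= 2.0f;
        const float weight = power_heuristic(light_pdf, bsdf_pdf);
        return throughput * light_eval * weight / light_pdf;
    }

The plain (non-MIS) part is unchanged since the doubled pdf cancels the removed 0.5 in the throughput; only the MIS weight is affected.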
Reviewers: sergey, brecht, dingto, juicyfruit
Differential Revision: https://developer.blender.org/D2258
We raised the minimum to GL 2.1 in Blender 2.77 and dropped support for older GPUs (mostly pre-2012 Intel). On Windows you get a popup message, but on Mac we simply crashed. Every Mac has a built-in software renderer for GL 2.1, so let's use that when the GPU is not capable!
Run blender --debug-gpu to see the version detection & software fallback.
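Roughly, the fallback decision looks like this (sketch only; request_software_renderer() stands in for the actual GHOST pixel format setup):

    const char *version = (const char *)glGetString(GL_VERSION);
    int major = 0, minor = 0;
    if (version) {
        sscanf(version, "%d.%d", &major, &minor);
    }
    if (major < 2 || (major == 2 && minor < 1)) {
        /* GPU context can't do GL 2.1: re-create the context with
         * Apple's software renderer instead of crashing later. */
        request_software_renderer();  /* hypothetical helper */
    }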
The idea here is to select the lowest isolation level that won't compromise quality.
By using the lowest level we save memory and processing time. This will also
help avoid precision issues that have been showing up from using the highest
level (T49179, T49257).
This is a pretty simple heuristic that gives OK results. There's more we could
do here, such as only checking vertices/edges adjacent to geometric features
that need isolation instead of checking them all, but the logic there could
get a bit involved.
There's potential for slight popping of edges during animation if the dice
rate is low, but I don't think this should be a problem since low dice rates
really shouldn't be used in animation anyways.
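As an illustration only (struct and field names are assumptions, not the actual code), the heuristic boils down to something like:

    /* Pick the lowest isolation level that still resolves every crease,
     * instead of always isolating to the maximum level. */
    int minimum_isolation_level(const Mesh &mesh, int max_level)
    {
        int level = 1;
        for (const Edge &edge : mesh.edges) {
            if (edge.crease > 0.0f) {
                /* Sharper creases need deeper isolation to avoid artifacts. */
                const int needed = (int)ceilf(edge.crease * (float)max_level);
                level = std::max(level, needed);
            }
        }
        return std::min(level, max_level);
    }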
Reviewed By: brecht, sergey
Differential Revision: https://developer.blender.org/D2240
This reverts commit ecbfa31caa.
The original commit broke the logic in node re-fitting. That area can
momentarily access non-existing children. Not sure what the best
solution would be here, so for now simply reverting the change.
The problem was a zero-length normal caused by a precision issue in patch evaluation.
This is somewhat of a quick fix, but it is better than allowing possible NaNs to
occur and cause problems elsewhere.
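The kind of guard involved looks roughly like this, using Cycles-style float3 helpers; the choice of fallback is an assumption, not necessarily what the fix does:

    float3 normal_or_fallback(float3 N, float3 Ng)
    {
        const float len = sqrtf(dot(N, N));
        if (len == 0.0f) {
            /* Degenerate normal from patch evaluation: fall back to the
             * geometric normal instead of producing NaNs from 0/0. */
            return Ng;
        }
        return N / len;
    }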
Both spot and area lights have large regions in which they're not visible.
Therefore, this patch stops the light sampling code before the lamp shader is evaluated when one of these cases occurs (outside of the spotlight cone or behind the area light).
In the case of the area light, the solid angle sampling can also be skipped.
In a test scene with Sample All Lights and 18 area lamps and 9 spot lamps that all point away from the area the camera sees, render time drops from 12sec to 5sec.
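Schematically, the early-out looks like this (made-up struct and field names, not the actual kernel code):

    bool lamp_can_contribute(const Lamp &lamp, float3 P)
    {
        const float3 L = P - lamp.position;  /* lamp towards shading point */
        if (lamp.type == LAMP_SPOT) {
            /* Shading point outside the spotlight cone. */
            if (dot(normalize(L), lamp.dir) < lamp.cos_half_spot_angle)
                return false;
        }
        else if (lamp.type == LAMP_AREA) {
            /* Shading point behind the (single-sided) area light. */
            if (dot(L, lamp.dir) <= 0.0f)
                return false;
        }
        return true;  /* otherwise evaluate the lamp shader as before */
    }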
Reviewers: brecht, sergey, dingto, juicyfruit
Differential Revision: https://developer.blender.org/D2216
Small issues in GHOST
- use the NSApplicationDelegate protocol for our app delegate
- make sure NSApp is initialized before using it
(cherry picked from commit df7be04ca6)
The title says it all actually. In tests with the barber shop scene
this gives a 2-3x speedup for shader compilation on my oldie i7
machine. The gain is mainly due to the texture metadata queries on
JPEG files (which seem to require decompression before the metadata
can be read). In theory it could also give nice improvements for
scenes with huge node trees (I'm talking about node trees of
fractal-like complexity, which we had reports about in the past).
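Purely as an illustration of the cost being avoided: a simple cache keyed by file name would make sure each image is only probed once per compilation. This is a hypothetical sketch (query_metadata_from_file() and the struct are made up), not necessarily how this patch achieves the speedup:

    #include <map>
    #include <string>

    struct ImageMetadata {
        int width, height, channels;
    };

    static std::map<std::string, ImageMetadata> metadata_cache;

    const ImageMetadata &cached_image_metadata(const std::string &filename)
    {
        auto it = metadata_cache.find(filename);
        if (it == metadata_cache.end()) {
            /* Expensive: for JPEGs this may involve decompressing the file. */
            const ImageMetadata meta = query_metadata_from_file(filename);
            it = metadata_cache.emplace(filename, meta).first;
        }
        return it->second;
    }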
Reviewers: juicyfruit, dingto, lukasstockner97, brecht
Reviewed By: brecht
Subscribers: monio, Blendify
Differential Revision: https://developer.blender.org/D2215
The issue was caused by a false-positive intersection with an empty non-AABB.
Tweaked it a bit so it does not record an intersection anymore.
Hopefully this will work for all platforms. Tested here on iMac and Debian.