This commit implements threaded sorting of references when looking for
an object spatial split. It is mainly useful when doing the initial binning,
which happens from the main thread.
Gives a nice speedup of the BVH build for Bunny.blend: 36 sec vs. 55 sec
for the Rabbit mesh BVH build.
On more complex scenes the speedup is probably minimal, but it is still
nice to have more instant rendering for simpler scenes.
Some further tests with production scenes would be interesting.
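To illustrate the idea, here is a minimal sketch assuming a simplified
BVHReference and plain std::thread instead of the actual Cycles task pool:

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    struct BVHReference {
      float centroid[3]; /* simplified: only what the comparator needs */
    };

    /* Sort references along one axis using two threads: each half is sorted
     * concurrently, then the halves are merged on the calling (main) thread. */
    static void sort_references_threaded(std::vector<BVHReference> &refs, int axis)
    {
      auto cmp = [axis](const BVHReference &a, const BVHReference &b) {
        return a.centroid[axis] < b.centroid[axis];
      };
      const std::size_t mid = refs.size() / 2;
      std::thread worker([&] { std::sort(refs.begin(), refs.begin() + mid, cmp); });
      std::sort(refs.begin() + mid, refs.end(), cmp);
      worker.join();
      std::inplace_merge(refs.begin(), refs.begin() + mid, refs.end(), cmp);
    }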
Reviewers: juicyfruit, dingto, lukasstockner97, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1928
Simple fix: release the lock earlier.
Reduces spatial split build time from 96 to 53 sec on Bunny.blend
(using the studio Intel machine for the benchmark).
NOTE: The timing difference is not that spectacular when comparing against
builds from before the memory optimization, but even then the build is
about 20% faster.
This was only visible on systems with lots of threads, and the root of the
issue was that we were pre-allocating too much memory for all the threads.
Now we only pre-allocate data for the main thread and the rest of the
threads allocate on demand.
This brings memory usage down from 36 GB to 6.9 GB when building the
spatial split for the Bunny.blend file on our Intel beast.
The regression was originally caused by the threaded spatial split builder
commit.
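A rough sketch of the lazy per-thread allocation pattern; SpatialStorage,
num_bins and the class name are placeholders, not the actual Cycles
structures:

    #include <cstddef>
    #include <vector>

    struct SpatialStorage {
      std::vector<int> bins; /* per-thread scratch memory */
    };

    class SpatialSplitTask {
     public:
      SpatialSplitTask(int num_threads, std::size_t num_bins)
          : storage_(num_threads), num_bins_(num_bins)
      {
        /* Pre-allocate only for the main thread; worker threads get their
         * storage on first use instead of all up-front. */
        storage_[0].bins.reserve(num_bins_);
      }

      SpatialStorage &storage_for_thread(int thread_id)
      {
        SpatialStorage &storage = storage_[thread_id];
        if (storage.bins.capacity() == 0) {
          storage.bins.reserve(num_bins_); /* on-demand allocation */
        }
        return storage;
      }

     private:
      std::vector<SpatialStorage> storage_;
      std::size_t num_bins_;
    };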
The title actually covers it all: this commit exploits all the work done
in previous changes to make it possible to build spatial splits in threads.
Works quite nicely, but has the downside of some extra memory usage.
In practice it doesn't seem to be a huge problem, and it's something we can
always look into later if it becomes a real showstopper.
In practice it shows some nice speedups:
- BMW27 scene takes 3 sec now (used to be 4)
- Agent shot takes 5 sec (used to be 80)
Such a non-linear speedup most likely comes from far fewer heap
re-allocations. As a downside, there's a bit of extra memory used by the
BVH arrays. From the tests, the amount of extra memory is below 0.001% so
far, so it's not that bad at all.
Reviewers: brecht, juicyfruit, dingto, lukasstockner97
Differential Revision: https://developer.blender.org/D1820
The policy here is a bit more complicated: if the tree becomes too deep
we're forced to create a leaf node, and the size of that leaf isn't so well
predicted, which makes it quite tricky to use a single stack array for
that.
Made it a more official feature that StackAllocator will fall back to the
heap when running out of stack memory.
It's still much better than always using a heap allocator.
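A much simplified sketch of that behaviour; the real util_stack_allocator
is a full STL-compatible allocator, this only shows the stack-then-heap
decision:

    #include <cstddef>
    #include <cstdlib>

    /* Hands out memory from a fixed in-object buffer and silently falls back
     * to the heap once that buffer is exhausted (alignment handling omitted). */
    template<std::size_t StackSize> class StackOrHeapAllocator {
     public:
      void *allocate(std::size_t size)
      {
        if (used_ + size <= StackSize) {
          void *ptr = stack_ + used_;
          used_ += size; /* fast path: pre-allocated stack memory */
          return ptr;
        }
        return std::malloc(size); /* fall-back: heap memory */
      }

      void deallocate(void *ptr)
      {
        /* Stack memory is reclaimed when the allocator goes away;
         * only heap pointers need an explicit free(). */
        if (ptr < stack_ || ptr >= stack_ + StackSize) {
          std::free(ptr);
        }
      }

     private:
      unsigned char stack_[StackSize];
      std::size_t used_ = 0;
    };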
Currently they stay at 1 (actual size over capacity), but we will be
changing that quite soon in order to avoid too much memory re-allocation
happening at BVH build time, and we will be playing with different policies
for that.
In some files stack memory was overrunning the pre-allocated stack.
Perhaps we should fall back to a heap-allocated stack so release builds
don't crash in the worst case but just become slower.
There are in fact some missing parts to it (the split BVH builder should
be creating bins from the result of the object split constructor).
Doable, but we need to quickly fix an issue for the studio here, so it's
easier to revert for now.
This has the following advantages:
- Localizes all the run-time storage into a single structure,
  which could easily be extended further.
- Storage can be created per thread, so once the builder is
  threaded we won't have any conflicts between threads.
- The global nature of the storage avoids memory re-allocation
  at runtime, keeping the builder as fast as possible.
Currently these are just API changes, which don't affect the user at all.
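A hypothetical example of what such a grouped storage structure can look
like (the struct and field names are illustrative, not the actual Cycles
ones):

    #include <vector>

    /* All scratch memory used while splitting a node lives in one structure,
     * so it can be instantiated once per thread and re-used across nodes
     * instead of being re-allocated for every split. */
    struct BVHBuildStorage {
      std::vector<float> right_bounds;   /* sweep bounds for the object split */
      std::vector<int> spatial_indices;  /* temporary primitive indices */
      std::vector<float> spatial_bounds; /* temporary primitive bounds */

      void clear_for_next_node()
      {
        /* clear() keeps the capacity, so repeated use doesn't re-allocate. */
        right_bounds.clear();
        spatial_indices.clear();
        spatial_bounds.clear();
      }
    };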
Uses the new StackAllocator from util_stack_allocator. Some tweaks to the
stack storage size are possible; read the notes in the code about this.
At this point we might want to rename the allocator files to
util_allocator_foo.c, so they stay nicely grouped in the folder.
This commit makes it so casting subsurface rays totally ignores all the
BVH nodes and primitives which do not belong to the current object, which
makes the traversal code much simpler and reduces the number of
intersection tests.
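Conceptually the leaf intersection gets an extra early-out along these
lines; LeafPrim and the function below are stand-ins for the actual kernel
code:

    #include <vector>

    /* Hypothetical leaf entry: each primitive remembers which object it
     * belongs to. */
    struct LeafPrim {
      int object;
      int prim_index;
    };

    /* During local (subsurface) traversal, primitives of other objects are
     * skipped outright instead of being intersection-tested. */
    static void intersect_leaf_local(const std::vector<LeafPrim> &prims,
                                     int local_object,
                                     std::vector<int> &hits)
    {
      for (const LeafPrim &prim : prims) {
        if (prim.object != local_object) {
          continue; /* belongs to another object: ignore for subsurface rays */
        }
        hits.push_back(prim.prim_index); /* stand-in for the real intersection test */
      }
    }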
Reviewers: brecht, juicyfruit, dingto, lukasstockner97
Differential Revision: https://developer.blender.org/D1823
Notes:
- There is still some BVH cache code, but that is from the engine's initial commit; we might clean this up further or keep it.
- Changes in util_cache.h/.c are kept; this might be re-used in the future.
Avoid memmove() happening on every insert of a duplicated node into the
references list. A temporary pre-allocated vector is used for the new
references, which is then inserted into the actual array in one go later.
Gives around a 4x speedup when building the spatially split BVH for the
grass field in the cassette player shot from Gooseberry.
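In std::vector terms the change is roughly the following; split_reference()
and the surrounding function are hypothetical placeholders:

    #include <cstddef>
    #include <vector>

    struct BVHReference {
      int prim_index; /* plus bounds, time range, ... */
    };

    /* Hypothetical helper: split one reference, producing the duplicated
     * reference for the other side of the split plane. */
    static BVHReference split_reference(const BVHReference &ref)
    {
      return ref;
    }

    /* The new references are collected into a pre-allocated scratch vector
     * and spliced into the main array with a single insert(), so the tail of
     * the array is moved once instead of once per duplicated reference. */
    static void do_spatial_split(std::vector<BVHReference> &references,
                                 std::size_t begin,
                                 std::size_t end)
    {
      std::vector<BVHReference> new_refs;
      new_refs.reserve(end - begin);
      for (std::size_t i = begin; i < end; i++) {
        new_refs.push_back(split_reference(references[i]));
      }
      references.insert(references.begin() + end,
                        new_refs.begin(),
                        new_refs.end());
    }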
This commit implements spatial splitting of object reference nodes, making
it possible to use spatial splits for the top-level BVH.
The code is not in use yet because enabling spatial splits on the top-level
BVH does not come for free, and it needs to be investigated whether it's
worth it in terms of improved render times.
This way it's easy to add more reference types allowed for splitting to
the BVH reference split function without making that function too big, and
it becomes possible to experiment with features such as splitting object
instance references.
So far there should not be any functional changes.
The previous idea of using a vector during building and an array for the
actual storage was meant to minimize the amount of re-allocation happening
during the build, but it led to double memory overhead for those arrays at
the vector-to-array conversion stage.
The issue with that approach was that for a BVH without spatial splits the
size of the arrays is known in advance and never changes, which made the
vector-to-array conversion totally redundant.
Also, after testing with several scenes that are rather complex from the
spatial split point of view (such as trees), it seems even a conservative
reallocation approach (re-allocating when a leaf does not fit into memory)
doesn't give a measurable difference in time.
This means we can switch to an array, which avoids unneeded memory
re-allocations when spatial splits are disabled, without harming other
cases.
It's a bit difficult to measure the exact benefit of this change on our
production files here, but depending on the scene it might give quite a
reasonable memory saving.
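The gist in code form, under the simplifying assumptions that nodes are
stored as raw float data and that a binary BVH over N primitives never
needs more than 2*N - 1 nodes:

    #include <cstddef>
    #include <vector>

    /* Without spatial splits the number of references, and therefore the
     * maximum number of nodes, is known before the build starts, so the
     * storage can be sized once and never re-allocated or converted. */
    static std::vector<float> allocate_node_storage(std::size_t num_prims,
                                                    std::size_t floats_per_node)
    {
      if (num_prims == 0) {
        return {};
      }
      const std::size_t max_nodes = 2 * num_prims - 1;
      return std::vector<float>(max_nodes * floats_per_node); /* single allocation */
    }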
This way we can get rid of the inefficient memory usage caused by the BVH
bounding box part being unused by leaf nodes but still allocated for them.
Doing such a split saves 6 float4 values per leaf node for QBVH and 3
float4 values per leaf node for a regular BVH.
This translates into the following memory savings when rendering 01.01.01.G
without hair:

                      Device memory size   Device memory peak   Global memory peak
  Before the patch:   4957                 5051                 7668
  With the patch:     4467                 4562                 7332
The measurements are done against current master. We still need to run
speed tests, and it's hard to predict whether it will be faster or not: on
the one hand leaf nodes are now much more coherent in the cache, on the
other hand they're no longer so coherent with the regular nodes.
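Schematically, the packed storage goes from one array with full-size
entries for every node to two arrays, where leaves only store what they
actually need; the element counts below just mirror the numbers above and
are not the exact kernel layout:

    #include <vector>

    struct float4 {
      float x, y, z, w;
    };

    /* Interior nodes keep the child bounding boxes, while leaf nodes only
     * need the primitive range and flags, so they live in their own, much
     * smaller array instead of wasting the bound-box slots of a full node. */
    struct PackedBVH {
      std::vector<float4> nodes;      /* interior nodes: e.g. 4 float4 each for BVH2 */
      std::vector<float4> leaf_nodes; /* leaf nodes: 1 float4 each */
    };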
Reviewers: brecht, juicyfruit
Subscribers: venomgfx, eyecandy
Differential Revision: https://developer.blender.org/D1236
It was an issue with which bounds to use for a BVH node during
construction.
Also corrected the case when all 4 primitive types are present in the range
and there are also objects in the same range.
This inconsistency drove me totally crazy; it's really confusing,
especially when you work on both the Cycles and Blender sides.
Shouldn't cause any merge PITA, it's whitespace changes only; Git should
be able to merge it nicely.
The issue was caused by a mismatch between how the aligned triangle
storage was filled in during BVH construction and how it was used during
rendering.
Basically, I was leaving the storage for triangles uninitialized when
deformation motion blur was detected for the mesh. That was likely some
sort of optimization, but in fact it's still possible that regular
triangles are needed for rendering.
So now we fill the aligned storage for all triangle primitives and only
skip motion triangles (the deformation motion blur flag from the mesh is
now ignored).
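Roughly, the packing loop now keys on the primitive type instead of the
mesh's motion-blur flag; the names below are placeholders rather than the
actual kernel code:

    #include <cstddef>
    #include <vector>

    enum PrimitiveType { PRIM_TRIANGLE, PRIM_MOTION_TRIANGLE };

    /* Aligned triangle storage is written for every regular triangle,
     * regardless of whether the owning mesh has deformation motion blur;
     * only motion triangles themselves are skipped. */
    static void pack_aligned_triangles(const std::vector<PrimitiveType> &prim_types,
                                       std::vector<float> &tri_storage)
    {
      for (std::size_t i = 0; i < prim_types.size(); i++) {
        if (prim_types[i] == PRIM_MOTION_TRIANGLE) {
          continue; /* handled by the motion-blur path instead */
        }
        tri_storage.push_back(0.0f); /* placeholder for the triangle's vertex data */
      }
    }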
Ideally we should get rid of those temporary vectors anyway, but that's
not so trivial because of the alignment. Until then we'll just have a
slightly worse solution. This part of the code is not the root of the
memory spike issue for now anyway.
But since we're getting rid of the temporary memory earlier, the actual
spike is now a bit smaller. For example, in the franck_sheep file it's now
5489.69MB vs. previously 5599.90MB.