The idea is to make include statements more explicit about where the
file is coming from, additionally reducing the chance of the wrong
header being picked up.
For example, it was not obvious whether bvh.h was referring to the
builder or the traversal, whether node.h is a generic graph node or a
shader node, and cases like that.
Surely this might look obvious to the active developers, but after some
time of not touching the code it becomes less obvious where a file is
coming from.
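As a hypothetical illustration of the intended style (the exact paths
here are made up, not part of this patch description):

    /* Before: ambiguous, it is unclear which node.h gets picked up. */
    #include "node.h"

    /* After: the subsystem is explicit in the include path. */
    #include "graph/node.h"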
This was briefly mentioned in T50824 and it seems @brecht is fine with
such explicitness, but we need to agree with all active developers
before committing this.
Please note that this patch is lacking changes related to GPU/OpenCL
support. This will be solved if/when we all agree this is a good idea
to move forward.
Reviewers: brecht, lukasstockner97, maiself, nirved, dingto, juicyfruit, swerner
Reviewed By: lukasstockner97, maiself, nirved, dingto
Subscribers: brecht
Differential Revision: https://developer.blender.org/D2586
Doesn't currently change anything, but will be needed for some future
work here.
It uses existing padding in the kernel BVH structure, so nothing
changes memory-wise.
The issue here was mainly coming from the minimal pixel width feature,
which is quite commonly enabled in production shots.
This feature uses a probabilistic heuristic in the curve intersection
function to check whether we need to return an intersection or not.
This probability is calculated for every intersection check.
Now, when we use multiple BVH nodes for curve primitives we increase
the probability of that primitive being considered a good intersection
for us. This is similar to increasing the minimal width of the curve.
What is worse here is that the change in the intersection probability
fully depends on the exact layout of the BVH, meaning the probability
might change differently depending on the view angle, the way the
builder binned the primitives and such. This makes it impossible to do
a simple check like dividing the probability by the number of BVH
steps.
Another solution might have been to split the BVH into fully
independent trees, but that would increase memory usage of all the
static objects in the scene, which is also not desirable.
For now the simplest but most robust approach is used: store the BVH
primitive's time and test it in the curve intersection functions (see
the sketch after the list below). This solves the regression, but has
two downsides:
- Uses more memory.
  Which isn't surprising, and ANY solution to this problem will use
  more memory.
  What we still have to do is to avoid this memory increase for cases
  when we don't use BVH motion steps.
- Reduces the number of maximum available textures on pre-Kepler cards.
  There is not much we can do here; hardware gets old, but we need to
  move forward on more modern hardware.
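A minimal sketch of that idea (all names here are hypothetical, not
the actual kernel symbols):

    /* Each motion-step curve primitive stores the time range it was
     * built for; the intersection function rejects primitives whose
     * range does not contain the ray time, so duplicated primitives
     * do not inflate the intersection probability. */
    struct PrimTime {
        float min_t, max_t;
    };

    bool curve_prim_time_test(const PrimTime &prim_time, float ray_time)
    {
        return ray_time >= prim_time.min_t && ray_time <= prim_time.max_t;
    }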
Similar to the previous commit, the statistics go as follows:
  BVH Steps    Render time (sec)    Memory usage (MB)
      0               46                   260
      1               27                   373
      2               18                   598
      3               15                   826
Scene used for the tests is the agent's body from one of the barber
shop scenes (no textures or anything, just a diffuse material).
Once again this is limited to the regular (non-spatial split) BVH.
Support for spatial splits will be added to this feature later.
Also fixed some issues with motion key calculation:
- Clamp lower and upper limits of curves so we can safely call those
functions for the very first and very last curve segment.
- Fixed wrong indexing for the curve radius array.
- Fixed wrong motion attribute offset calculation.
Main intention is to give a quick way to control the scene's memory
usage by clamping textures which are too big. This is really handy in
the early production stages, when you first create really nice looking
hi-res textures, and only once it all works and is approved do you
start investing time in optimizing your scene.
This is a new option in the Scene Simplify panel and it acts as
follows: when a texture's size is bigger than the given value, it'll
be scaled down by half until it fits into the given limit.
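A minimal sketch of that halving logic (hypothetical helper, reduced
to 2D; the actual code also handles 3D textures and several data
types):

    #include <algorithm>

    /* Repeatedly halve the resolution until both dimensions fit the
     * user-given limit (assumed to be >= 1). */
    void clamp_texture_size(int &width, int &height, const int limit)
    {
        while (width > limit || height > limit) {
            width = std::max(width / 2, 1);
            height = std::max(height / 2, 1);
        }
    }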
There are various possible improvements, such as:
- Use threaded scaling using our own task manager.
  This is actually one of the main reasons why the image resize is
  manually implemented instead of using OIIO's resize. The other
  reason is that the OIIO API seems too limited to easily construct a
  3D texture description.
- Vectorization of uchar4/float4/half4 textures.
- Use something smarter than box filter.
  Was playing with some other filters, but not sure they are really
  better: they kind of cause fuzzier edges.
Even with such TODOs in the code the option is already quite useful.
Reviewers: brecht
Reviewed By: brecht
Subscribers: jtheninja, Blendify, gregzaal, venomgfx
Differential Revision: https://developer.blender.org/D2362
Storing multiple copies of a shader was needed when the displacement method was
a mesh option and could be different for each mesh. Now that it's a
shader option this is unnecessary.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2156
Changes from microdisplacement work broke previous support for subdivision
meshes, sometimes leading to crashes; this makes things work again. Files
that contain "patch" nodes will need to be updated to use meshes instead, as
specifying patches was both inefficient and completely unsupported by the new
subdivision code.
The latter can cause MSVC debug asserts if the array is empty. With C++11
we'll be able to do this for std::vector later. This hopefully fixes an assert
in the Cycles subdivision code.
By calling `tessellate()` from the mesh manager in Cycles we can do pre/post
processing or even threaded tessellation without concerning
client-side code with the details.
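A rough sketch of the shape of this (class and member names here are
hypothetical, not the actual Cycles API):

    #include <vector>

    /* Client code only flags a mesh as dirty; the mesh manager owns
     * the tessellation step and any pre/post processing around it. */
    class Mesh {
    public:
        bool need_tessellation = false;
        void tessellate() { /* split patches into triangles */ }
    };

    class MeshManager {
    public:
        void device_update(std::vector<Mesh *> &meshes)
        {
            for (Mesh *mesh : meshes) {
                if (mesh->need_tessellation) {
                    mesh->tessellate();
                    mesh->need_tessellation = false;
                }
            }
        }
    };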
Displacement is now a per-material setting, which means old files will
have to be updated if they used displacement. A cool side effect of
this change is that material previews now show displacement.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2140
Enables Catmull-Clark subdivision meshes with support for creases and attribute
subdivision. Still waiting on OpenSubdiv to fully support face varying
interpolation for subdividing UV coordinates though. Also there may be
some inconsistencies with Blender's subdivision which will be resolved
at a later time.
Code for reading patch tables and creating patch maps is borrowed
from OpenSubdiv.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2111
Adds a descriptor for attributes that can easily be passed around and extended
to contain more data. Will be used for attributes on subdivision meshes.
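A minimal sketch of such a descriptor (fields are hypothetical, just
to illustrate the idea of a small handle that can be passed by value
and extended later):

    /* Describes where an attribute's data lives and how to read it. */
    struct AttributeDescriptor {
        int element; /* per-vertex, per-face, per-corner, ... */
        int type;    /* float, float3, ... */
        int offset;  /* offset into the global attribute arrays */
    };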
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2110
Subdivision options can now be found in the subsurf modifier. The modifier must
be the last in the stack or the options will be unavailable. Catmull-Clark
subdivision is still unavailable and will fall back to linear
subdivision instead.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2109
This adds support for ngons and attributes on subdivision meshes. Ngons are
needed for proper attribute interpolation as well as correct Catmull-Clark
subdivision. Several changes are made to achieve this:
- new primitive `SubdFace` added to `Mesh`
- 3 more textures are used to store info on patches from subd meshes
- Blender export uses loop interface instead of tessface for subd meshes
- `Attribute` class is updated with a simplified way to pass primitive counts
around and to support ngons.
- extra points for ngons are generated for O(1) attribute interpolation
- curves are temporarily disabled on subd meshes to avoid various bugs
  in the implementation
- old unneeded code is removed from `subd/`
- various fixes and improvements
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2108
This is a bit weak, but better than tagging the whole mesh manager for
update. Maybe we'll solve such dual look-ups in the future.
This commit finally solves T48963: Noise when changing Diffuse node to Emission node
While it's an extra option added to the interface which might not be
fully obvious to artists, it allows saving up to 20% of memory in
hairy scenes.
This is a high enough memory saving, in my opinion, which might become
handy for some production files where it's more important to make the
scene fit into memory than to use a more optimal BVH structure but go
into swap or crash.
Reviewers: dingto, brecht
Reviewed By: dingto, brecht
Differential Revision: https://developer.blender.org/D2090
This commit enables the new unaligned BVH builder and traversal for
scenes with hair. This happens automatically, with no need for manual
control over it.
There are still some possible optimizations here and there, but
overall there's already a nice speedup:
                   Master      Hair BVH
  bunny.blend      8:06.54      5:57.14
  victor.blend    16:07.44     15:37.35
Unfortunately, such extra complexity does not really come for free, so
there are some downsides, but those are within an acceptable range:
                      Master      Hair BVH
  classroom.blend     5:31.79      5:35.11
  barcelona.blend     4:38.58      4:44.51
Memory usage is also somewhat bigger for hairy scenes, but the speed
benefit pays well for that. Additionally, as was mentioned in one of
the previous commits, we can add an option to disable the hair BVH and
have similar render times while keeping the memory saving.
Reviewers: brecht, dingto, lukasstockner97, juicyfruit, maiself
Differential Revision: https://developer.blender.org/D2086
This is a special builder type which is allowed to orient nodes along
the strand direction, hence minimizing their surface area in
comparison with axis-aligned nodes. Such nodes are much more efficient
for hair rendering.
Implementation of the BVH builder is based on Embree, and the general
idea there is to calculate both the axis-aligned SAH and the oriented
SAH; if the SAH of the oriented node is smaller than the axis-aligned
SAH, we create an unaligned node.
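A minimal sketch of that decision (helper names are hypothetical; the
real builder derives both costs from binned primitive bounds):

    /* Pick the node orientation with the smaller surface area
     * heuristic (SAH) cost. */
    enum NodeKind { NODE_ALIGNED, NODE_UNALIGNED };

    NodeKind choose_node_kind(const float sah_aligned,
                              const float sah_unaligned)
    {
        /* Unaligned nodes are more expensive to traverse, so a real
         * builder would weight sah_unaligned before comparing. */
        return (sah_unaligned < sah_aligned) ? NODE_UNALIGNED
                                             : NODE_ALIGNED;
    }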
We store both aligned and unaligned nodes in the same tree (which
seems to be different from what Embree is doing), so we don't need any
extra calculations to set up the hair ray for BVH traversal, hence
avoiding any possible negative effect of this new BVH node type.
This new builder is currently not in use; we still need to make the
BVH traversal code aware of unaligned nodes.
There are several internal changes for this:
The first idea is to make __tri_verts behave similarly to
__tri_storage, meaning the __tri_verts array now contains all vertices
of all triangles instead of just the mesh vertices. This saves some
lookups when reading triangle coordinates in functions like
triangle_normal().
In order to make this efficient we needed to store a global triangle
offset somewhere. So now __tri_vindex.w contains a global triangle
index which can be used to read triangle vertices.
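As a sketch of what such a lookup amounts to (hypothetical flat-array
form; the real kernel goes through its own texture fetch macros):

    struct float3 { float x, y, z; };

    /* Read a triangle's three vertices via the global offset stored
     * in the .w component of its __tri_vindex entry. */
    void triangle_vertices(const float3 *tri_verts,
                           const unsigned *tri_vindex_w,
                           const int prim,
                           float3 verts[3])
    {
        const unsigned offset = tri_vindex_w[prim];
        verts[0] = tri_verts[offset + 0];
        verts[1] = tri_verts[offset + 1];
        verts[2] = tri_verts[offset + 2];
    }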
Additionally, the order of vertices in that array is aligned with the
primitives from the BVH. This is needed to keep the cache as coherent
as possible for BVH traversal. It requires some extra tricks to fill
the array in and to deal with True Displacement, but that trickery is
fully required to prevent a noticeable slowdown.
The next idea was to use this __tri_verts instead of __tri_storage in
the intersection code. Unfortunately, this is quite tricky to do
without noticeable speed loss. Mainly this loss is caused by the extra
lookup needed to access a vertex coordinate.
Fortunately, tricks here and there (i.e. some type changes to avoid
casts which are not really free) reduce those losses to an acceptable
level, so now they are within a couple of percent only.
On the positive side we've achieved:
- A few percent of memory saved in triangle-only scenes. The actual
  save in this case is close to the size of all vertices. In more
  finely subdivided scenes this benefit might become more obvious.
- Huge memory savings in hairy scenes. For example, on koro.blend
  there is about a 20% memory save, and a similar figure for
  bunny.blend.
This memory save was the main goal of this commit: to move forward
with the Hair BVH, which requires more memory per BVH node. So while
this sounds exciting, this memory optimization will be made invisible
by the upcoming Hair BVH work.
But again on the positive side, we can add an option to NOT use the
Hair BVH; then we'll have same-ish render times as we've got
currently, but with this 20% memory benefit in hairy scenes.
- add_vertex() can be called from split_vertex(), which does not
  guarantee properly pre-allocated arrays.
- Need to check whether Cycles is compiled with OSL in the XML reader.
This fixes a rare case where NaNs could exist inside Cycles.
When certain invalid meshes were passed in, Cycles would try to normalize
a zero length normal during its setup stage. While it does check against
division by zero, it still returns a zero length normal and passes it on to
the path tracing kernel. The kernel then operates under the assumption that
normals are valid, and in the case of such a zero length normal, would
eventually create NaNs that propagate through and result in black pixels.
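A sketch of the failure mode described above (hypothetical helper; the
division-by-zero guard holds, but the result is still a zero length
vector rather than a valid unit normal):

    #include <cmath>

    struct float3 { float x, y, z; };

    /* Guards the division, yet a degenerate input still yields a
     * zero length "normal" that the kernel must not trust. */
    float3 normalize_guarded(const float3 a)
    {
        const float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
        if (len == 0.0f) {
            return a; /* zero length in, zero length out */
        }
        return float3{a.x / len, a.y / len, a.z / len};
    }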
Reviewers: #cycles
Subscribers: brecht, sergey
Projects: #cycles
Differential Revision: https://developer.blender.org/D2008
Mainly makes logging less verbose when doing progressive sampling in
the viewport.
Such verbosity is not really possible to filter out with `grep`, so
let's reshuffle a few lines of code.
This is an attempt to gracefully handle out-of-memory events and stop
rendering with an error message instead of a crash.
It uses the bad_alloc exception. Usually I'm not really fond of
exceptions, but for such limited use, for errors from which we can't
recover, it should be fine.
Ideally we'll need to stop the full Cycles Session, so viewport render
and persistent images free all the memory, but we can support that
later, since it's mainly a matter of telling Blender what to do.
General rules are:
- Use as few exception handlers as possible; try to find the most
  generic place to handle those, for example ccl::Session (see the
  sketch after this list).
- Threads need their own handling; an exception trap in one thread
  will not catch exceptions from other threads.
  That's why the BVH build needs its own.
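A minimal sketch of such a generic handler (names are hypothetical;
the actual Session code differs):

    #include <new>
    #include <string>

    /* Catch allocation failure at the most generic place and turn it
     * into an error message instead of a crash. */
    std::string run_render_session()
    {
        try {
            /* ... build BVH, render samples, etc. ... */
            return "";
        }
        catch (const std::bad_alloc &) {
            return "Out of memory during rendering";
        }
    }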
Reviewers: brecht, juicyfruit, dingto, lukasstockner97
Differential Revision: https://developer.blender.org/D1898
This way we prevent cracks in the model due to discontinuous normals, by using
smooth normals for displacement instead of always getting flat normals after
linear subdivision.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1916