This commit implements traversal for the QBVH tree, which is based on the old
loop code for the traversal itself and on Embree for the node intersection.
This commit also makes some changes to the loop, inspired by Embree:
- Visibility flags are only checked for primitives (see the sketch after this list).
Doing the visibility check for every node costs a noticeable amount of time,
and in most cases those checks pass anyway.
Another idea would be to do visibility checks for leaf nodes only, but
this would need to be investigated further.
- For the minimum hair width we extend all the nodes' bounding boxes.
Again, doing the curve visibility check for each of the nodes is quite costly,
and those checks return true for most of the hierarchy anyway.
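
A minimal sketch of the first point (illustrative types and stub intersect_*
helpers, not the actual kernel code): inner nodes are intersected purely on
their bounds, and the visibility mask is only tested once a primitive is
reached.

    #include <cstdint>
    #include <stack>
    #include <vector>

    struct Primitive { uint32_t visibility; /* geometry data omitted */ };

    struct QBVHNode {
        /* > 0: index of an inner node, < 0: ~index of a primitive, 0: empty slot.
         * Per-child bounding boxes would live here too; omitted for brevity. */
        int child[4];
    };

    /* Stubs standing in for the real ray/box and ray/primitive tests. */
    static bool intersect_node_bounds(const QBVHNode& /*node*/, int /*i*/) { return true; }
    static bool intersect_primitive(const Primitive& /*prim*/) { return true; }

    static bool traverse(const std::vector<QBVHNode>& nodes,
                         const std::vector<Primitive>& prims,
                         uint32_t ray_visibility)
    {
        std::stack<int> todo;
        todo.push(0); /* root */
        bool hit = false;

        while (!todo.empty()) {
            const QBVHNode& node = nodes[todo.top()];
            todo.pop();

            for (int i = 0; i < 4; i++) {
                if (node.child[i] == 0)
                    continue; /* empty child slot */

                /* No per-node visibility test, only the bounds check. */
                if (!intersect_node_bounds(node, i))
                    continue;

                if (node.child[i] > 0) {
                    todo.push(node.child[i]);
                }
                else {
                    const Primitive& prim = prims[~node.child[i]];
                    /* The visibility mask is only checked here, at the primitive. */
                    if (prim.visibility & ray_visibility)
                        hit = intersect_primitive(prim) || hit;
                }
            }
        }
        return hit;
    }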
There are a number of possible optimizations still, but the current state is
good enough in the sense that it already makes rendering a little bit faster
after the recent watertight commit.
Currently QBVH is only implemented for CPUs with at least SSE2 support. All
other devices would need to be supported later (if that makes sense from a
performance point of view).
The code is enabled for compilation in the kernel, but Blender will not use it
yet.
This way there are far fewer cross-references between the object and mesh
device update functions.
The only thing remaining is the object bounds calculation, which is needed
by the BVH update. This could also be decoupled, but it's not that crucial
yet because it's how it has been for ages now.
This adds an AABB collision check for objects with volumes, and if a collision
is detected the object gets the SD_OBJECT_INTERSECTS_VOLUME flag.
This solves a speed regression introduced by the fix for T39823, by skipping
the volume stack update in cases where no volume intersects the current SSS object.
The idea is to only count intersections with objects which have a volumetric
shader and to ignore all other objects.
This is probably as fast as we can go without involving some fourth-level magic.
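
A rough sketch of the check, with illustrative BoundBox/flag names rather than
the exact Cycles code:

    #include <vector>

    struct float3 { float x, y, z; };
    struct BoundBox { float3 min, max; };

    /* Stands in for SD_OBJECT_INTERSECTS_VOLUME. */
    enum ObjectFlag { OBJECT_INTERSECTS_VOLUME = 1 };

    /* Standard axis-aligned box overlap test. */
    static bool bounds_intersect(const BoundBox& a, const BoundBox& b)
    {
        return a.min.x <= b.max.x && a.max.x >= b.min.x &&
               a.min.y <= b.max.y && a.max.y >= b.min.y &&
               a.min.z <= b.max.z && a.max.z >= b.min.z;
    }

    /* During sync: flag objects whose bounds touch any volume object's bounds,
     * so the volume stack update can be skipped for everything else. */
    static int object_volume_flag(const BoundBox& object_bounds,
                                  const std::vector<BoundBox>& volume_bounds)
    {
        for (const BoundBox& vb : volume_bounds)
            if (bounds_intersect(object_bounds, vb))
                return OBJECT_INTERSECTS_VOLUME;
        return 0;
    }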
Now we do a much better preliminary check of whether the panoramic camera is
inside the volume object's bounds.
Also, we now cache has_volume in the mesh, which avoids unneeded iteration
over each object's shaders.
Should be no functional changes, just faster sync and panoramic-in-volume
rendering.
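
A small sketch of the caching side, with illustrative Shader/Mesh types rather
than the actual Cycles classes; the flag is computed once per mesh instead of
walking the shader list again for every object:

    #include <vector>

    struct Shader { bool has_volume; };

    struct Mesh {
        std::vector<Shader*> used_shaders;
        bool has_volume = false;

        /* Called once when the shader list changes, so per-object code can
         * simply read the cached flag. */
        void update_has_volume()
        {
            has_volume = false;
            for (const Shader* shader : used_shaders) {
                if (shader->has_volume) {
                    has_volume = true;
                    break;
                }
            }
        }
    };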
tri_shader no longer needs to be a float.
Reviewers: dingto, sergey
Reviewed By: dingto, sergey
Subscribers: dingto
Projects: #cycles
Differential Revision: https://developer.blender.org/D789
This was originally caused by a6ae12a, where I didn't foresee that the unclear
distinction between empty and non-synced meshes would give issues for the
viewport. They're the same for final rendering, but for the viewport we need
to be accurate here.
The root of the issue goes back to the on-the-fly normals commit; the
latest fix for it wasn't actually correct, since I had mixed two fixes
together in there.
So the idea here goes back to storing a negative-scale object flag
and flipping the runtime-calculated normal if this flag is set, which is
pretty much the same as my original fix for the issue.
The issue with motion blur wasn't caused by the runtime normals
patch; it had issues before, because it already did the runtime
normals calculation. Motion triangles now take the negative-scale
flag into account.
This actually makes the code cleaner imo and avoids the rather confusing
flipping code in mesh.cpp.
Fix T41115: Motion Blur renders Objects Black - But not in Viewport Preview
This actually extends the previous fix to normals and makes it all much nicer now.
Worth doing some intense testing; a quick one worked just fine, but there could
always be some corner cases.
* Added support for uchar4 attributes to Cycles' attribute system.
* This is used for Vertex Colors now, which saves some memory (4 unsigned characters instead of 4 floats); see the conversion sketch below.
* GPU Texture Limit on sm_20 and sm_21 decreased from 95 to 94, because we need a new texture for the uchar4 attributes. This is no problem for sm_30 or newer.
Part of my GSoC 2014.
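
Roughly where the saving comes from (a sketch with illustrative uchar4/float4
helpers, not the actual Cycles utility functions): colors are stored as 4 bytes
per vertex and converted back to floats on lookup.

    #include <cstdint>

    struct uchar4 { uint8_t x, y, z, w; };
    struct float4 { float x, y, z, w; };

    /* 4 bytes per vertex color instead of 16 (4 floats). */
    inline uchar4 color_float4_to_uchar4(const float4& c)
    {
        return { uint8_t(c.x * 255.0f + 0.5f), uint8_t(c.y * 255.0f + 0.5f),
                 uint8_t(c.z * 255.0f + 0.5f), uint8_t(c.w * 255.0f + 0.5f) };
    }

    inline float4 color_uchar4_to_float4(const uchar4& c)
    {
        const float inv = 1.0f / 255.0f;
        return { c.x * inv, c.y * inv, c.z * inv, c.w * inv };
    }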
Instead of pre-calculation and storage, we now calculate the face normal during render.
This gives a small slowdown (~1%) but decreases memory usage, which is especially important for GPUs,
where you have limited VRAM.
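
The on-the-fly computation is essentially a cross product of two triangle
edges; a self-contained sketch (not the actual kernel code):

    #include <cmath>

    struct float3 { float x, y, z; };

    static float3 sub(const float3& a, const float3& b)
    {
        return { a.x - b.x, a.y - b.y, a.z - b.z };
    }

    static float3 cross(const float3& a, const float3& b)
    {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    /* Face normal from the three triangle vertices, computed at render time
     * instead of being read from a precomputed array. */
    static float3 triangle_face_normal(const float3& v0, const float3& v1, const float3& v2)
    {
        float3 n = cross(sub(v1, v0), sub(v2, v0));
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return { n.x / len, n.y / len, n.z / len };
    }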
Part of my GSoC 2014.
Idea and code by Brecht, many thanks!
Reviewers: brecht
Reviewed By: brecht
CC: campbellbarton, dingto
Differential Revision: https://developer.blender.org/D369
These are internally stored as 3D image textures, but are accessible like e.g.
UV coordinates through the attribute node and getattribute().
This is convenient for rendering e.g. smoke objects, where data like density is
really a property of the mesh, and it avoids having to specify the smoke object
in a texture node; instead, the material will work with any smoke domain.
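
For illustration only, assuming a dense float grid and a lookup point already
in 0..1 texture space over the volume bounds (the real kernel samples an
interpolated 3D image texture; the names here are hypothetical):

    #include <cstddef>
    #include <vector>

    /* Nearest-neighbor lookup of a density value from a voxel grid. */
    static float sample_density(const std::vector<float>& voxels,
                                int width, int height, int depth,
                                float px, float py, float pz)
    {
        auto clampi = [](int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); };

        int x = clampi(int(px * width), 0, width - 1);
        int y = clampi(int(py * height), 0, height - 1);
        int z = clampi(int(pz * depth), 0, depth - 1);

        return voxels[(size_t(z) * height + y) * width + x];
    }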
This now supports multiple steps and subframe sampling of motion.
There is one difference for object and camera transform motion blur: it still
only supports two steps there, but the transforms are now sampled at subframe
times, instead of being sampled at the previous and next frame and then
interpolated/extrapolated.
This will give different render results in some cases but it's more accurate.
Part of the code is from the summer of code project by Gavin Howard, but it has
been significantly rewritten and extended.
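
For the two-step object/camera case the change is about where the samples are
taken; a minimal sketch of the interpolation, assuming samples at the shutter
open/close times and ignoring rotation (the real code interpolates decomposed
transforms):

    struct float3 { float x, y, z; };

    /* Position for a ray time t in [0, 1], interpolated between the two
     * subframe samples taken at shutter open and shutter close. */
    static float3 motion_interpolate(const float3& at_shutter_open,
                                     const float3& at_shutter_close,
                                     float t)
    {
        return { at_shutter_open.x + (at_shutter_close.x - at_shutter_open.x) * t,
                 at_shutter_open.y + (at_shutter_close.y - at_shutter_open.y) * t,
                 at_shutter_open.z + (at_shutter_close.z - at_shutter_open.z) * t };
    }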
This does not support staying fixed while the surface deforms, but for static
meshes it should match up with the surface texture coordinates. It is
implemented as a matrix transform from object space to mesh texture space.
Making this work for deforming surfaces would be quite complicated; you might
need something like harmonic coordinates as used in the mesh deform modifier,
so it probably will not be possible anytime soon.
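
A sketch of what such a transform could look like, assuming mesh texture space
simply maps the undeformed bounds to the unit cube (the exact convention may
differ; the types here are illustrative):

    struct float3 { float x, y, z; };
    struct Transform { float m[3][4]; }; /* affine 3x4 matrix */

    /* p_tex = (p_object - bb_min) / (bb_max - bb_min); assumes non-degenerate bounds. */
    static Transform texture_space_transform(const float3& bb_min, const float3& bb_max)
    {
        float sx = 1.0f / (bb_max.x - bb_min.x);
        float sy = 1.0f / (bb_max.y - bb_min.y);
        float sz = 1.0f / (bb_max.z - bb_min.z);

        Transform t = {{
            { sx,   0.0f, 0.0f, -bb_min.x * sx },
            { 0.0f, sy,   0.0f, -bb_min.y * sy },
            { 0.0f, 0.0f, sz,   -bb_min.z * sz },
        }};
        return t;
    }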
* Revert r57203 (len() renaming)
There seems to be a problem with nVidia OpenCL after this and I haven't figured out the real cause yet.
Better to selectively enable native length() later, after figuring out what's wrong.
This fixes [#35612].
* Rename some math functions:
len -> length
len_squared -> length_squared
normalize_len -> normalize_length
* This way OpenCL uses its inbuilt length() function, rather than our own. The other two functions have been renamed for consistency.
* Tested CPU, CUDA and OpenCL compile, should be no functional changes.
Should be no functional changes yet. UV, tangent and intercept are now stored
as attributes, with the intention to add more, like multiple UVs, vertex
colors, generated coordinates and motion vectors, later.
Things got a bit messy due to having both triangle and curve data in the same
mesh data structure, which also gives us two sets of attributes. This will get
cleaned up when we split the mesh class.
Patch [#33445] - Experimental Cycles Hair Rendering (CPU only)
This patch allows hair data to be exported to Cycles and introduces a new line segment primitive to render with.
The UI appears under the Particle tab and there is a new Hair Info node available.
It is only available under the Experimental feature set and for CPU rendering.
operator< had wrong brackets; changed it now to be clearer.
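
The actual struct isn't shown here, but the class of bug looks like this
(illustrative only):

    struct Key {
        int a, b;

        /* Explicit brackets make the intended lexicographic order unambiguous;
         * with them misplaced, the || / && grouping silently changes the ordering. */
        bool operator<(const Key& other) const
        {
            return (a < other.a) || (a == other.a && b < other.b);
        }
    };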
Fix #33404: crash in GPU rendering with the OSL option still enabled. There was a check
to disable OSL in this case, but it shouldn't have modified scene->params, because
that is used for comparison in scene->modified().
rays from the OSL shader. The "shade" parameter is not supported currently, but
attributes can be retrieved from the object that was hit using the
getmessage("trace", ..) function.
As mentioned in the OSL specification, this function can't be used as a replacement
for lighting; the main purpose is to allow shaders to "probe" nearby geometry, for
example to apply a projected texture that can be blocked by geometry, apply
more “wear” to exposed geometry, or make other ambient occlusion-like effects.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/OSL#Trace
Example .blend and render:
http://www.pasteall.org/blend/17347
http://www.pasteall.org/pic/show.php?id=40066
* Moved kernel/osl/nodes to kernel/shaders
* Renamed standard attributes to use geom:, particle:, object: prefixes
* Update stdosl.h to properly reflect the closures we support
* Fix the wrong stdosl.h being used for building shaders
* Add geom:numpolyvertices, geom:trianglevertices, geom:polyvertices attributes