blender/intern/cycles/kernel/svm/tex_coord.h

/* SPDX-License-Identifier: Apache-2.0
 * Copyright 2011-2022 Blender Foundation */

#pragma once

#include "kernel/camera/camera.h"
#include "kernel/geom/geom.h"
#include "kernel/sample/mapping.h"

CCL_NAMESPACE_BEGIN

/* Texture Coordinate Node */
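
/* The packed SVM node decodes as: node.y holds the coordinate type, node.z the
 * output offset on the SVM float stack, and node.w a per-type flag (for
 * NODE_TEXCO_OBJECT it selects an explicit transform read from the following
 * nodes via read_node_float). */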
ccl_device_noinline int svm_node_tex_coord(KernelGlobals kg,
                                           ccl_private ShaderData *sd,
                                           uint32_t path_flag,
                                           ccl_private float *stack,
                                           uint4 node,
                                           int offset)
{
  float3 data;

  uint type = node.y;
  uint out_offset = node.z;

  switch (type) {
    case NODE_TEXCO_OBJECT: {
      data = sd->P;
      if (node.w == 0) {
        if (sd->object != OBJECT_NONE) {
          object_inverse_position_transform(kg, sd, &data);
        }
      }
      else {
        Transform tfm;
        tfm.x = read_node_float(kg, &offset);
        tfm.y = read_node_float(kg, &offset);
        tfm.z = read_node_float(kg, &offset);
        data = transform_point(&tfm, data);
      }
      break;
    }
    case NODE_TEXCO_NORMAL: {
      data = sd->N;
      object_inverse_normal_transform(kg, sd, &data);
      break;
    }
    case NODE_TEXCO_CAMERA: {
      Transform tfm = kernel_data.cam.worldtocamera;

      if (sd->object != OBJECT_NONE)
        data = transform_point(&tfm, sd->P);
      else
        data = transform_point(&tfm, sd->P + camera_position(kg));
      break;
    }
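    /* Window: position mapped to normalized screen (NDC) coordinates. For
     * camera rays hitting the background through an orthographic camera, the
     * ray origin is used instead, since parallel background rays are
     * distinguished by their origin rather than by the hit position. */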
    case NODE_TEXCO_WINDOW: {
      if ((path_flag & PATH_RAY_CAMERA) && sd->object == OBJECT_NONE &&
          kernel_data.cam.type == CAMERA_ORTHOGRAPHIC)
        data = camera_world_to_ndc(kg, sd, sd->ray_P);
      else
        data = camera_world_to_ndc(kg, sd, sd->P);
      data.z = 0.0f;
      break;
    }
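    /* Reflection: the view direction I mirrored about the surface normal N,
     * R = 2 * dot(N, I) * N - I. */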
    case NODE_TEXCO_REFLECTION: {
      if (sd->object != OBJECT_NONE)
        data = 2.0f * dot(sd->N, sd->I) * sd->N - sd->I;
      else
        data = sd->I;
      break;
    }
    case NODE_TEXCO_DUPLI_GENERATED: {
      data = object_dupli_generated(kg, sd->object);
      break;
    }
    case NODE_TEXCO_DUPLI_UV: {
      data = object_dupli_uv(kg, sd->object);
      break;
    }
    case NODE_TEXCO_VOLUME_GENERATED: {
      data = sd->P;

#ifdef __VOLUME__
      if (sd->object != OBJECT_NONE)
        data = volume_normalized_position(kg, sd, data);
#endif
      break;
    }
  }

  stack_store_float3(stack, out_offset, data);
  return offset;
}
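
/* Bump mapping variants: same as svm_node_tex_coord, but evaluated at the
 * shading position offset by the ray differential in x (and in y below), so
 * that bump mapping can take finite differences of the resulting
 * coordinates. */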
ccl_device_noinline int svm_node_tex_coord_bump_dx(KernelGlobals kg,
                                                   ccl_private ShaderData *sd,
                                                   uint32_t path_flag,
                                                   ccl_private float *stack,
                                                   uint4 node,
                                                   int offset)
{
#ifdef __RAY_DIFFERENTIALS__
  float3 data;

  uint type = node.y;
  uint out_offset = node.z;

  switch (type) {
    case NODE_TEXCO_OBJECT: {
      data = sd->P + sd->dP.dx;
      if (node.w == 0) {
        if (sd->object != OBJECT_NONE) {
          object_inverse_position_transform(kg, sd, &data);
        }
      }
      else {
        Transform tfm;
        tfm.x = read_node_float(kg, &offset);
        tfm.y = read_node_float(kg, &offset);
        tfm.z = read_node_float(kg, &offset);
        data = transform_point(&tfm, data);
      }
      break;
    }
    case NODE_TEXCO_NORMAL: {
      data = sd->N;
      object_inverse_normal_transform(kg, sd, &data);
      break;
    }
    case NODE_TEXCO_CAMERA: {
      Transform tfm = kernel_data.cam.worldtocamera;

      if (sd->object != OBJECT_NONE)
        data = transform_point(&tfm, sd->P + sd->dP.dx);
      else
        data = transform_point(&tfm, sd->P + sd->dP.dx + camera_position(kg));
      break;
    }
    case NODE_TEXCO_WINDOW: {
      if ((path_flag & PATH_RAY_CAMERA) && sd->object == OBJECT_NONE &&
          kernel_data.cam.type == CAMERA_ORTHOGRAPHIC)
        data = camera_world_to_ndc(kg, sd, sd->ray_P + make_float3(sd->ray_dP, 0.0f, 0.0f));
      else
        data = camera_world_to_ndc(kg, sd, sd->P + sd->dP.dx);
      data.z = 0.0f;
      break;
    }
    case NODE_TEXCO_REFLECTION: {
      if (sd->object != OBJECT_NONE)
        data = 2.0f * dot(sd->N, sd->I) * sd->N - sd->I;
      else
        data = sd->I;
      break;
    }
    case NODE_TEXCO_DUPLI_GENERATED: {
      data = object_dupli_generated(kg, sd->object);
      break;
    }
    case NODE_TEXCO_DUPLI_UV: {
      data = object_dupli_uv(kg, sd->object);
      break;
    }
    case NODE_TEXCO_VOLUME_GENERATED: {
      data = sd->P + sd->dP.dx;

#  ifdef __VOLUME__
      if (sd->object != OBJECT_NONE)
        data = volume_normalized_position(kg, sd, data);
#  endif
      break;
    }
  }

  stack_store_float3(stack, out_offset, data);
  return offset;
#else
  return svm_node_tex_coord(kg, sd, path_flag, stack, node, offset);
#endif
}
ccl_device_noinline int svm_node_tex_coord_bump_dy(KernelGlobals kg,
                                                   ccl_private ShaderData *sd,
                                                   uint32_t path_flag,
                                                   ccl_private float *stack,
                                                   uint4 node,
                                                   int offset)
{
#ifdef __RAY_DIFFERENTIALS__
  float3 data;

  uint type = node.y;
  uint out_offset = node.z;

  switch (type) {
    case NODE_TEXCO_OBJECT: {
      data = sd->P + sd->dP.dy;
      if (node.w == 0) {
        if (sd->object != OBJECT_NONE) {
          object_inverse_position_transform(kg, sd, &data);
        }
      }
      else {
        Transform tfm;
        tfm.x = read_node_float(kg, &offset);
        tfm.y = read_node_float(kg, &offset);
        tfm.z = read_node_float(kg, &offset);
        data = transform_point(&tfm, data);
      }
      break;
    }
    case NODE_TEXCO_NORMAL: {
      data = sd->N;
      object_inverse_normal_transform(kg, sd, &data);
      break;
    }
    case NODE_TEXCO_CAMERA: {
      Transform tfm = kernel_data.cam.worldtocamera;

      if (sd->object != OBJECT_NONE)
        data = transform_point(&tfm, sd->P + sd->dP.dy);
      else
        data = transform_point(&tfm, sd->P + sd->dP.dy + camera_position(kg));
      break;
    }
    case NODE_TEXCO_WINDOW: {
      if ((path_flag & PATH_RAY_CAMERA) && sd->object == OBJECT_NONE &&
          kernel_data.cam.type == CAMERA_ORTHOGRAPHIC)
        data = camera_world_to_ndc(kg, sd, sd->ray_P + make_float3(0.0f, sd->ray_dP, 0.0f));
      else
        data = camera_world_to_ndc(kg, sd, sd->P + sd->dP.dy);
      data.z = 0.0f;
      break;
    }
    case NODE_TEXCO_REFLECTION: {
      if (sd->object != OBJECT_NONE)
        data = 2.0f * dot(sd->N, sd->I) * sd->N - sd->I;
      else
        data = sd->I;
      break;
    }
    case NODE_TEXCO_DUPLI_GENERATED: {
      data = object_dupli_generated(kg, sd->object);
      break;
    }
    case NODE_TEXCO_DUPLI_UV: {
      data = object_dupli_uv(kg, sd->object);
      break;
    }
    case NODE_TEXCO_VOLUME_GENERATED: {
      data = sd->P + sd->dP.dy;

#  ifdef __VOLUME__
      if (sd->object != OBJECT_NONE)
        data = volume_normalized_position(kg, sd, data);
#  endif
      break;
    }
  }

  stack_store_float3(stack, out_offset, data);
  return offset;
#else
  return svm_node_tex_coord(kg, sd, path_flag, stack, node, offset);
#endif
}
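
/* Normal Map Node: perturbs the shading normal with a color-encoded normal in
 * tangent, object or world space. */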
ccl_device_noinline void svm_node_normal_map(KernelGlobals kg,
                                             ccl_private ShaderData *sd,
                                             ccl_private float *stack,
                                             uint4 node)
{
  uint color_offset, strength_offset, normal_offset, space;
  svm_unpack_node_uchar4(node.y, &color_offset, &strength_offset, &normal_offset, &space);
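
  /* Decode the color from a normal map's [0, 1] range to a direction in
   * [-1, 1]. */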
  float3 color = stack_load_float3(stack, color_offset);
  color = 2.0f * make_float3(color.x - 0.5f, color.y - 0.5f, color.z - 0.5f);

  bool is_backfacing = (sd->flag & SD_BACKFACING) != 0;
  float3 N;

  if (space == NODE_NORMAL_MAP_TANGENT) {
    /* tangent space */
    if (sd->object == OBJECT_NONE || (sd->type & PRIMITIVE_TRIANGLE) == 0) {
      /* Fallback to unperturbed normal. */
      stack_store_float3(stack, normal_offset, sd->N);
      return;
    }

    /* first try to get tangent attribute */
    const AttributeDescriptor attr = find_attribute(kg, sd, node.z);
    const AttributeDescriptor attr_sign = find_attribute(kg, sd, node.w);

    if (attr.offset == ATTR_STD_NOT_FOUND || attr_sign.offset == ATTR_STD_NOT_FOUND) {
      /* Fallback to unperturbed normal. */
      stack_store_float3(stack, normal_offset, sd->N);
      return;
    }

    /* get _unnormalized_ interpolated normal and tangent */
    float3 tangent = primitive_surface_attribute_float3(kg, sd, attr, NULL, NULL);
    float sign = primitive_surface_attribute_float(kg, sd, attr_sign, NULL, NULL);
    float3 normal;

    if (sd->shader & SHADER_SMOOTH_NORMAL) {
      normal = triangle_smooth_normal_unnormalized(kg, sd, sd->Ng, sd->prim, sd->u, sd->v);
    }
    else {
      normal = sd->Ng;

      /* the normal is already inverted, which is too soon for the math here */
      if (is_backfacing) {
        normal = -normal;
      }

      object_inverse_normal_transform(kg, sd, &normal);
    }
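
    /* The sign attribute stores the handedness of the UV tangent frame;
     * B = sign * cross(normal, tangent) reconstructs the matching bitangent
     * so that (tangent, B, normal) spans the space the map was baked in. */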
    /* apply normal map */
    float3 B = sign * cross(normal, tangent);
    N = safe_normalize(color.x * tangent + color.y * B + color.z * normal);

    /* transform to world space */
    object_normal_transform(kg, sd, &N);
  }
  else {
    /* strange blender convention */
    if (space == NODE_NORMAL_MAP_BLENDER_OBJECT || space == NODE_NORMAL_MAP_BLENDER_WORLD) {
      color.y = -color.y;
      color.z = -color.z;
    }

    /* object, world space */
    N = color;

    if (space == NODE_NORMAL_MAP_OBJECT || space == NODE_NORMAL_MAP_BLENDER_OBJECT)
      object_normal_transform(kg, sd, &N);
    else
      N = safe_normalize(N);
  }

  /* invert normal for backfacing polygons */
  if (is_backfacing) {
    N = -N;
  }

  float strength = stack_load_float(stack, strength_offset);
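
  /* Lerp between the unperturbed and the mapped normal, then renormalize;
   * strength above 1 extrapolates past the mapped normal. */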
  if (strength != 1.0f) {
    strength = max(strength, 0.0f);
    N = safe_normalize(sd->N + (N - sd->N) * strength);
  }

  if (is_zero(N)) {
    N = sd->N;
  }

  stack_store_float3(stack, normal_offset, N);
}
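
/* Tangent Node: outputs a surface tangent, either from a UV map attribute or
 * as a radial direction around a chosen axis. */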
ccl_device_noinline void svm_node_tangent(KernelGlobals kg,
                                          ccl_private ShaderData *sd,
                                          ccl_private float *stack,
                                          uint4 node)
{
  uint tangent_offset, direction_type, axis;
  svm_unpack_node_uchar3(node.y, &tangent_offset, &direction_type, &axis);

  float3 tangent;
  float3 attribute_value;
  const AttributeDescriptor desc = find_attribute(kg, sd, node.z);
  if (desc.offset != ATTR_STD_NOT_FOUND) {
    if (desc.type == NODE_ATTR_FLOAT2) {
      float2 value = primitive_surface_attribute_float2(kg, sd, desc, NULL, NULL);
      attribute_value.x = value.x;
      attribute_value.y = value.y;
      attribute_value.z = 0.0f;
    }
    else {
      attribute_value = primitive_surface_attribute_float3(kg, sd, desc, NULL, NULL);
    }
  }

  if (direction_type == NODE_TANGENT_UVMAP) {
    /* UV map */
    if (desc.offset == ATTR_STD_NOT_FOUND) {
      stack_store_float3(stack, tangent_offset, zero_float3());
      return;
    }
    else {
      tangent = attribute_value;
    }
  }
  else {
    /* radial */
    float3 generated;

    if (desc.offset == ATTR_STD_NOT_FOUND)
      generated = sd->P;
    else
      generated = attribute_value;
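
    /* Tangent of a circle around the chosen axis: perpendicular both to the
     * axis and to the radial offset of the generated coordinates from their
     * center at 0.5. */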
    if (axis == NODE_TANGENT_AXIS_X)
      tangent = make_float3(0.0f, -(generated.z - 0.5f), (generated.y - 0.5f));
    else if (axis == NODE_TANGENT_AXIS_Y)
      tangent = make_float3(-(generated.z - 0.5f), 0.0f, (generated.x - 0.5f));
    else
      tangent = make_float3(-(generated.y - 0.5f), (generated.x - 0.5f), 0.0f);
  }

  object_normal_transform(kg, sd, &tangent);
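
  /* Orthogonalize against the shading normal: the double cross product is the
   * normalized projection of the tangent onto the plane perpendicular to N. */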
  tangent = cross(sd->N, normalize(cross(tangent, sd->N)));

  stack_store_float3(stack, tangent_offset, tangent);
}

CCL_NAMESPACE_END