Blender had some support for using MoltenVK. However, there are key
issues that prevent MoltenVK from being used, and bugs have been reported
upstream. As it doesn't work and holds back regular development, it will
be removed from the main branch.
Any effort to make Vulkan run on Apple platforms (including KosmicKrisp)
is considered a community effort and can be done in a development
branch.
Pull Request: https://projects.blender.org/blender/blender/pulls/144602
This follows the other CMake "modernization" commits, this time for
`bf_intern_openvdb` and the OpenVDB dependency itself.
The difference with this one is that `intern/openvdb` becomes an
"optional" dependency itself. This is because downstream consumers often
want to include this dependency rather than openvdb directly, so this
target must also be optional. Optional, in this case, means the target
always exists but may be entirely empty.
Summary
- If you are using BKE APIs to access openvdb features, then use the
`bf::blenkernel` target
- If you are only using `intern/openvdb` APIs then use the
`bf::intern::optional::openvdb` target (rare)
- For all other cases, use the `bf::dependencies::optional::openvdb`
target (rare)
context: https://devtalk.blender.org/t/cmake-cleanup/30260
Pull Request: https://projects.blender.org/blender/blender/pulls/137071
Add a [Python code generator][1] that takes an OpenAPI definition and
outputs the corresponding data model as [dataclasses][2].
This is intended to be used in the Remote Asset Library project, to
create, download, parse, and validate information of a remote asset
library.
[1]: https://koxudaxi.github.io/datamodel-code-generator/
[2]: https://docs.python.org/3/library/dataclasses.html
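For illustration only, a small hypothetical schema describing an asset
library could generate a data model roughly like the sketch below; the
actual classes are determined by the Remote Asset Library's OpenAPI
definition and the generator settings.
```python
# Hypothetical sketch of what the generator's dataclass output could look
# like; class and field names here are invented for illustration.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Asset:
    name: str
    archive_url: str
    description: Optional[str] = None


@dataclass
class AssetLibrary:
    name: str
    version: str
    assets: List[Asset] = field(default_factory=list)
```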
## Running the Generator
The generator is a Python script, which creates its own Python
virtualenv, installs the dependencies it needs, and then runs the
generator within that virtualenv.
The script is intended to run via the `generate_datamodels` CMake
target. For example, `ninja generate_datamodels` in the build
directory.
## Details
The virtualenv is created in Blender's build directory, and is not
cleaned up after running. This means that subsequent runs will just
use it directly, instead of reinstalling dependencies on every run.
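A minimal sketch of this bootstrap pattern is shown below; the paths,
file names, and invocation details are assumptions for illustration, not
the actual script.
```python
# Sketch of the venv-bootstrap pattern described above (assumed paths).
import subprocess
import venv
from pathlib import Path

venv_dir = Path("build_dir") / "datamodel_codegen_venv"  # assumed location
if not venv_dir.exists():
    # Create the virtualenv once; later runs reuse it as-is.
    venv.EnvBuilder(with_pip=True).create(venv_dir)

pip = venv_dir / "bin" / "pip"                   # "Scripts/pip.exe" on Windows
codegen = venv_dir / "bin" / "datamodel-codegen"

subprocess.run([str(pip), "install", "datamodel-code-generator"], check=True)
subprocess.run(
    [
        str(codegen),
        "--input", "asset_library_openapi.yaml",   # assumed spec path
        "--output", "generated_datamodels.py",     # assumed output path
        "--output-model-type", "dataclasses.dataclass",
    ],
    check=True,
)
```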
## Generated Code & Interaction with Build System
It is my intention that the code generation _only_ happens when the
OpenAPI specification changes. This means that the generated code will
be committed to Git like any hand-written code. Building Blender will
therefore _not_ require the code generator to run. Only people working
on the area that uses the generated code will have to deal with this.
Pull Request: https://projects.blender.org/blender/blender/pulls/139495
This refactors the previous system, the main changes being:
* The `NINJA_MAX_NUM_PARALLEL_..._JOBS` variables are now only there to
allow users to override the automatic values with their own settings.
* If undefined or left at the new '0' default, CMake will compute
'optimal' values for all three pools, based on the amount of available
RAM, cores, and the type of build.
* Linking jobs can now max out at 2 instead of 1.
Pull Request: https://projects.blender.org/blender/blender/pulls/142112
* Add own simple logging system to replace glog, which is no longer
maintained by Google.
* When building in Blender, integrate with CLOG and print all messages
through that system instead.
* `--log cycles` now replaces `--debug-cycles`. The latter still works
but is no longer documented.
Pull Request: https://projects.blender.org/blender/blender/pulls/140244
With more C++ and templates being added all over our code-base, the
general memory footprint of regular and 'heavy' compile jobs has
increased over the past couple of years. Reflect that in the heuristics
used to auto-compute default 'good enough' maximum numbers of compile
and heavy-compile jobs.
This commit implements #125759.
It removes:
* The ability to build Blender on big endian systems.
* Support for opening blendfiles written from a big endian system.
It keeps:
* Support to generate thumbnails from big endian blendfiles.
* BE support in `extern` or `intern` libraries, including Cycles.
* Support to open big endian versions of third party file formats:
- PLY files.
- Some image files (cineon, ...).
Pull Request: https://projects.blender.org/blender/blender/pulls/140138
With these changes, we can now mark devices which are expected to perform
as well as possible, and devices which were not optimized for some reason,
for example because the device was released after the Blender release,
making it impossible for developers to optimize for it in already
released, unchangeable code. This is primarily relevant for the LTS
versions, which are supported for two years and require proper
communication about the optimization status of new devices released
during this time.
This is implemented for oneAPI devices. Other device types are currently
marked as optimized for compatibility with the old behavior, but may
implement the same in the future.
Pull Request: https://projects.blender.org/blender/blender/pulls/139751
Pretty bare bones, but it gets the job done. Unlike the GCC tooling,
this will work for release builds. The performance cost is on the high
side: the full test suite takes over an hour for me with code coverage
support enabled on a release build. I have not timed a debug build.
Given that developers can just run the tests for the area they are
working on to get coverage data, I feel this is still useful tooling
to have.
This adds the following three targets for Clang, of which only
`coverage-report` is new for GCC:
- `coverage-reset` - removes the collected code coverage data and
report.
- `coverage-report` - merges the collected data and generates the
report (new for GCC).
- `coverage-show` - merges the collected data, generates the report,
and opens it in the browser.
This relies on `llvm-cov` and `llvm-profdata` being available; if they
are not found, code coverage is disabled.
Note: A full test run requires an obscene amount of disk space: a
complete test run takes about 125GB and takes 12 minutes to merge, so
provision the `COMPILER_CODE_COVERAGE_DATA_DIR` folder accordingly.
An example report is included in the PR.
Enable optimizations on Debug builds for executables used for the build
process (datatoc and glsl_preprocess) and enable unity builds for
shader preprocessing targets.
Debug: From 28.9s to 5.7s
Release: From 4.9s to 3.5s
Pull Request: https://projects.blender.org/blender/blender/pulls/138274
The `USE_EXPERIMENTAL_TESTS` variable was not exposed as an option, so
the two tests that use the option to check whether tests should be run
had to be manually enabled by changing `tests/python/CMakeLists.txt`.
This commit renames the variable to `WITH_TESTS_EXPERIMENTAL`, defaults
the option to `OFF`, and marks it as an advanced option.
Pull Request: https://projects.blender.org/blender/blender/pulls/133831
Adds the 'manifold' solver option to the Boolean geometry node and to
the Boolean modifier. This solver is about as fast as, or faster than,
the current float solver, and is robust against floating
point issues like the Exact solver. However, it currently only
works on mesh arguments that are strictly manifold.
See https://projects.blender.org/blender/blender/issues/120182
for many more details.
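As a rough illustration of using the new option from Python, the sketch
below assumes the solver is exposed on the Boolean modifier's existing
`solver` enum under a `'MANIFOLD'` identifier; the exact identifier and
UI naming may differ.
```python
# Sketch: switch a Boolean modifier to the new solver via bpy.
# 'MANIFOLD' is assumed from the option name; 'FAST' and 'EXACT' are the
# pre-existing solver identifiers.
import bpy

obj = bpy.context.object
mod = obj.modifiers.new(name="Boolean", type='BOOLEAN')
mod.object = bpy.data.objects["Cutter"]  # hypothetical second operand
mod.operation = 'DIFFERENCE'
mod.solver = 'MANIFOLD'
```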
This patch changes the default macOS x64 linker from the slower legacy
"classic" linker, to the much faster default modern macOS linker. A new
CMake option `WITH_LEGACY_MACOS_X64_LINKER` was also introduced to use
the old linker, set to OFF by default.
As an example, in testing on an Intel MacBook this change makes linking
time go from 3min25s to 10s. The reason why the legacy
linker was enforced on x64 in the first place was to silence "platform
load command not found" warnings during linking. Since this behavior is
still desired on the BuildBot, the `WITH_LEGACY_MACOS_X64_LINKER` option
is enforced in the BuildBot macOS configs.
Pull Request: https://projects.blender.org/blender/blender/pulls/134639
As of dde0765de2, jemalloc is enabled by default when found.
There is also no active plan to use it on all platforms; e.g.
on Windows, tbbmalloc is used.
When it doesn't exist, there is simply a warning:
```
-- Could NOT find JeMalloc (missing: JEMALLOC_LIBRARY JEMALLOC_INCLUDE_DIR)
-- JeMalloc not found, disabling WITH_MEM_JEMALLOC
```
Pull Request: https://projects.blender.org/blender/blender/pulls/137743
Adds a C++ based FBX importer, using the third-party ufbx library (design
task: #131304). The old Python-based importer is still there; the new one
is marked as "(experimental)" in the menu item. Drag-and-drop uses the old
Python importer; the new one is only available from the menu item.
The new importer is generally 2x-5x faster than the old one, and often uses
less memory too. There's potential to make it several times faster still.
- ASCII FBX files are supported now
- Binary FBX files older than 7.1 (SDK 2012) version are supported now
- Better handling of "geometric transform" (common in 3dsmax), which
previously manifested as wrong rotation for some objects in a hierarchy
(e.g. #131172)
- Some FBX files that the old importer was failing to read are supported now
(e.g. cases 47344, 134983)
- Materials import more shader parameters (IOR, diffuse roughness,
anisotropy, subsurface, transmission, coat, sheen, thin film) and shader
models (e.g. OpenPBR or glTF2 materials from 3dsmax imports much better)
- Importer now creates layered/slotted animation actions. Each "take"
inside the FBX file creates one action, and each animated object within
it gets a slot.
- Materials that use the same texture several times no longer create
duplicate images; the same image is used
- Material diffuse color animations were imported, but they only animated
the viewport color. Now they also animate the nodetree base color.
- "Ignore Leaf Bones" option no longer ignores leaf bones that are actually
skinned to some parts of the mesh.
- The previous importer was creating orphan, invisible Camera data objects
for some files (mostly from MotionBuilder?); the new one properly creates
these cameras.
Import settings that existed in the Python importer but are NOT DONE in
the new one (mostly because it is not clear whether they are useful, and
no one has asked for them in feedback yet):
- Manual Orientation & Forward/Up Axis: not sure if actually useful. FBX
file itself specifies the axes fairly clearly. USD/glTF/Alembic also do
not have settings to override them.
- Use Pre/Post Rotation (defaults on): feels like it should just always be
on. ufbx handles that internally.
- Apply Transform (defaults off, warning icon): not sure if needed at all.
- Decal Offset: Cycles specific. None of the other importers have it.
- Automatic Bone Orientation (defaults off): feels like the current
behavior often produces "nonsensical bones" where the bone direction
does not go towards the children with either setting. There are
discussions within the I/O and Animation modules about different ways of
visualizing bones and/or different bone length axes that would solve this
in general.
- Force Connect Children (defaults off): not sure when that would be useful.
On several animated armatures I tried, it turns armature animation
into garbage.
- Primary/Secondary Bone Axis: again, not sure when this would be useful.
Importer UI screenshots, performance benchmark details and TODOs for later
work are in the PR.
Pull Request: https://projects.blender.org/blender/blender/pulls/132406
This was introduced during the EEVEE-Next development
cycle to keep the buildbot from failing because of EEVEE
render tests.
These have now stabilized, so we can remove this option.
Pull Request: https://projects.blender.org/blender/blender/pulls/137545
It's better for performance to use a single thread pool for all areas of
Blender, and this gets us closer to that.
Bullet, Quadriflow, Mantaflow and Ceres still contain OpenMP code, but it
was already disabled.
On macOS, our OpenMP libraries are no longer compatible with the latest
Xcode 16.3. By removing OpenMP we no longer have to solve that problem.
OpenMP was disabled for bpy module builds on Windows ARM64, which is
another problem that no longer needs to be solved.
Pull Request: https://projects.blender.org/blender/blender/pulls/136865
This commit allows the `WITH_UI_TESTS` CMake option to be used on all
platforms, not only Linux. The existing functionality to use the Weston
compositor was moved into the `WITH_UI_TESTS_HEADLESS` option. When
these tests are run with only `WITH_UI_TESTS`, a visible instance of
Blender is opened up for testing.
Pull Request: https://projects.blender.org/blender/blender/pulls/135889
Adds a new CMake option so that these tests can be run independently of
other tests, and so that existing tests that use WITH_GPU_RENDER_TESTS
are not forced to run these sculpt tests.
Pull Request: https://projects.blender.org/blender/blender/pulls/134893
This change brings the following improvements at the user level:
- Support of GPUs with gfx12 architecture
- New HIP-RT library which in addition to the gfx12 support brings
various bug-fixes.
The known limitation of gfx12 is that OpenImageDenoise does not yet
support this GPU architecture. This means that while Cycles will take
full advantage of gfx12 (including hardware-accelerated ray-tracing),
denoising will only be possible on the CPU, or on a secondary gfx11 or
older GPU. This requires a change in OIDN; it is too late to do it for
Blender 4.4, but it is something to look forward to for Blender 4.5.
The gfx12 change for the pre-compiled kernels is rather trivial,
so it comes together (in the same PR) with the bigger HIP-RT change.
On the development side, this change brings the following improvements:
- One-step compile and link (much simpler CMake rules)
- Embedding the BVH binaries in the hiprt DLL (which makes it easier to
package and load, without relying on special path configuration)
Co-authored-by: Sahar Kashi <sahar.kashi@amd.com>
Co-authored-by: Sergey Sharybin <sergey@blender.org>
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
Pull Request: https://projects.blender.org/blender/blender/pulls/133129
This adds support for Overlay tests.
There are some differences with how we handle tests for other engines:
- The renders are captured using `bpy.ops.render.opengl()`, but this
won't work on our GPU build bots.
- A single blend file can run multiple tests by outputting a txt list
with the test names.
- Each overlay test blend file requires a matching script file with
the same name inside `tests/python/overlay/`.
- To reproduce a specific test state you can run
`blender "(...)/tests/data/overlay/<test>.blend" -P "(...)/tests/python/overlay/<test>.py" -- --test <test-number>`.
Note:
The current test permutations are WIP, so reference images are not
committed to the data repo for now.
Pull Request: https://projects.blender.org/blender/blender/pulls/133879