Calculate the clipped min/max factor along the segment,
only applying it to the coordinates at the end (this gives better precision too).
Also split the input/output args.
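For illustration, a minimal sketch of the approach (names are hypothetical, and only an x-bounds clip is shown): the clip is tracked as factors along the segment, and the coordinates are touched only once at the end.

```c
/* Track the clipped range as [fac_min, fac_max] factors along the
 * segment; a fully clipped segment ends up with fac_min > fac_max. */
static void clip_segment_fac(const float p1[2], const float p2[2],
                             float xmin, float xmax,
                             float r_p1[2], float r_p2[2])
{
	float fac_min = 0.0f, fac_max = 1.0f;
	const float dx = p2[0] - p1[0];

	if (dx != 0.0f) {
		float t0 = (xmin - p1[0]) / dx;
		float t1 = (xmax - p1[0]) / dx;
		if (t0 > t1) { const float t = t0; t0 = t1; t1 = t; }
		if (t0 > fac_min) fac_min = t0;
		if (t1 < fac_max) fac_max = t1;
	}

	/* Apply the factors to the coordinates only at the very end,
	 * which avoids accumulating rounding error from repeated clips. */
	for (int i = 0; i < 2; i++) {
		r_p1[i] = p1[i] + (p2[i] - p1[i]) * fac_min;
		r_p2[i] = p1[i] + (p2[i] - p1[i]) * fac_max;
	}
}
```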
This changes the following defaults:
- Render settings:
* Samples: 100
* Preview Samples: 50
* Filter: Blackman-Harris
* Tile Order: Hilbert Spiral
- Lamp settings:
* Use MIS: On
- Material settings:
* Volume Sampling: Multiple Importance
Old files are not affected, I tested the versioning code back and forth.
More changes are to come (World, BVH...) but that needs a bit more work.
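For context, the usual shape of such versioning code, as a hedged sketch (the version numbers and the helper are illustrative, not the actual commit):

```c
/* Old files keep their behaviour: when the file was saved by an older
 * version, write the *old* defaults back explicitly, so only newly
 * created data picks up the new defaults. */
if (!MAIN_VERSION_ATLEAST(main, 276, 3)) {
	for (Scene *scene = main->scene.first; scene; scene = scene->id.next) {
		restore_old_sampling_defaults(scene); /* hypothetical helper */
	}
}
```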
Skipping the computation of the shadow pass when the diffuse color is pitch black is fine... unless
you actually need/want that shadow pass!
The 'noisy' issue with the image texture remains a bit mysterious to me currently. :/
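A minimal sketch of the fixed shadow-pass condition (the flag and names are hypothetical):

```c
/* Only skip the shadow computation when the diffuse color is black
 * AND the shadow pass itself was not requested by the user. */
if (is_zero_v3(diffuse_color) && !(pass_flag & PASS_SHADOW)) {
	return; /* safe to skip */
}
```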
The previous threshold used to prevent the Graph Editor from imploding if
presented with a flat (or nearly flat, accounting for floating point precision)
curve was too coarse, meaning that in some cases, the "View All" tool would end
up behaving weirdly.
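A hedged sketch of the idea (the threshold value is illustrative):

```c
/* If the curve's value range is (nearly) flat, expand it around its
 * center instead of letting "View All" zoom into a degenerate,
 * near-zero height. */
if ((ymax - ymin) < 0.001f) {
	const float ycenter = 0.5f * (ymin + ymax);
	ymin = ycenter - 0.0005f;
	ymax = ycenter + 0.0005f;
}
```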
This brings the dopesheet more in line with the NLA and Graph Editors, where
similar initial ranges were also used. The benefit is that it should save
animators a small amount of time getting the dopesheet timeline into the
right zoom level before starting work.
UI_panel_category_draw_all was setting PolygonMode to LINES before
drawing LINES.
stitch_draw was setting PolygonMode to its default FILL value; any
function that deviates from the default should have changed it back to
FILL.
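The convention, as a minimal sketch in legacy GL terms:

```c
/* Any function that draws with a non-default polygon mode should
 * restore the GL_FILL default afterwards, so later draw calls are
 * not silently affected. */
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
/* ... draw outlines ... */
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); /* back to the default */
```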
This commit adds a "Select Grouped" operator. Although it is set up to
allow more types of "grouping" in future, it currently only supports
a single mode (i.e. "Same Layer"). As a result, it does not yet pop up
any menus/submenus in the usual places.
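For illustration, a hedged sketch of how the operator's mode enum might be laid out to leave room for more types (the entries here are hypothetical):

```c
static const EnumPropertyItem prop_select_grouped_types[] = {
	{1, "LAYER", 0, "Same Layer", "Shared layers"},
	/* future "grouping" types get appended here */
	{0, NULL, 0, NULL, NULL},
};
```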
Wireframe drawing doesn’t use the following GL state, so don’t update
these:
- ShadeModel
- NormalPointer
- ColorPointer
Normal & color data are still uploaded as part of the interleaved VBO,
but those attributes are disabled for wireframe.
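In legacy GL terms, roughly this hedged sketch (the index buffer is assumed to be bound already):

```c
/* The interleaved VBO still holds normals and colors, but for
 * wireframe only the vertex attribute is enabled; glShadeModel,
 * glNormalPointer and glColorPointer are left untouched. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, (const void *)0);
glDrawElements(GL_LINES, line_count, GL_UNSIGNED_INT, NULL);
glDisableClientState(GL_VERTEX_ARRAY);
```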
I've been using this fix in another branch locally, so it seems to work fine.
The other #ifdef checks should be looked at too, as __MINGW32__ and __MINGW64__
do NOT seem to be defined when compiling that file.
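A quick way to verify such a suspicion while compiling the file in question (sketch):

```c
/* If the build errors out here, the macro IS defined for this file;
 * if it compiles cleanly, the #ifdef guards can never fire. */
#ifdef __MINGW32__
#  error "__MINGW32__ is defined when compiling this file"
#endif
```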
As reported on the Blender Institute Podcast 009. See my comment on the cloud blog
for further details.
When used a second (or third, etc.) time, the breakdowner's (Shift-E) percentage value
would initially be the last-used value (e.g. 33% or 75%), before suddenly jumping
to another value as soon as the mouse moves. The cause of this behaviour was that it
was initially reusing the value from the previous time the operator was run, but then
as soon as the mouse moved, it would snap to the percentage implied by the mouse position.
(Note: The mapping from mouse position to percentage is "absolute" - i.e. the percentage
is based on how far across the 3D view the mouse is, instead of being some kind of
relative offset thing).
To make things a bit less jumpy, I've changed the behaviour so that the mouse position
always gets used immediately, instead of having it jump suddenly only when making
some mouse movement.
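A hedged sketch of the "absolute" mapping described above (the names follow Blender conventions but are illustrative here):

```c
/* The percentage comes straight from how far across the region the
 * mouse is; it is now applied immediately on invoke, instead of only
 * after the first mouse move. */
static float breakdown_percentage(const ARegion *ar, int mouse_x)
{
	return (float)(mouse_x - ar->winrct.xmin) /
	       (float)BLI_rcti_size_x(&ar->winrct);
}
```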
Apply X and Y blur as separate steps; this reduces the number of accumulations
required and makes the effect more realtime.
Another quick thing for the Nieve project.
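A minimal sketch of the separable idea (hypothetical helper; blur_y is the same with x/y swapped): a kernel of radius r then costs 2*(2r+1) accumulations per pixel instead of (2r+1)^2.

```c
static void blur_x(const float *src, float *dst, int w, int h, int radius)
{
	for (int y = 0; y < h; y++) {
		for (int x = 0; x < w; x++) {
			float sum = 0.0f;
			int count = 0;
			for (int k = -radius; k <= radius; k++) {
				const int xk = x + k;
				if (xk >= 0 && xk < w) {
					sum += src[y * w + xk];
					count++;
				}
			}
			dst[y * w + x] = sum / (float)count;
		}
	}
}
/* Run blur_x into a temp buffer, then blur_y from temp to dst. */
```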
Fix T47213.
There was actually no real bug here; we now just clarify in the UI that Mesh, World and Lamp samples only have an effect if we sample all lights (direct or indirect).
The simplest way of handling mirroring in multi-paint is creating a
uniform symmetric selection and relying on existing symmetric weights
to direct changes to the appropriate vertex groups. This already works
if mirror bones are selected manually, and can be made easier to use
by doing it implicitly.
Use the collective weight when blurring.
This brings blur in sync with the rest of multi-paint modifications.
Also, blur has no business calling `defvert_verify_index` anyway when
it simply wants to read weights.
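I.e. a read-only lookup along these lines (sketch):

```c
/* For a read-only operation like blur, look the weight up without
 * creating it; defvert_verify_index would add a missing group entry
 * as a side effect. */
const MDeformWeight *dw = defvert_find_index(dvert, def_nr);
const float weight = dw ? dw->weight : 0.0f;
```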
Similarly, the displayed weight distribution should be used as the
target of brush application instead of the invisible active group.
Multi-paint is split into a new function because all the calls to
defvert_verify_index at the top of the old code should never be
done in multi-paint mode.
Since the coloring uses sum or average of the weights of all selected
groups, the weight pick tool should also use that instead of reading
the weight of the single active group that you can't see.
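Roughly this hedged sketch (names illustrative):

```c
/* Sample the same collective weight the viewport displays: the sum
 * (or average) over all selected groups, not the invisible active
 * group only. */
static float collective_weight(const MDeformVert *dvert,
                               const bool *selected, int defbase_tot,
                               bool do_average)
{
	float sum = 0.0f;
	int tot = 0;
	for (int i = 0; i < defbase_tot; i++) {
		if (selected[i]) {
			const MDeformWeight *dw = defvert_find_index(dvert, i);
			if (dw) sum += dw->weight;
			tot++;
		}
	}
	return (do_average && tot) ? sum / (float)tot : sum;
}
```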
This condition can actually happen quite often when weight painting
a rig that uses separate control bones, so the color shouldn't be so
bright that it's hard to look at for a significant amount of time.
The threaded code is twice as quick now (from an average of 20ms/frame down to 10ms/frame
while baking 10000 particles here, e.g.)!
I think this is mostly due to the use of the 'dynamic' scheduler in the OMP code though;
from my experience so far this tends to have a dramatic effect on performance,
the static scheduler usually being much more efficient.
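The difference in a nutshell (hypothetical per-particle helper):

```c
/* With roughly uniform per-iteration work, static scheduling avoids
 * the runtime overhead dynamic scheduling pays for handing out each
 * chunk of iterations. */
#pragma omp parallel for schedule(static)
for (int i = 0; i < tot_particles; i++) {
	bake_particle(&particles[i]);
}
```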
Problem is actually similar in both engines - in some cases, we changed the
'natural' quad splitting order to an alternative one, without properly 'notifying'
UV/VCol/other tessface data about it.
So the code would use a 'wrong' triangle of UVs etc.
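The two split orders, as a minimal sketch:

```c
/* Corner indices for the two ways a quad (v0..v3) can be split; UV
 * and VCol layers must be indexed with the SAME table as the vertex
 * positions, otherwise the second triangle samples the wrong corners. */
static const int quad_split_natural[2][3]     = {{0, 1, 2}, {0, 2, 3}};
static const int quad_split_alternative[2][3] = {{1, 2, 3}, {1, 3, 0}};
```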
Fix for Cycles was committed by sergey as rBa6eae7339190d1.
The issue was a discontinuity in logic when importing vertices from Blender
and then importing data layers regardless of how we split the face. Quite
interesting that we didn't notice this issue before.
Thanks Bastien for the investigation; based on D1742, but redid it to make
the patch a bit clearer to follow.
LoopTri changes in 2.76 calculated all tangents as triangles;
this gave different results, though in most cases it was hard to notice.
Though no bugs were reported, we should keep our tangents compatible with other users of mikktspace.
Specifically, when only one bone is selected and it's not really active.
(With multiple bones on the other hand that behavior is forced on,
since multi-paint can't modify purely zero weight verts and that's important.)
There are no function pointers in the OpenCL specification. For as long
as we want to support this platform, we should follow the specification.
While the code is not totally optimal now, it should not be that huge
of a performance issue on CPU, since it handles jump tables just fine, so
it's not that much extra computation here.
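In other words, the usual replacement looks roughly like this hedged sketch (node names and call signatures are hypothetical):

```c
/* An explicit switch instead of a function-pointer table; OpenCL C
 * forbids function pointers, and on CPU the compiler lowers a dense
 * switch to a jump table anyway. */
switch (node_type) {
	case NODE_CLOSURE_BSDF: svm_node_closure_bsdf(kg, sd, stack, node); break;
	case NODE_TEX_IMAGE:    svm_node_tex_image(kg, sd, stack, node);    break;
	default:                                                            break;
}
```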