This commit includes all the changes made for the plane tracker
in the tomato branch.
Movie clip editor changes:
- Artists can create a plane track out of multiple point
tracks which belong to the same plane (the minimum number of
point tracks is 4, the maximum is not limited).
When a new plane track is added, it gets "tracked"
across all of its point tracks, which makes it stick to the same
plane those point tracks belong to (see the scripted sketch
after this list).
- After a plane track has been added, it needs to be manually adjusted
so that it covers the feature one wants to mask/replace.
The general transform tools (G, R, S) or sliding the corners with
the mouse can be used for this. The plane corner which
corresponds to the bottom-left image corner has X/Y axes
drawn on it (red for the X axis, green for Y).
- Re-adjusting the plane corners makes the plane get "re-tracked"
for the frame range between the current frame and the next
and previous keyframes.
- Keyframes can be removed from the plane using the Shift-X
(Delete Marker) operator. However, a manual
re-adjustment or "re-track" trigger is currently still needed.
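For reference, the same workflow can be scripted; a minimal
sketch, assuming the operator is exposed as clip.create_plane_track
and plane tracks live in clip.tracking.plane_tracks (the clip and
track names below are made up):

    import bpy

    clip = bpy.data.movieclips["shot.mov"]  # hypothetical clip name
    tracking = clip.tracking

    # Select at least 4 point tracks lying on the same physical plane.
    for name in ("Track.001", "Track.002", "Track.003", "Track.004"):
        tracking.tracks[name].select = True

    # Build the plane track from the selected point tracks and "track"
    # it across the footage (the operator needs a Movie Clip Editor
    # context to run).
    bpy.ops.clip.create_plane_track()

    for plane_track in tracking.plane_tracks:
        print(plane_track.name)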
Compositor changes:
- Added a new node called Plane Track Deform (see the setup sketch
after this list).
- The user selects which plane track to use (for this they need
to select the movie clip datablock, object and track names).
- The node gets an image input, which is to be warped into
the plane.
- The node outputs:
* The input image warped into the plane.
* The plane rasterized to a mask.
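A minimal compositor-setup sketch; the node identifier and property
names used here (CompositorNodePlaneTrackDeform, tracking_object,
plane_track_name) are written from memory and may need checking:

    import bpy

    scene = bpy.context.scene
    scene.use_nodes = True
    tree = scene.node_tree

    image = tree.nodes.new("CompositorNodeImage")  # image to be warped
    deform = tree.nodes.new("CompositorNodePlaneTrackDeform")
    deform.clip = bpy.data.movieclips["shot.mov"]  # hypothetical clip
    deform.tracking_object = "Camera"              # tracking object name
    deform.plane_track_name = "Plane"              # hypothetical plane track

    composite = tree.nodes.new("CompositorNodeComposite")

    # Warp the input image into the plane; the "Plane" output socket
    # provides the plane rasterized to a mask.
    tree.links.new(image.outputs["Image"], deform.inputs["Image"])
    tree.links.new(deform.outputs["Image"], composite.inputs["Image"])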
Masking changes:
- Mask points can be parented to a plane track, which
makes them deform as if they belong to the tracked plane
(see the sketch below).
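A short scripted sketch of the same thing, assuming the existing
parent_set operator is reused for plane tracks (it needs a Movie Clip
Editor in mask mode, with the plane track active in the clip and the
mask points selected):

    import bpy

    # With the plane track active and the desired mask points selected,
    # Ctrl+P / this operator parents the points to the active track.
    bpy.ops.mask.parent_set()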
Some video tutorials are available:
- Coder video: http://www.youtube.com/watch?v=vISEwqNHqe4
- Artist video: https://vimeo.com/71727578
This is Keir's and my holiday code project :)
Now the button in the toolshelf behaves this way:
- The user clicks on "Add Marker"
- Then they click where the marker should be placed
Patch by Marcos Couto (ocf) with my own modifications.
Implements an automatic keyframe selection algorithm which uses
a couple of approaches to find the best keyframe candidates:
- First, a slightly modified version of Pollefeys's criterion is used,
which limits the correspondence ratio to the range from 80% to 100%.
This allows rejecting a keyframe candidate early, without doing heavy
math, in cases where there are not many features in common with the
first keyframe.
- The second step is based on the Geometric Robust Information
Criterion (aka GRIC), which checks whether the feature motion
between candidate keyframes is better described by a homography
or by a fundamental matrix.
To be a good keyframe candidate, the fundamental matrix needs to
describe the motion better than the homography (in this case F-GRIC
will be smaller than H-GRIC).
These two criteria are well described in this paper:
http://www.cs.ait.ac.th/~mdailey/papers/Tahir-KeyFrame.pdf
- The final step is based on estimating the reconstruction error of
a full-scene solution computed from the candidate keyframes. This
part is based on the following paper:
ftp://ftp.tnt.uni-hannover.de/pub/papers/2004/ECCV2004-TTHBAW.pdf
This step requires running a reconstruction from the candidate
keyframes and obtaining the covariance matrix of the 3D point
positions.
The reconstruction is done in a pretty straightforward way using the
other simple pipeline routines, and for covariance estimation the
pseudo-inverse of the Hessian is used, which in this case is
(J^T * J)+, where + denotes the pseudo-inverse.
The Jacobian matrix is estimated using the Ceres evaluate API.
It is also crucial to get rid of the possible gauge ambiguity, which
in our case is done by zeroing 7 eigenvalues (the number of gauge
freedoms) in the pseudo-inverse (see the sketch after this list).
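A minimal numpy sketch of that covariance step; here J is a plain
dense array standing in for the Jacobian evaluated via the Ceres API,
and the helper name is made up:

    import numpy as np

    def covariance_from_jacobian(J, gauge_freedoms=7):
        # Gauss-Newton approximation of the Hessian.
        H = J.T @ J
        # Eigen-decompose and drop the smallest eigenvalues (one per
        # gauge freedom) before inverting: this is the pseudo-inverse
        # (J^T * J)+ described above, with the gauge ambiguity removed.
        w, V = np.linalg.eigh(H)
        w_inv = np.zeros_like(w)
        keep = np.argsort(w)[gauge_freedoms:]
        w_inv[keep] = 1.0 / w[keep]
        return V @ np.diag(w_inv) @ V.T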
There's still room for improving and optimizing the code,
but we need some point to start from anyway :)
Thanks to Keir Mierle and Sameer Agarwal, who assisted a lot
in making this feature work.
Use the center of the currently visible part of the frame, instead
of the center of the whole frame, for the position of a marker which
is being added from the toolbox.
Used a separate operator for this, to keep the operators more atomic
and to avoid confusion from lots of conflicting properties.
This operator runs the tracker from the previous
keyframe to the current frame for all selected markers.
The current marker positions are treated as an initial
position guess, which can be updated by the tracker
for a better match.
This is useful in cases where a feature disappears from the
frame and then appears again. Usage in this case
is the following:
- When the feature point re-appears in the frame, manually
place a marker on it.
- Use the Refine Markers operation (which is in the Track
panel) to allow the tracker to find a better match.
Depending on the direction of tracking, use either
Forwards or Backwards refining. It's easy: if
tracking happens forwards, use Refine Forwards,
otherwise use Refine Backwards :) (A scripted sketch follows below.)
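The scripted equivalent is a single operator call; a sketch assuming
the operator is exposed as clip.refine_markers with a "backwards"
option (it needs a Movie Clip Editor context and selected markers):

    import bpy

    # Let the tracker re-match the selected markers at the current
    # frame against the previous keyframe.
    bpy.ops.clip.refine_markers(backwards=False)   # Refine Forwards
    # bpy.ops.clip.refine_markers(backwards=True)  # Refine Backwards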
This is an alternative to using the camera to scale the
scene, and it's expected to be a better solution because
scaling the camera leads to issues with the z-buffer.
I found the whole scaling thing a bit confusing,
especially for object tracking, but cleaning that up
is a somewhat different topic.
Displays information such as the current frame dimensions,
the frame number within the image sequence/movie and, in the case
of image sequence input, the current file name of the frame.
Not entirely happy with this approach, but it was requested
a lot by artists.
Made it an operator instead of automatic prefetching.
Filling the whole memory with frames is not always the
desired behavior.
Prefetching is now available via the P key, from the Clip
panel in the toolbox, or from the Clip menu.
Also enabled prefetching for non-proxied movies.
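For scripts, the same prefetch can presumably be triggered through
the operator, assuming it is exposed as clip.prefetch (a Movie Clip
Editor context is required):

    import bpy

    # Fill the frame cache for the current clip, same as pressing P
    # in the Movie Clip Editor.
    bpy.ops.clip.prefetch()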
Several major things are done in this commit:
- First of all, the logic of the modal solver was changed.
We no longer rely on the minimizer alone to take care of
guessing the rotation for a frame; instead, an analytical
rotation computation between point clouds is used to obtain
an initial rotation (a standard closed-form approach is
sketched after this list).
This rotation is then refined using the Ceres minimizer,
and now, instead of minimizing the average distance between
the points of two clouds, the reprojection error of the
point cloud onto the frame is minimized.
This gives quite a bit of precision improvement.
- The second bigger improvement is running bundle adjustment
on the result of the first step, in which we only estimate
the rotation between neighboring images and reproject markers.
This averages the error across the image sequence,
avoiding error accumulation. It also tweaks the bundles
themselves a bit for a better match.
- The last bigger improvement is support for camera
intrinsics refinement.
This allowed significantly improving the solution for
real-life footage, and the results after such refinement
are much more usable than they were before.
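For reference, one standard closed-form way to get such an initial
rotation between two corresponding point clouds is the SVD-based
(Kabsch) fit sketched below; this is only an illustration of the
idea, not necessarily the exact code used by the solver:

    import numpy as np

    def closed_form_rotation(points_a, points_b):
        # points_a, points_b: (N, 3) arrays of corresponding points.
        # Returns the rotation R minimizing sum ||R @ a_i - b_i||^2.
        A = points_a - points_a.mean(axis=0)
        B = points_b - points_b.mean(axis=0)
        U, _, Vt = np.linalg.svd(A.T @ B)
        # Guard against a reflection in the least-squares solution.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T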
Thanks to Keir for the help and code review.
Systematically add a custom id to template_list calls using the default UI_UL_list class; this one is commonly used more than once in an area, yielding collision issues if the lists do not have a custom id...
(Did not add those when I created that module, because I did not think we would actually need them in usual UI code, but it turned out I was wrong.)
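For example, when the same list type is used twice in one area, each
call should get its own id as the second template_list argument; an
illustrative panel (the ids here are arbitrary):

    import bpy

    class OBJECT_PT_double_list_example(bpy.types.Panel):
        bl_label = "Double List Example"
        bl_space_type = 'PROPERTIES'
        bl_region_type = 'WINDOW'
        bl_context = "object"

        def draw(self, context):
            layout = self.layout
            obj = context.object
            if not obj:
                return
            # Same default UIList class twice: distinct list ids keep
            # their scroll/size state from colliding.
            layout.template_list("UI_UL_list", "matslots_a",
                                 obj, "material_slots",
                                 obj, "active_material_index")
            layout.template_list("UI_UL_list", "matslots_b",
                                 obj, "material_slots",
                                 obj, "active_material_index")

    bpy.utils.register_class(OBJECT_PT_double_list_example)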
Also made some optimizations in those py gettext funcs: when i18n is disabled at build time, there's no need to do pyobject -> cstring -> pyobject conversions!
It introduces a new (py-extendable and registrable) RNA type, UIList (roughly similar to the Panel one), which currently contains only the "standard" list's scroll position and size (but may be extended to include e.g. some filtering data, etc.). This now makes lists completely independent from Panels!
This UIList has a draw_item callback which allows customizing the drawing of items from Python, which all addons can now use. Incidentally, this also greatly simplifies the C code of this widget, as we do not code any "special cases" here anymore!
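A minimal sketch of such a registrable list with a custom draw_item
(close to the standard template; class and property names here are
just an example):

    import bpy

    class MATERIAL_UL_matslots_example(bpy.types.UIList):
        def draw_item(self, context, layout, data, item, icon,
                      active_data, active_propname, index=0):
            # 'item' is a MaterialSlot here; show its material name.
            ma = item.material
            if self.layout_type in {'DEFAULT', 'COMPACT'}:
                if ma:
                    layout.prop(ma, "name", text="", emboss=False,
                                icon_value=icon)
                else:
                    layout.label(text="", icon_value=icon)
            elif self.layout_type == 'GRID':
                layout.alignment = 'CENTER'
                layout.label(text="", icon_value=icon)

    bpy.utils.register_class(MATERIAL_UL_matslots_example)

    # In a Panel's draw():
    #   layout.template_list("MATERIAL_UL_matslots_example", "",
    #                        obj, "material_slots",
    #                        obj, "active_material_index")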
To make all this work, other changes were also necessary:
* Now all buttons (uiBut struct) have a 'custom_data' void pointer, used currently to store the uiList struct associated with a given uiLayoutListBox.
* DynamicPaintSurface now exposes a new bool, use_color_preview (readonly), saying whether that surface has some 3D view preview data or not.
* The UILayout class now has four new (static) functions, to get the actual icon of any RNA object (important e.g. with materials or textures), and to get an enum item's UI name, description and icon.
* UILayout's label() func now takes an optional 'icon_value' integer parameter, which if not zero will override the 'icon' one (mandatory to use "custom" icons as generated for material/texture/... previews).
Note: not sure whether we should add that one to all UILayout's prop funcs?
Note: will update addons using template list asap.
This fixes some "regressions" introduced in rev50781 which led to a much
worse solution in some cases. Now it's possible to bring the old behavior
back.
Perhaps it's more of a temporary solution until a smarter solution is
found. But finding such a solution isn't quick, so let's bring in manual
control over reprojection usage for the time being.
In any case, imo it's nice to now have a structure which can be used to
pass different settings to the solver.
- Fix the "copy default settings from active track" operator
- Add meaningful tracking presets
API changes:
- Added an "exact" parameter to Marker.find_frame, so it's now
possible to get an estimated marker
- Added Marker.pattern_bound_box to get the pattern's bounding box
(a small usage sketch follows below)
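A small usage sketch of both additions (the track name is made up):

    import bpy

    clip = bpy.data.movieclips[0]
    track = clip.tracking.tracks["Track"]  # hypothetical track name

    # exact=True keeps the old behavior (only a marker keyed exactly at
    # this frame); exact=False may return an estimated marker instead.
    marker = track.markers.find_frame(42, exact=False)
    if marker is not None:
        bb = marker.pattern_bound_box  # (min, max) corners of pattern
        print("pattern bound box:", bb[0][:], bb[1][:])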
* Code cleanup, removed unneeded code.
* Style cleanup, don't break lines too early
(unless marked as pep8-80 or pep8-120 compliant)
* Keep 1 empty line after a layout declaration.
In contrast to start_frame (which affects where the footage actually
starts to play, and also affects all data associated with a clip
such as motion tracking, reconstruction and so on), this slider only
affects the way a frame number is mapped to a filename, without
touching any kind of tracking data.
The formula is:
file_name = clip_file_name + frame_offset - (start_frame - 1)
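A small numeric sketch of that mapping, just evaluating the formula
as written (the helper name is made up):

    def mapped_file_number(clip_file_name, frame_offset, start_frame):
        # file_name = clip_file_name + frame_offset - (start_frame - 1)
        return clip_file_name + frame_offset - (start_frame - 1)

    # e.g. clip_file_name = 1, frame_offset = 10, start_frame = 1
    print(mapped_file_number(1, 10, 1))  # -> 11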
- Display the track's reprojection error in the dopesheet
- Make sure the track is selected when clicking on a dopesheet channel
- Attempt to make the headers a bit cleaner, without long labels which
don't actually make sense.
It was a bit confusing to synchronize the settings used for the
pre-calculated dopesheet channels (which were stored in the tracking
data) with the settings used for display (which live in the space data).
This was initially done by converting one set of flags to the other and
checking whether the space's settings matched the pre-calculated ones,
but that had several issues if two different dopesheets were using
different settings:
- Channels would be re-calculated on every redraw for each of the spaces
- Dopesheet operators could fail because they could be using channels
calculated for another space.
It was also quite nasty code which checked whether the requested
settings matched the pre-calculated ones.
Added an option to use a Grease Pencil datablock as a mask for the
pattern when doing motion tracking. The option can be found in the
Tracking Settings panel.
All strokes are rasterized separately from each other, and every
stroke is treated as a closed spline.
Also added an option to apply the mask to the track preview, which is
situated just after the B/B/W channel buttons under the track preview.
===========================================
Major list of changes done in the tomato branch:
- Add a planar tracking implementation to libmv
This adds a new planar tracking implementation to libmv. The
tracker is based on Ceres[1], the new nonlinear minimizer that
Sameer and I released from Google as open source. Since
the motion model is more involved, the interface is
different from the RegionTracker interface used previously
in Blender.
The start of a C API in libmv-capi.{cpp,h} is also included.
- Migrate from pat_{min,max} for markers to 4 corners representation
Convert markers in the movie clip editor / 2D tracker from using
pat_min and pat_max notation to using a more general, 4-corner
representation.
There is still considerable porting work to do; in particular,
sliding from the preview widget does not work correctly for rotated
markers.
All other areas should now use the new representation:
* Added support for sliding individual corners. LMB slide + Ctrl
scales the whole pattern.
* S scales the whole marker, S-S scales the pattern only.
* Added support for marker rotation, which currently rotates
only patterns around their centers, or all markers around the median.
Rotation or other non-translation/scaling transformations of the
search area don't make sense.
* The Track Preview widget displays the transformed pattern which
libmv actually operates on.
- "Efficient Second-order Minimization" for the planar tracker
This implements the "Efficient Second-order Minimization"
scheme, as supported by the existing translation tracker.
This increases the amount of per-iteration work, but
decreases the number of iterations required to converge and
also increases the size of the basin of attraction for the
optimization.
- Remove the use of the legacy RegionTracker API from Blender,
and replace it with the new TrackRegion API. This also
adds several features to the planar tracker in libmv:
* Do a brute-force initialization of tracking similar to "Hybrid"
mode in the stable release, but using all floats. This is slower
but more accurate. It is still necessary to evaluate if the
performance loss is worth it. In particular, this change is
necessary to support high bit depth imagery.
* Add support for masks over the search window. This is a step
towards supporting user-defined tracker masks. The tracker masks
will make it easy for users to make a mask for e.g. a ball.
This is not exposed in the interface yet.
* Add Pearson product-moment correlation coefficient checking (aka
"Correlation" in the UI). This causes tracking failure if the
tracked patch is not linearly related to the template.
* Add support for warping a few points in addition to the supplied
points. This is useful because the tracking code deliberately
does not expose the underlying warp representation. Instead,
warps are specified in an aparametric way via the correspondences.
- Replace the old style tracker configuration panel with the
new planar tracking panel. From a user's perspective, this means:
* The old "tracking algorithm" picker is gone. There is only 1
algorithm now. We may revisit this later, but I would much
prefer to have only 1 algorithm. So far no optimization work
has been done so the speed is not there yet.
* There is now a dropdown to select the motion model. Choices:
* Translation
* Translation, rotation
* Translation, scale
* Translation, rotation, scale
* Affine
* Perspective
* The old "Hybrid" mode is gone; instead there is a toggle to
enable or disable translation-only tracker initialization. This
is the equivalent of the hybrid mode before, but rewritten to work
with the new planar tracking modes.
* The pyramid levels setting is gone. At a future date, the planar
tracker will decide to use pyramids or not automatically. The
pyramid setting was ultimately a mistake; with the brute force
initialization it is unnecessary.
- Add light-normalized tracking
Added the ability to normalize patterns by their average value while
tracking, to make them invariant to global illumination changes.
Additional details can be found on the wiki page [2]. (A scripted
sketch of the new per-track options is included at the end of this
log.)
[1] http://code.google.com/p/ceres-solver
[2] http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Motion_Tracker
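As a footnote to the summary above, a minimal scripted sketch of the
new per-track options; the RNA names used here (motion_model,
use_brute, use_normalization, correlation_min, pattern_corners) are
written from memory and may differ:

    import bpy

    clip = bpy.data.movieclips[0]

    for track in clip.tracking.tracks:
        # Motion model dropdown: 'Loc', 'LocRot', 'LocScale',
        # 'LocRotScale', 'Affine' or 'Perspective'.
        track.motion_model = 'Perspective'
        # Brute-force translation-only initialization (the replacement
        # for the old "Hybrid" mode).
        track.use_brute = True
        # Light-normalized tracking: make the pattern invariant to
        # global illumination changes.
        track.use_normalization = True
        # Minimum Pearson correlation between the tracked patch and
        # the template before the track is considered failed.
        track.correlation_min = 0.75

        # Markers now store 4 pattern corners (relative to marker.co)
        # instead of pat_min/pat_max.
        for corner in track.markers[0].pattern_corners:
            print(corner[:])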