
SynthEyes 1702 User Manual

(SynthEyes 2017.02.1039 and later)


© 2003-2017 Andersson Technologies LLC

Welcome to SynthEyes, our camera-tracking system and match-moving


application. With SynthEyes, you can process your film or video shot to
determine the motion and field of view of the camera taking the shot, or track an
object moving within it. You can combine feature locations to produce an object
mesh, then extract a texture map for that object. After reviewing the match-
moved shot, inserting some sample 3-D objects and viewing the RAM playback,
you can output the camera and/or object motions to any of a large number of
popular 3-D animation programs. Once in your 3-D animation program, you can
add 3-D effects to the live-action shot, ranging from invisible alterations, such as
virtual set extensions, to virtual product placements, to over-the-top creature
insertions. With the right setup, you can capture the motion of an actor's body or
face.
SynthEyes can also help you stabilize your shots, taking full advantage of
its two and three-dimensional tracking capabilities to help you generate rock-solid
moving-camera shots. The comprehensive stabilization feature set gives you full
directorial control over stabilization.
If you work with film images, especially for TV, the stabilization system can
also help you avoid some common mistakes in film workflows that compromise
3-D tracking, rendering, and effects work.
And if you are working on a 3D stereo movie (two cameras), SynthEyes
can not only help you add 3-D effects, but it can help you match the two shots
together to reduce eyestrain.

Bonjour! Hej! Hola! Hallo! Ciao!


Welcome to all our non-native-English SynthEyes users. If you
can read this far, your knowledge of English far exceeds our
knowledge of any other language, so congratulations!
Unfortunately, we aren't able to maintain translated versions of this
manual for other languages; it is too complicated and expensive
for a specialized product. SynthEyes does feature a user-driven
system to customize the user interface (menus and dialog text) to
other languages; we call this our Community Translation Project.

Once you've checked out the basics in SynthEyes, explore our


revolutionary new Synthia English-language instructible assistant, via
the IA button at top right and the Help/Synthia PDF manual. Command
SynthEyes to perform tasks that would otherwise require programming, or just
use its voice recognition and response to save time. Perfect for those strange
one-off fixits.

Unless you are using the demo version, you will need to follow the
registration and authorization procedure described towards the end of this
document and in the online tutorial.

IMPORTANT: to minimize the chances of data loss, please read the


sections on auto-save and on file name versioning, then configure the
preferences to correspond to your desired working style!

To help provide the best user experience, SynthEyes has a Customer


Care center with automatic updates, messages from the factory, feature
suggestions, forum, and more. Be sure to take advantage of these capabilities,
available from SynthEyes's help menu.
Be sure to check out the many video tutorials on the web site. We know
many of you are visual learners, and the subjects covered in this manual are
inherently 3-D, so a quick look at the tutorials, and playing along in the program
as you read, can make the text much more accessible.

DON'T PANIC. SynthEyes is designed to integrate into the most


sophisticated visual effects workflows on the planet. So it is only
natural that you will probably see some things in this manual you do
not understand. Even if you don't understand 3-D color LUTs or the
half floating-point image format, you'll be fine! (hello, Google!) We
have worked hard to make sure that when you do not need that
functionality, everything still works, nice and simple. As you learn more
and do more, you'll likely discover that when you want to do
something, some control that hadn't made sense to you before will
suddenly be just what you need. So jump on in, the water's fine!

How to Read this Manual


Before you skip this section.... please allow us to point out that it is here
for your benefit, based on our experience providing technical support for many
years. Here are our top concepts:
Use the 'bookmarks' or Table of Contents. It will allow you to get
a quick idea what is in the manual, and jump around as you read it,
if need be. Windows: In Adobe Reader's View menu, click
Navigation Panels/Bookmarks. OS X: In Preview's View menu, turn
on "Table of Contents." Linux: View/Side Pane.
Use the search box. You'd be surprised how many mails we get
that say "I can't find XXX in the manual" where typing XXX into
Adobe Reader or Preview's search box immediately gives very
good answers. So save time and trouble and put it to work!
Read the reference sections. While the User Manual
concentrates on the tasks, techniques, and procedures used in
tracking ("what buttons to push"), the Reference portion of the

manual contains the very detailed information on each control,
command, and mouse operation ("what the buttons do"). If you see
something and don't know what it does, or want to find out if there's
a mouse operation to do what you want, the Reference section is
where to look. The User Manual does not contain all the
information in the Reference manual; a significant amount of
information may be found only in the reference manual.
Look around. You will not know to look for information about
something in the manual unless you know what SynthEyes can do
and what is in the manual. So even if you start out by reading only
the portions needed to get started, it is a good idea to skim briefly
through the table of contents (bookmarks) and even manual itself
from time to time so you know what is in there. Side benefit: you
won't be making as many new-feature requests for things that
SynthEyes already does!
It's a PDF. This may be the perfect time to justify an eBook reader
to your boss/significant other. You can save the manual and read it
at your leisure.

Additional Manuals
This is the main SynthEyes User Manual. There are additional specialized
manuals as well. All are available from the Help menu within SynthEyes, or in the
folder associated with the application, ie \Program Files\Andersson Technologies
LLC\SynthEyes, /Applications/SynthEyes, or /opt/SynthEyes.
SynthEyes User Manual. Covers basic operation, automatic and
supervised tracking, solving, coordinate systems, exporting, etc etc
etc, plus reference material. You're reading it now.
Planar Tracking Manual. Covers 2- and 3-D Planar Tracking and
specialized exports for planar tracking.
Geometric Hierarchy Tracking Manual. More advanced manual
showing how to set up hierarchies of object tracks.
Camera Calibration Manual. Shows how to calibrate cameras using a
variety of techniques, such as calibration grids and vignetting.
Phase Reference Manual. Describes the solver phase nodal system
in more detail, and covers the operation of each of the types of phases.
Synthia User Manual. Synthia is SynthEyes's advanced instructible
assistant; Synthia can save you a lot of time and trouble doing odd
jobs.
Sizzle Scripting Language Manual. Describes the Sizzle
programming language used for import, export, and tool scripts within
SynthEyes. It's a simple language that should be easy to pick up for
anyone with a little experience with scripting languages.
SyPy Python Reference Manual. Describes the SyPy module for
scripting SynthEyes from within any Python interpreter: a standalone

interpreter, a development IDE, windowing environment, or 3rd party
animation application.

Contents
Quick Start: Automatic Tracking
Quick Start: Supervised Tracking
Quick Start: 3-D Planar Tracking
Quick Start: Stabilization
Shooting Requirements for 3-D Effects
Types of Shots: Gallery of Horrors
Basic Operation
Opening the Shot
Automatic Tracking
Supervised Tracking
Fine-Tuning the Trackers
Checking the Trackers
Nodal Tripod-Mode Shots
Lenses and Distortion
Running the 3-D Solver
3-D Review
Cleaning Up Trackers Quickly
Setting Up a Coordinate System
Post-Solve Tracking
Advanced Solving Using Phases (Pro, not Intro)
Perspective Window
Exporting to Your Animation Package
Realistic Compositing for 3-D
Building Meshes
Texture Extraction
Optimizing for Real-Time Playback
Troubleshooting
Combining Automatic and Supervised Tracking
Stabilization
Rotoscoping and Alpha-Channel Mattes
Object Tracking

Joint Camera and Object Tracking
Survey Shots
Multi-Shot Tracking
Stereo Movies
360 Degree Virtual Reality Tracking
Motion Capture and Face Tracking
Finding Light Positions
Curve Tracking and Analysis in 3-D
Merging Files and Tracks
Batch File Processing
Reference Material:
System Requirements
Installation and Registration
Customer Care Features and Automatic Update
Menu Reference
Control Panel Reference
Additional Dialogs Reference
Viewports Feature Reference
Overview of Standard Tool Scripts
Preferences and Scene Settings Reference
Keyboard Reference
Viewport Layout Manager
Support

Quick Start: Automatic Tracking
To get started quickly with SynthEyes, match-move the demo shot
FlyoverJPEG, downloaded from https://www.ssontech.com/download.html.
Unpack the ZIP file into a folder containing the image sequence.
An overview of the tracking process looks like this:
Open the shot and configure SynthEyes to match the source
Create 2-D trackers that follow individual features in the image, either
automatically or under user supervision.
Analyze (solve) the 2-D tracks to create 3-D camera and tracker data.
Set up constraints to align the solved scene in a way useful for adding
effects. This step can be done before solving as well.
Export to your animation package.
Start SynthEyes from the shortcut on your desktop, and on the main menu
select File/New or File/Import/Shot. In the file-selection dialog that opens, select
the first frame, FLYOVER0000.JPG, of the shot from the folder you have
downloaded it to.
The shot settings panel will appear.

Note: Screenshots in this manual and online tutorials are usually from
Windows; the OS X and Linux interfaces are slightly different in
appearance due to different fonts and window close buttons, but not
function. The user interface colors and other minor details in the
manual and tutorials may be from older SynthEyes or operating system
versions or have different preferences settings. (There are many user-
configurable elements.) And the layout changes based on the size of
the SynthEyes window; if it is full screen, based on your monitor
resolution. So just go with the flow and look around if need be!

Tip: The Max RAM Cache GB preference controls how much RAM will
be used to cache the shot for fastest access. See Opening the Shot for
more information.


You can reset the frame rate from the 24 fps default for sequences to the
NTSC rate by hitting the NTSC button, though this is not critical. The
aspect ratio, 1.333 (ie 4:3), is correct for this shot.
On the top room selector,

verify that the Summary tab is selected.


On the summary panel, click the Auto button for full-automatic processing.
Time Saver: you can click Auto as soon as you start SynthEyes, and it will
prompt you for the shot if you have not set one up already, so you do not even
have to do a File/New.
If this is the first time you've done a full-automatic solve, and hopefully it is,
you'll see the following dialog:


So that you can learn match-moving better, answer No for now. You can
adjust this later via the "Autoplace after solving" preference in the Solver section
of Edit/Edit Preferences. You can also change whether an automatic placement is
generated for a specific scene via the checkbox on the Summary panel.
A series of message boxes will pop up showing the job being processed.
Wait for it to finish. This is where your computer's speed pays off. On a 2010 Mac
Pro, the shot takes about a second to process in total, including portions before
this dialog. A 2011 MacBook Air takes about 3 seconds.

Once you see Finished solving, hit OK to close this final dialog box.
SynthEyes will switch to a five-viewport configuration that continues to show the
solver results. (Experienced users can disable switching with a preferences
setting).


Tip: don't worry if it looks like some controls are cut off the bottom of
the control panel at left. You're not missing anything. Those are
secondary copies of the controls found on the Lens panel. On a big
screen they're all visible.

On the main toolbar, change "Layout: Solver Output" to "Quad" to change


to a conventional view that is better suited for small screens.


You'll see many trackers: in the camera view, a green diamond shows the
2-D location in the image on the current frame, and a small yellow x (tracker
point) shows its location in 3-D space. In the 3-D views, the green x's show 3-D
tracker locations.
You can zoom in on any of the views, including the camera view(s), using
the middle mouse scroll and middle-mouse pan to see more detail. (You can also
right-drag for a smooth zoom.)
If you are trying to use a laptop trackpad, or are having other problems
using the middle-mouse button, please see the section on Middle Mouse Button
Problems. Note that tracking is an intensive high-precision task; trackpads are
designed only for basic browsing and word processing, so an external mouse is
highly recommended for all but the most trivial 3-D tracking.
The status line will show the zoom ratio in the camera view, or the world-
units size of any 3-D viewport. You can Control-HOME to re-center all four
viewports. See the Window Viewport Reference for more such information.
The error curve mini-view at bottom right shows color-coded tracking error
over the length of the shot, and an overall value.
In the main viewports, look at the Left view in the lower left quadrant. The
green marks show the 3-D location of features that SynthEyes located. In the Left
view, they fall on a diagonal running top-left to lower-right. [If the points are flat
on the ground plane, you previously enabled the automatic placement tool; you

can still continue with this tutorial.] The trackers are located relative to the initial
default camera position: the images contain NOTHING to say that the trackers
should be located anywhere in particular in the scene. You can move the
camera and trackers all around in 3D, and everything will still line up perfectly in
the camera view (and SynthEyes gives you a manual alignment method to do
exactly this).
Since most of these points are on the ground in the scene, we'd like them
to fall on the ground plane of the animation environment. SynthEyes provides
tools to let you eyeball it into place, or automatically place it, but there's a more
accurate way...

Switch to the Coordinates room. The coordinate system panel is used


to align and scale the solved scenes. (Note that the behavior is a little different
depending on whether you click on the icon or the text: clicking on the text name
of a room updates both the panel and view configuration, if needed, while clicking
on the icon changes only the panel, not the view configuration.)
Refer to the picture below for the location of the 3 trackers labeled in red.
We will precisely align these 3 trackers to become the ground plane. Note that
the details of which trackers are present may change somewhat from version to
version.

Begin by clicking the *3 button at top right of the coordinate system


panel. Next, click on the tracker labeled 1 (above) in the viewport. On the control
panel, the tracker will automatically change from Unconstrained to Origin.
In this example, we will use trackers (1 and 2) aligned left to right. The
coordinate system mini-wizard (*3 button) handles points aligned left to right or
front to back. (By default it is LR; you could click again to get FB.)
Click the tracker labeled 2, causing it to change to Lock Point. The X field
above it will change to 20.

13
QUICK START: STABILIZATION

Select the tracker labeled 3, slightly right of center. A dialog box will
appear that we'll discuss in a moment. The tracker's settings will have already
changed from Unconstrained to On XY Plane (ie the ground plane).
Why are we doing all this? The choice of trackers to use, the overall size
(determined by the 20 value above), and the choice of axes are arbitrary: they are up
to you, chosen to make your subsequent effects easier. See Setting Up the Coordinate System
for more details on why and how to set up a coordinate system. Note that
SynthEyes scene settings and preferences allow you to change how the axes
are oriented to match other programs such as Maya or Lightwave: ie a Z-up or Y-
up mode. This manual's examples are in Z-Up mode unless otherwise noted; the
corresponding choices for one of the Y-Up modes should be fairly evident.
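To see why three well-separated trackers pin down the entire coordinate system, here is a minimal stand-alone sketch, in Python with NumPy, of the kind of computation involved. It is an illustration only, not SynthEyes's internal code; the function name and sample coordinates are made up.

    import numpy as np

    def frame_from_three_trackers(origin, on_x_axis, on_ground, x_distance=20.0):
        """Derive scale and orientation from three solved tracker positions.

        origin     -- the tracker set to Origin (becomes 0,0,0)
        on_x_axis  -- the Lock Point tracker, x_distance world units along +X
        on_ground  -- the third tracker, constrained On XY Plane (the ground)
        """
        x_dir = on_x_axis - origin
        scale = x_distance / np.linalg.norm(x_dir)    # world units per solved unit
        x_hat = x_dir / np.linalg.norm(x_dir)
        # All three points lie in the ground plane, so its normal becomes +Z
        # (the sign depends on the ordering of the points).
        z_hat = np.cross(x_dir, on_ground - origin)
        z_hat = z_hat / np.linalg.norm(z_hat)
        y_hat = np.cross(z_hat, x_hat)                # completes a right-handed frame
        rotation = np.vstack([x_hat, y_hat, z_hat])   # rows are the new axes
        return scale, rotation

    # A solved point p maps into the aligned scene as: scale * rotation @ (p - origin)
    scale, rotation = frame_from_three_trackers(
        np.array([1.0, 2.0, 3.0]),                    # made-up solver-space positions
        np.array([4.0, 2.5, 3.2]),
        np.array([2.0, 5.0, 3.1]))

Note that the cross product collapses to zero when the three points are collinear, which is exactly why the trackers must not lie in a straight line or be bunched together.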
After you click the third tracker you will be prompted (Apply coordinate
system?) to determine whether the scene should be re-solved to apply your new
settings. Select Yes. SynthEyes changes the solving mode (on the Solver panel)
from Automatic to Refine, so that it will update the match-move, rather than
recalculating from scratch. SynthEyes recalculates the tracker and camera
positions in a flash; click OK on the solving console that pops up.
Afterwards, the 3 trackers will be flat on the ground plane (XY plane) and
the camera path adjusted to match, as shown:

You could have selected any three points to define the coordinate system
this way, as long as they aren't in a straight line or all bunched together. The

points you select should be based on how you want the scene to line up in your
animation package.

Switch to the 3-D tab. Select the magic wand tool on the panel.
Change the mesh type drop-down at left of the wand to create a Pyramid instead
of a Box. Zoom in the Top viewport window so the tracker points are spread out.
In the Top viewport, drag out the base of a rectangular pyramid. Then click
again and drag to set its height. Use the move, rotate, and scale
tools to make the box into a small pyramid located in the vacant field. Click on
the color swatch under the wand, and select a sandy pyramid color. Click
somewhere empty in the viewport to unselect the pyramid (bright red causes lag
in LCDs).
On the View menu, turn off Show Trackers and
Tracker Appearance/Show 3-D Points, and switch to the camera viewport. You

can do that by changing the Layout selector on the


toolbar to Camera, or by clicking the tab at top right of the camera view itself.

Hit Play. If it plays too quickly, select View/Normal Speed. Note that
there will appear to be some jitter because drawing is not anti-aliased. It won't be
present when you render in your 3-D application. SynthEyes is not intended to be

a rendering or modeling system; it operates in conjunction with a separate 3-D


animation application. (You can create motion-blurred anti-aliased preview
movies from the Perspective window.)

Hit Stop. Rewind to the beginning of the shot (say with shift-A).
By far, the most common cause of sliding of an inserted object is that the
object has not been placed at the right vertical position (height) over the imagery.
Sliding is a user error, not a software error. You should compare the location of
your insert to that of other nearby trackers, in 3-D, adding a tracker at key
locations if necessary. You will also think you have sliding if you place a flat
object onto a surface that is not truly flat. Normally we would place the pyramid
more carefully.

Tip: it can be difficult to assess an insert when the shot itself is


unstable. To help, click to select a tracker near your insert. Hit the 5
key to turn on "Pan To Follow." The camera view will center on this
tracker as you play. Zoom the view to see even better. Click 5 again to
turn this mode off.

To make a preview movie, switch to the Perspective window on the View


selector. Click Lock on the right of the toolbar in the perspective view, or right-
click in the view and select Lock to Current Cam. Right-click in the view and
select Preview Movie.

Click the file-selection button at upper right and change the saved file type to Quicktime
Movie or MP4. Enter a file name for the output movie, typically in a temporary
scratch location. Click on Compression Settings and select something
appropriate. Click OK to close the compression settings. Back on the Preview

Movie Settings, turn off Show Grid, and hit Start. The preview movie will be
produced and played back in the Quicktime or Windows Media Player.
You can export to your animation package at this time, from the
File/Export menu item. Select the exporter for your animation package from the
(long) menu list. SynthEyes will prompt for the export location and file name; by
default a file with the same name as the currently-open file (flyover in this case),
but with an appropriate file extension, such as .fbx for a Filmbox file. See the link
above (to the section on exporting) for more application-specific details; generally
the defaults will be a good start.
You can have SynthEyes produce multiple exports in one operation using
the multi-export configuration dialog.

Note: the SynthEyes Demo version only exports the first six frames
and last frame of the sequence. You can only learn SynthEyes from
within SynthEyes. Use Preview Movie to see the entire shot within
SynthEyes, including your own imported models.

You can save your tracked scene now (or at any earlier time) using
File/Save. If you have auto-save turned on, you will be prompted at the first
completion of the save interval to specify a file name.
This completes this initial example, which is the quickest, though not
necessarily the best, way to go. You'll notice that SynthEyes presents many
additional views, controls, and displays for detecting and removing tracking
glitches, navigating in 3-D, handling temporarily obscured trackers, moving
objects and multiple shots, etc.
In particular, after auto-tracking and before exporting, you should always
check up on the trackers, especially using Clean Up Trackers and the graph
editor, to correct any glitches in tracking (which can result in little glitches in the
camera path), and to eliminate any trackers that are not stable. For example, in
the example flyover, the truck that is moving behind the trees might be tracked,
and it should be deleted and the solution refined (quickly recomputed).
SynthEyes also offers an automated tool for setting up a coordinate
system: the Place button on the Summary panel. If you click it now, it will
replace the coordinate system you have already set up, without affecting the
apparent size and placement of the pyramid (the pyramid can be left untouched if
you turn off Whole affects meshes on the viewport's right-click menu). Although it
is quick and easy, the Place button does not offer the same accuracy as the
three-trackers method (which itself is quite quick). The Place button can be run
automatically from the AUTO track and solve button, if the corresponding
preference is turned on.
See Realistic Compositing for 3-D for information about how to make
things you add to the scene look like they belong there.


Quick Start: Supervised Tracking


Sometimes you will need to hand track shots, add additional supervised
trackers to an automatic track, or add supervised trackers to help the automated
system with very jumpy shots. Although supervised tracking takes a bit more
knowledge, it can often be done relatively quickly, and produce results on shots
the automated method cannot handle.

Tip: Be sure to watch the online tutorial Supervised Tracking Master


Class (https://www.ssontech.com/tutembed/SuperMaster.html) which
is an extended presentation of many details of supervised tracking.

To demonstrate, manually match-move the demo shot flyover. Start


SynthEyes and select File/New or File/Import/Shot. Open the flyover shot.
The shot settings panel will appear; it should not require adjustment.
The first major step is to create trackers, which will follow selected
features in the shot. We will track in the forward direction, from the beginning of
the shot to the end, so rewind to the beginning of the shot. On shots where
features approach from the distance, it is often more convenient to track
backwards instead.

Switch to the Trackers room and click Create ( ). It will bring up the
tracking control panel.

Tip: You can create a tracker at any time by holding down the C key
and left-clicking in the camera view.

Begin creating trackers at the locations in the image below, by putting the
cursor over the location, pushing and holding the left mouse button, and
adjusting the tracker position while the mouse button is down, looking at the
tracker insides window on the control panel to put the point of the feature at
the center. Look for distinctive white or black spots in the indicated locations.

Important: Normally you should always specifically configure the


tracker size, search area size, and tracking prediction mode on the
Track menu. See the Supervised Tracking section for details.

You'll also see a tracker mini-view pop up on the camera view. You can
control when and where that pops up via preferences.

Tip: the name of each supervised tracker will be shown in the


camera, perspective, and 3D viewports, by default. You can have none
of the names shown, or the names shown for all trackers, using the
controls on the View menu.


After creating the first tracker, click the green swatch under the tracker
mini-view window and change the color to a bright yellow to be more visible. Or,
to do this after creating trackers, control-A to select them all, then click the
swatch.

Tip: there are two layouts for the tracking control panel, a small one
recommended for laptops, and a larger one recommended for high-
resolution displays, selected by the Wider tracker-view panel
checkbox in the preferences. In between, take your pick!

Once the eleven trackers are placed, you can turn off the creation tool by
clicking it again. Type control-A (command-A on Mac) to select all the trackers.

On the tracker control panel, find the spinner called Key Every. Push
and drag its value from zero to 20. This says you wish to
automatically re-key the tracker every 20 frames to accommodate changes in the
pattern.

Tip: you should have at least six trackers visible on each frame of the
shot, with substantial amounts of overlap if they do not last through
the shot. For good results, keep a dozen or so visible at all times,
spread out throughout the image, not bunched together. If the shot
moves around a lot, you may need many more trackers to maintain
satisfactory coverage.


Hit the Play button, and SynthEyes will track through the entire shot. If
you select an individual tracker, you'll be able to see a tracking figure of merit at
the bottom right in the error curve mini-view; this is only available here before
the tracker is solved (afterwards, use the graph editor). The values don't matter, only
the shape.
On this example, the trackers should stay on their features throughout the
entire shot without further intervention. You can scrub the time bar back and forth
to verify that. You will notice that one tracker has gone off-screen and been shut
down automatically. (Advanced feature hint: when the image has black edges,
you can adjust the Region-of-interest on the image preprocessing panel to save
storage and ensure that the trackers turn off when they reach the out-of-bounds
portion.) If necessary, you can reposition a tracker on any frame, setting a key
and teaching the tracker to follow the image from that location subsequently.
After tracking, with all the trackers still selected (or hit Control/command-
A), click the Lock ( ) button to lock them, so they will not re-track as you
play around (or get messed up).

Tip: you can turn on View/Timebar background/Tracker count to see a


color-coded background showing whether or not there are enough
usable trackers on each frame.

Now you will align the coordinate system. This is the same as for
automatic tracking, except performed before any solving. See Setting Up the
Coordinate System for more details on why and how to set up a coordinate
system. Switch to the Coordinates room using the toolbar. You might as well
switch the Layout to Camera since the trackers aren't solved yet, and there is
nothing to see in the 3D views.


This is a similar guide picture to that from auto-tracking, though the


trackers are in different locations. Click the *3 button, then click on tracker #1.
Click the *3 button, now reading LR, to change it to FB. Click tracker #2. Click
tracker #3. You'll get a popup saying that the coordinate system will be applied
once the scene is solved; click OK.
Now switch to the Solver room. Hit the Go! button. A display panel will
pop up, and after a fraction of a second, it should say Finished solving. Hit OK
to close the popup. You could add some objects from the 3-D panel at this time,
as in the automatic tracking example.

You can add some additional trackers now to increase accuracy. Use the
go-to-end button (or shift-F) to go to the end of the shot, and change to backward tracking by
clicking the big play-direction button on the playbar. It will change to the backwards
direction. Click to the Trackers room, and turn on the Create ( ) button.

Tip: When you play the scene, SynthEyes updates the tracking data
only for trackers that are configured for the same tracking direction as
the playback itself. See the directional indicator under the tracker mini-
view to see the direction for any given tracker(s).

Create additional trackers spread through the scene, for example only on
white spots. Switch their tracker type from a match tracker to a white-spot
tracker, using the type selection flyout button on the tracker control panel.
(Note that the Key-every spinner does not affect spot-type trackers.)


You'll see a red X in the tracker mini-view that shows the optimum location
for the spot tracker; center on it and it will snap into place. You should adjust the
Size spinners of the tracker as needed, so that the white spot fills approximately
2/3 of the field of view; otherwise the spot you are interested in will be affected
by surrounding features. Re-center afterwards.

Hit Play to track them out. The tracker on the rock pile may get off-track in
the middle since there are many similar rocks. You can correct it by
dragging and re-tracking, but it will be easiest for this one to keep it as a larger
match-type tracker. Scrub through the shot to verify the trackers have stayed on
track, then control-A to select them all, and turn on the lock.
Switch to the Solver control panel, change the top drop-down box, the
solving mode, from Automatic to Refine, and hit Go! again, then OK.

Go to the 3-D Panel, click on the create wand, change the object
type from Box (or Pyramid) to Earthling, then drag in the Top view to place an
earthling to menace this setting. Click a second time to set its size. In the
following example, the tracker on the concrete pad was used to adjust the height
of the Earthling statue; using trackers as reference is generally required to
prevent sliding, unless they were used to set up the coordinate system. You can
use pan-to-follow mode (hit the 5 key to turn it on or off) to zoom in on the tracker
(and nearby feet) to monitor their positioning as you scrub. The statue was also
rotated to face appropriately.


Typically, supervised tracking is performed more carefully, tracking a


single tracker at a time and monitoring it directly. SynthEyes generates keys
every 20 frames with the settings shown; normally such automatically-generated
keyframes are adjusted manually to prevent drifting. If you look at the individual
trackers in this example, you will see that some have drifted by the end of the
shot. Normally they are corrected, hence the term supervised tracking.
For more detailed information on supervised tracking, read the manual's
later write-up of supervised tracking, and see the online tutorial Supervised
Tracking Master Class.
Note the error curve mini-view at lower right of the screen capture above
showing that the tracking has solved reasonably accurately. Later we'll see how
to examine the solve in detail.


Quick Start: 3-D Planar Tracking


SynthEyes offers a very powerful 2- and 3-D planar tracking subsystem
with many features, so many that it has its own manual: see Planar Tracking
PDF on the SynthEyes Help menu. The 2-D capabilities serve as another flavor
of supervised tracker with additional capabilities.
The 3-D planar trackers are especially powerful, such that you can create
a full 3-D track from a single 3-D planar tracker, if a suitable flat area exists in the
scene. That's what we'll describe in this section. We'll work from a rectangular
area, which is best and easiest, though the planar tracking system can work with
other shapes too.
After opening SynthEyes, click File/New and open the shot "Tower," a 77-frame
1080p HD shot available in the Sample Files area of the website.
Click the Planar room on the top room tab area, which is where planar
trackers are created, tracked, and edited.

Tip: if you don't see the Planar room, you have upgraded and have
your own room preferences. Right-click in the room area and select
Reset All to Factory. For more info see "I Don't See the Planar Room"
in the planar tracking manual.

On the planar panel, click the creation wand at top left. Then click 3
times around the corners of the window group at top left center, starting at top-
left, then proceeding to top right and bottom right; the bottom left corner will be
last, in a moment. (Right-click to cancel any time in the middle of this process.)


As you go to create the fourth point, at lower left, you'll notice that much
more interesting things start to happen: a pyramidal display appears, and the
fourth corner no longer follows the cursor and seems to have a mind of its own!
The pyramid is literally that, a 3-D pyramid sitting on top of the rectangle
you are creating in 3-D. As you see, many of the corner locations are impossible
or non-useful, and the pyramid will flicker around wildly to different locations:

Assuming you've placed the first three points reasonably, when you place
the fourth properly, you'll see something that makes sense:


The pyramid is now projecting outwards, perpendicular to the wall. As


you've been doing this, you might have noticed the Own FOV (field of view)
display. With the corners placed, we now have a reasonable value for the
camera field-of-view, from one tracker and one frame. While you're positioning
the corner, you might notice the field of view going to wildly implausible values,
179 degrees or 1 degree. That's a normal piece of feedback.
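If you prefer to think in lens focal lengths, the standard relationship (not specific to SynthEyes) is FOV = 2·atan(w / (2·f)), where w is the film-back or sensor width and f is the focal length; for example, a 36 mm film back behind a 50 mm lens gives about 2·atan(36/100), roughly 40 degrees horizontally. That relationship lets you compare the Own FOV value to what you know about the lens that shot the footage.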

Tip: If your initial corners aren't well placed, you may not be able to
place the fourth where it should go. Hold down ALT/command while
moving the fourth; SynthEyes will adjust the first three slightly for you.

You'll also notice that the tracker mini-view is showing the interior of the
planar tracker, compressed into a nice little box. The 3-D Aspect value gives the
aspect ratio of the planar tracker in the 3-D environment (not the image!).
Now that you've placed the four corners, you can adjust them as needed,
either by moving them in the camera view, or by dragging the corners inside the
tracker mini-view. You can drag the whole tracker, or move the center of the
tracker mini-view.
You can re-aim the tracker by dragging the apex, or change the FOV or
aspect ratio with the spinners. There's a lot of functionality waiting for you in the
Planar Tracking manual.
For this quick-start, place the cursor over one of the four corners, hold
down shift, and scale the tracker up to encompass some more of the surrounding
window structure. Release shift as you drag to allow the axes to scale
independently. Then drag the whole thing a little to center it up. (Note that as you
scaled or moved the planar tracker, the FOV and 3-D aspect don't change. The
mouse operations occur in the 3-D environment; they are not 2-D effects.)


With our 3-D planar tracker positioned, it's ready to go, so hit the Play
button at center bottom of the display, or immediately above the channel
selector icon, above the tracker mini-view. SynthEyes will track it through
the shot. The additional rectangle that appears is the search region.

We now have a 3-D track. Click the lock icon to prevent redundant
calculations and inadvertent changes.
You can scrub the shot in the camera view. You can see the stabilized
pattern in the tracker mini-view, with some motion due to the non-planar covers
above the building's windows, which SynthEyes has ignored.
Change the Layout to Quad, and scrub some more. You'll see the red
rectangle moving around; that is your 3-D plane moving in space (relative to the
camera).

Change the Layout to Perspective and scrub; there's your 3-D tracker
moving. Click Lock on the mouse bar at top of the Perspective view.


Wait! Now there are two rectangles! One is the 2-D rectangle; it is located
correctly in the image. The other is the 3-D rectangle; it does not match because
the SynthEyes camera Camera01 (and thus the perspective view) is not using the
field of view required by the planar tracker.
Again, the planar tracker requires a specific field of view, from its Own
FOV spinner. Any 3-D camera view must have exactly that same FOV, if the 2-D
and 3-D rectangles are to be able to match up.

Hold down the control key and click the Planar Options tool gear.
More about that in a moment, but for now the Planar Tracking script bar will pop
up (also available normally from the Scripts menu).


Click the Save FOV as Known button (to run that script). Presto! Now
there is only a single rectangle visible. Where did the 3-D rectangle go? It's right
there now, exactly lined up with the 2-D rectangle.
The script copied the Own FOV into the Camera's Known field of view
track. You can bounce back and forth between the Lens and Planar panels to
verify that. With the right FOV in place, both rectangles match up.
Now that the FOVs match, we're ready for a 3D export. Clicking the After
Effects 3-D button will launch the exporter (or via the File/Export menu). You can
enter "tower" as the file name, and then control panel for the After Effects
exporter will appear. It's a sophisticated thing in its own right, see the section
After Effects 3-D Javascript Procedure for details. With the default settings (set
your AE version!), the scene will be exported and launch automatically in After
Effects.
That will give you a 3-D composition in After Effects with a 3-D camera
and a moving Placeholder layer that is exactly your planar tracker; you can drop
new imagery or comps into it. Further details are outside the scope of this quick-
start.
You can also click AE Corner Pin on the Planar Tracking script bar (or on
the File/Export menu). With this export, you will get a 2-D composition inside
After Effects, which may be simpler if you're more familiar with AE's 2-D
environment and only need to do a simple effect. (This export can export both 2-
D and 3-D planar trackers.)
Of course, SynthEyes supports many more applications. Instead of special
exports for each application, click the Export Prep button on the script bar (this is
the script Planar Export Preparation).
Export Prep can set up the scene as a moving camera with a single
stationary 3-D planar tracker, or as a stationary camera with multiple moving 3-D
planar trackers. Or you can run it twice for both.
For the quick-start, select Animate Camera and leave the Plane
Orientation at Front. Click OK. Change the Layout to Quad Perspective and then
scrub through the shot. As you see, the 3-D planar tracker is now stationary,
glued to the front view. The camera is now moving. The planar tracker has been
augmented with an identically-sized and -placed mesh object. (In moving-object
mode, there are additional SynthEyes moving objects, with the mesh parented to
the moving object.)
This is a much better configuration for a gravity- and velocity-dependent
particle effect, such as an animation of smoke pouring from the window. You can
move the scene around using the Whole button on the 3-D panel if desired.
This 3-D planar tracker scene, as augmented by the Export Prep script,
can be exported through a wide variety of SynthEyes's existing exporters, such
as Cinema 4D, Filmbox, etc.


Of course, this quick-start has been just that, touching on some of the
fundamentals. The planar tracking feature set is very deep. Here are some of the
issues that require more attention, and that are covered in the planar tracking
manual:
Occlusion, where actors, vehicles, etc pass between the camera
and the planar surface being tracked.
Variable lighting conditions
Multiple 3-D trackers in the same scene (all the FOVs must match,
since the camera can only have one at a time)
You can see additional controls on the Planar Options panel, found
underneath the main planar panel on high-resolution displays, or by clicking the
gear icon (without holding control).
The feature set includes quite a few different planar tracker modes, both
2-D modes and various 3-D modes, including for zoom cameras. You can also
supply your own on-set measurements for the rectangle dimensions (notice the
3-D Size spinner?) and known camera FOV.
Keep in mind that 3-D planar trackers are just one more tool in the
toolbox, not a panacea or replacement for a solid 3-D track. Planar trackers use
much smaller amounts of information from the scene, and consequently have
inherently less accuracy. But for a lot of quick effects, they can be a great tool!
Again, for full details see the separate Planar Tracking Manual, which can
be found via the Help menu of SynthEyes.


Quick Start: Geometric Hierarchy Tracking


Geometric Hierarchy Tracking means tracking known meshes (geometry)
in the scene, which can be connected together to form kinematic hierarchies. For
example, that might mean a hand connected to a forearm, upper arm, and body;
a jaw connected to a head; a door attached to a car; or even a ball bouncing in a
vertical plane across a floor.
The Geometric Hierarchy (GeoH) tracking feature set is so large that it has
its own manual, found like all of them on SynthEyes's Help menu. So we won't be
describing it in detail in this manual, and instead supply one super-simple
example.
We'll work with a shot from our friends at Hollywood Camera Work
(http://www.hollywoodcamerawork.com). They ask that we not link directly to the
files, and links tend to move around anyway, so here's what to look for: in the
Downloads section, click on Tracking Plates. In the section on Planar Tracking(!),
look for U-Haul Walk. That's the shot to use. Note that while this is a fine shot to
try planar tracking on, when we do GeoH tracking on it, we get more data, more
robustly.
In SynthEyes, File/New and select this shot. On the shot settings panel,
be sure to click the 1.777 button, since that is the proper image aspect.
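(For reference, 1.777 is the 16:9 widescreen aspect ratio, since 16/9 is approximately 1.78, whereas 1.333 is the 4:3 ratio used for the earlier flyover example.)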
Click on the word GeoHTrack on the top room bar. Click Lock on the
Mouse toolbar in the perspective view so that you see the image. If you scrub
through the shot to look at it, go back to the first frame when you're done, as that
is what the tutorial assumes.
Right-click in the perspective view and select Creation Object, then
Box. On the foreground of the ground plane, sweep out the base of a box
comparable in size to the back of the truck. Then drag again to set its height. The
exact sizing does not matter at all.
Control-drag upwards on the blue arrow to make the box's orientation a
little more similar to the truck's.
Click the Pinning Toolbar button at the bottom of the GeoH panel. You can
drag the toolbar by its title line, if it is in the way of the truck.


On the toolbar, click on Create/edit pins, FOV is fixed, Width is fixed, and
Depth is fixed, to configure the pinning operation. ("FOV is fixed" will become
"FOV changes" etc).
Holding down Control (to snap to vertices), drag each of the six main
corners of the box into position on the image. Right-drag to zoom in and out and
middle-pan as needed to do this accurately. You can and should reposition the
pins once created, but don't hold down control when doing so; that will delete
them. See the tooltip of the Create/edit pins button for details.
Note that there's a white top 2"/5 cm high at the top back of the truck; your
box should match the top of that, so that it lines up with the top of the side. You
can note the top of the truck by looking at how it occludes the other truck in the
background.
After right-clicking Pan 2D to reset the zoom, here's the aligned result.


The box is now aligned on the first frame, and we're ready to track it. Click
the red x on Pinning to close it, then click the GeoH Toolbar button.
On the GeoH toolbar, click GeoH Surface Lasso. Then shift-click the box.
A new moving object will be created, ready for GeoH tracking, and the GeoH
panel will light up. You can close the GeoHTrack toolbar.
To configure which joints should be tracked, click off the X, Y, Z, Pan, Tilt,
and Roll buttons, which changes them from locked to unlocked. All six joints are
unlocked and will be computed as a result of the tracking process.


We're now ready to track. Click the Play button on the GeoH panel or the
main toolbar, and the tracker will track through the shot. On this demo shot the
result is quick and easy.

You can look at the path using the Quad Perspective view, or switch to the
Graphs room (click the word, not the icon).
This shot is processed more easily and more accurately with GeoH tracking
than with planar tracking because of the specific fixed relationship between the
side and back of the truck. If you track either of them individually with a planar
track, the track can jitter unchecked in rotation or depth. With GeoH tracking, the
other side provides a cross-check, if you like, that constrains what either side can
do. It is that cross-checking that makes GeoH tracking powerful.
While here the reference mesh is a simple box, GeoH tracking works with
arbitrarily-shaped meshes (typically imported from your modeling software),
performing efficiently even with tens or even hundreds of thousands of facets.
The Geometric Hierarchy Tracking manual goes into much more detail,
including limitations of GeoH tracking, how to examine and refine the tracking,
and how to configure hierarchical setups.


Quick Start: Stabilization


Adding 3-D effects generally requires a moving camera, but making a
smooth camera move can be hard, and a jiggly shot often cries out "Amateur!"
SynthEyes can help you stabilize your shots for a more professional look, though
like any tool it is not a magic wand: a more stable original shot is always better.
Stabilization will sacrifice some image quality. We'll discuss the costs and
benefits of SynthEyes stabilization more fully in the later section.
We'll begin by stabilizing the shot grnfield, available from the web site.
We will do this shot one particular way for illustration, though many other options
are possible. Note that this shot orbits a feature, which will be kept in place.
SynthEyes also can stabilize traveling shots, such as a forward-looking view from
a moving car, where there is no single point that stays in view.
File/New and open the shot using the standard 4:3 defaults. You can play
through it and see the bounciness: it was shot out a helicopter door with no
mechanical stabilizing equipment.

Click the Full Automatic button on the summary panel to


track and solve the shot. If we wanted, we could track without solving, and stick
with 2-D tracks, but we'll use the more stable and useful 3-D results here.
Select the Shot/Image Preparation menu item (or hit the P key).
In the image prep viewport, drag a lasso around the half-dozen trackers in
the field near the parking lot at left (see the Lasso controls on the Edit menu for
rectangular lassos). We could stabilize using all the trackers, but for illustration
we'll stabilize this particular group, which would be typical if we were adding a
building into the field.

Click the stabilization tab, change the Translation


stabilization-axis drop-down to Peg, and the Rotation drop-down to Filter.
Reduce the Cut Frequency spinner to 0.5 Hz. This will attenuate rotation
instability, without eliminating it.

Important: it is a common novice mistake to set the filter frequency too


low for a given shaky shot in hopes of magically making it super-
smooth. When a shot contains major bumps, much or all of the entire
source image can go off screen. Start with a higher cut frequency, and
reduce it gradually so you can see the effect.
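Conceptually, Filter mode low-pass-filters the camera path: motion faster than the cut frequency is treated as jitter and removed, while slower deliberate motion is kept. Here is a minimal stand-alone sketch of that idea in Python using NumPy and SciPy; it is a conceptual illustration only, not SynthEyes's stabilization code, and the frame rate, cutoff, and fabricated tilt curve are example values.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fps = 24.0        # example frame rate
    cutoff_hz = 0.5   # the "Cut Frequency" used above

    # Fabricate a camera tilt curve: a slow deliberate move plus fast jitter.
    t = np.arange(0, 8, 1.0 / fps)
    tilt = 5.0 * np.sin(2 * np.pi * 0.1 * t) + 0.4 * np.random.randn(t.size)

    # Second-order Butterworth low-pass; cutoff is normalized to Nyquist (fps/2).
    b, a = butter(2, cutoff_hz / (fps / 2.0))
    smoothed = filtfilt(b, a, tilt)    # zero-phase filtering, so no time shift

    # "Peg" mode, by contrast, simply holds the curve at a reference value.
    pegged = np.full_like(tilt, tilt[0])

The lower the cutoff, the more of the original move is classified as jitter and removed, which is why an overly low cut frequency can push much of the source image off screen, as the note above warns.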

You should have something like this:


The image prep window is showing the stabilized output, and large black
bands are present at the bottom and left of the image, because the image has
been shifted (in a 3-D way) so that it will be stable. To eliminate the bands, we
must effectively zoom in a bit, expanding the pixels.
Hit the Auto-Scale button and that is done, expanding by almost 30%, and
eliminating the black bars. This expansion is what reduces image quality, and it
should always be minimized to the extent possible.
Use the frame number spinner in the playbar at the bottom center of the
image preparation dialog to scrub through the shot. The shot is stabilized around
the purple point of interest at left center.
You can see some remaining rotation. You may not always want to make
a shot completely stone solid. A little motion gives it some life. In this case,
merely attenuating the jitter frequency becomes ineffective because the shot is
not that long.
To better show what we're going to do next, click the Final button at right,
turning it to Padded mode. Increase the Margin spinner, below it, to 0.125.
Instead of showing the final image, we're showing where the final image (the red
outline) is coming from within the original image. Scrub through the shot a little,
then go to the end (frame 178).
Now, change the Rotation mode to Peg also. Instead of low-pass-filtering
the rotation, we have locked the original rotation in place for the length of the
shot. But now, by the end of the shot the red rectangle has gone well off the
original imagery. If you temporarily click Padded to get back to the Final image,
there are two large black missing portions.


Hit Auto-Scale again, which shrinks the red source rectangle, expanding
the pixels further. Select the Adjust tab of the image preparation window, and
look at the Delta Zoom value. Each pixel is now about 160% of its original size,
reducing image quality. Click Undo to get back to the 129% value we had before.
Unthinkingly increasing the zoom factor is not good for images.
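As a rough back-of-the-envelope check (the 720-pixel width below is just an example, not this shot's actual resolution): at a 1.6x Delta Zoom, a 720-pixel-wide output frame is filled from only about 720 / 1.6, or roughly 450, original pixels, while at 1.29x it is filled from about 560. The larger the zoom, the fewer real pixels survive into the result.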
If you scrub through the shot a little (in Padded mode) you'll see that the
image-used region is being forced to rotate to compensate for the helicopter's
path, orbiting the building site.
For a nice solution, go to the end of the shot, turn on the make-key button
at lower right, then adjust the Delta Rot (rotation) spinner to rotate the red
rectangle back to horizontal as shown.

Scrub through the shot, and you'll see that the red rectangle stays
completely within the source image, which is good: there won't be any missing
parts. In fact, you can Auto-scale again and drop the zoom to about 27%.

Warning: SynthEyes uses spline interpolation between key frames on


the adjustment tracks by default. If you set keys close together, this
can produce large excursions at other parts of the shot, driving the
image off-screen. If you must animate many keys, open the Graph
Editor to the stabilization tracks before opening the image
preprocessor, so that you can monitor what you are doing. Switch to
linear keys as needed.

Click Padded to switch back to the Final display mode, and scrub through
to verify the shot again. Note that the black and white dashed box is the
boundary of the original image in Final mode.


There's some slight blurring introduced by resampling the image. You can
compare different methods for that: click to the Rez tab, and switch the
Interpolation method back and forth from Bi-Linear to 2-Lanczos. You can see
the effect of this especially in the parking lot.

Tip: the interpolation method gives you a trade-off between a sharper
image and more artifacts, especially if the image is noisy. Bi-linear
produces a softer image with fewer artifacts, Mitchell-Netravali is a little
sharper, and then comes Lanczos-2 and the sharpest, Lanczos-3.
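For the curious, the Lanczos-a resampling kernel is a standard definition, not something specific to SynthEyes: L(x) = sinc(x)·sinc(x/a) for |x| < a and 0 otherwise, where sinc(x) = sin(πx)/(πx); a = 2 gives Lanczos-2 and a = 3 gives Lanczos-3. A wider kernel produces a sharper result but can introduce more ringing near strong edges, which is the trade-off the tip above describes.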

To playback at speed, hit OK on the Image Prep dialog. You will probably
receive a message about some (unstabilized) frames that need to be flushed
from the cache; hit OK.
You'll notice that the trackers are no longer in the right places: they are
in the right place for the original images, not the stabilized images. We'll later see
the button for this, but for now, right-click in the camera view and turn off
View/Show trackers and View/Show 3-D Points.

Hit the main SynthEyes play button, and you will see a very nicely
stabilized version of the shot.
By adding the hand-animated directorial component of the stabilization,
we were able to achieve a very nice result, without requiring an excessive
amount of zoom. [By intentionally moving the point of interest, the required
zoom can be reduced further to under 15%.]
If you look carefully at the shot, you will notice some occasional
strangeness where things seem to go out of focus temporarily. This is the motion
blur due to the camera's motion during shooting.

Important: To minimize motion blur when shooting footage that


will be stabilized, keep the camera's shutter time as small as
possible (a small shutter angle for film cameras).

Doubtless you would now like to save the sequence out for later
compositing with final effects (or maybe a stabilized shot is all you needed). Hit P

to bring the image prep dialog back up, and select the Output tab.
Click the Save Sequence button.
Click the button to select the output file type and name. Note that for
image sequences, you should include the number of zeroes and starting frame
number that you want in the first image sequence file name: seq001 or seq0000
for example. After setting any compression options, hit Start, and the sequence
will be saved.
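For example (hypothetical names), starting with seq0000 produces seq0000, seq0001, seq0002, and so on, while starting with seq001 produces seq001, seq002, etc., each with the extension of the chosen file type.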
There are a number of things which have happened behind the scenes
during this quick start, where SynthEyes has taken advantage of the 3-D solve's

field of view and many trackers to produce better results than traditional
stabilizing software.
SynthEyes has plenty of additional controls affording you directorial
control, and the ability to combine some workflow operations that normally would
be separate, improving final image quality in the process. These are described
later in the Stabilization section of the manual.

Quick Start: The Works
As a final quick start, we present a tripod shot with zoom; we will stabilize
the shot slightly.
Open the shot VFArchHalf from the web site, a representative
documentary-type shot at half-HD resolution, 960x540. (This is possibly the only
way to know what the arch actually says.) Select NTSC playback rate. If you look
carefully at the rest of the shot you will notice one thing that has been done to the
shot to make the cameraman's job easier.
Since the camera was mounted on a tripod and does not physically
translate during the shot, check the On Tripod checkbox on the Summary Panel.
The lens zooms, so check the Zoom lens checkbox.

Tip: Do not check the Zoom checkbox for all your shots "just in case."
Zoom processing is noisier and less robust.

Click the Run Auto-tracker button to generate trackers, but not solve yet.
Scrub through the beginning of the shot, and you will see a few trackers
on the moving tree branches at left. Lasso-select them (see the Lasso controls
on the Edit menu), then hit the Delete key.
Click Solve.
After solving, hit shift-C or Track/Clean Up Trackers. Click Fix to delete a
few high-error trackers.
To update a tripod-type shot, we must use the Refine Tripod mode on the
Solver panel. Change the mode from Tripod to Refine Tripod. Hit Go!
Look in the Top and Left views and notice how all of the trackers are
located a fixed distance away from the camera.
SynthEyes must do that because in a tripod shot, there is no perspective
available to estimate the distance. You can easily insert 3-D objects and get
them to stick, but aligning them will be more difficult. You can use SynthEyes's
single-frame alignment capability to help do that.

For illustration now, go to the 3-D panel and use the create tool to
create a cylinder, box, or earthling in the top view. No matter where you create it,
it will stick if you scrub through the shot. You can reposition it using the other 3-D
tools (move, rotate, and scale) however you like. You can change the
number of segments in the primitive meshes with the # button and spinners on
the 3-D panel, immediately after creation or later.
Once you finish playing, delete all the meshes you have created.
If you have let the shot play at normal playback speed, you've probably
noticed that the camera work is not the best.


Hit the P key to bring up the image preprocessor. Use the frame spinner at
bottom to go to the end of the shot.
Lasso-select the visible trackers, those in and immediately surrounding
the text area.
Now, on the Stabilize tab, change the Translation and Rotation stabilize
modes to Filter. As you do this, SynthEyes records the selected trackers as the
source of the stabilization data. If you did this first, then remembered to select
some particular trackers, or later want to change the trackers used, you can hit
Get Tracks to reload the stabilization tracking data.
Decrease the Cut Freq(Hz) spinner to 1.0 Hz.
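(For intuition: the cutoff frequency separates deliberate camera moves, slower
than the cutoff, from the faster jitter to be removed. The following is only a
rough, hypothetical Python sketch of a one-pole low-pass filter over a
translation track, not SynthEyes's actual filter, just to show the idea:)

    import math

    def low_pass(track, cutoff_hz, frame_rate):
        # Keep motion slower than cutoff_hz; attenuate faster jitter.
        alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / frame_rate)
        smoothed, value = [], track[0]
        for sample in track:
            value += alpha * (sample - value)
            smoothed.append(value)
        return smoothed

    # e.g. smooth_x = low_pass(x_positions, 1.0, 29.97) for a 1 Hz cutoff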
Click Auto-Scale. If you click over to the Adjust tab, you will see that the
required zoom is less than 5%.

Go to the Rez tab, and experiment with the Interpolation if you like; the
default 2-Lanczos generally works well at a reasonable speed.
Hit OK to close the Image Preprocessor.
Switch to the Camera View.
Type J and control-J to turn off tracker display.
Select Normal Speed on the View menu.

For a 1:1 size, shift-click the reset-camera-zoom button.


Hit Play. You now have much smoother camera work, without being overly
robotic.

Use the output tab on the image preprocessor to write the sequence back
out if you wish.
If you want to insert an object into the stabilized shot, you need to update
the trackers and then the camera solution. On the Image Preprocessor's Output
tab, click Apply to Trkers once. Close the image preprocessor, then go to the
Solver panel, make sure the solver is in Refine Tripod mode, and click Go!

Shooting Requirements for 3-D Effects
You've seen how to track a simple demo shot. How about your own
shots? Not every shot is suitable for match-moving. If you cannot look at the
shot and get a rough idea of where the camera went and where the objects are,
SynthEyes won't be able to either. It's helpful to understand what is needed to
get a good match-move, to know what can be done and what can't, and
sometimes to help a project's director or camera-person plan the shots for effects
insertion.
This list suggests what is necessary:
The camera must physically change location: a simple pan, tilt, or zoom
is not enough for 3-D scene reconstruction. If you have a hand-held
camera, and point it in different directions, that will not be enough camera
translation: you need to walk!
Depth of scene: the features cannot all be the same distance from the
camera, nor all very far away.
Distinct trackable features in the shot (reflected highlights from lights do
not count and must be avoided).
The trackable features should not all be in the same plane, for example,
they should not all be on a flat floor or green-screen on the back wall.
If at all possible, shoot progressive, not interlaced footage. Interlaced
footage has only half the vertical resolution, twice as many frames to
worry about, and is like using floppy disks or Windows 98.
If the camera did not move, then either
You must need only the motion of a single object that occupies much of
the screen while moving nontrivially in 3-D (maybe a few objects at film
resolution),
Or, you must make do with a 2-D match-move, which will track the
camera's panning, tilting, and zooming, but cannot report the distance to
any point,
Or, you must shoot some separate still or video imagery where the
camera does move, which can be used to determine the 3-D location of
features tracked in the primary shot.
For this second group of cases, if the camera spins around on a tripod, it
is IMPOSSIBLE, even in theory, to determine how far away anything is. This is
not a bug. SynthEyes's tripod tracking mode will help you insert 3-D objects in
such tripod shots anyway. The axis alignment system will help you place 3-D
objects in the scene correctly. It can also solve pure lock-off shots.
If the camera was on a tripod, but shoots a single moving object, such as a
bus driving by, you may be able to recover the camera pan/tilt plus the 3-D
motion of the bus relative to the camera. This would let you insert a beast clawing
into the top of the bus, for example.


Types of Shots: Gallery of Horrors


We've found that often new customers don't recognize the type of shots
they have, or special problems in them, and therefore fail to employ the
appropriate techniques. So here we'll give you a quick introduction to different
types of shots and various difficulty elements, so you'll know what to look for, and
where to start to do something about it. Note that a given shot can be several
types simultaneously!
Normal shot. The 'ideal' shots handled by SynthEyes feature a camera that
translates smoothly along some path, with many visible features etc.
Mainly, not one of these other types!
Hand-held shot. Typically a hand-held camcorder with much bounce, which will
make tracking more difficult. Use the Track/Hand-held: steady prediction
mode. Also consider stabilization. Counter-intuitively, hand-held shots are
often tripod (nodal) shots, as frequently the shooter is standing still, ie not
translating.
Lock-off shot. The camera does not move, typically it is hard-mounted on a
tripod. We may be doing object tracking, or we may want to know where
the camera is using model-based tracking, or more roughly, using single-
frame alignment.
Tripod shot. The camera is panning, tilting, rolling, maybe zooming... but not
translating. No 3-D information is present! We will tripod-track the
camera motion, but cannot determine the distance to objects in the scene.
To position the camera in 3-D, we might use model-based tracking or
single-frame alignment.
Nodal shot. Another name for tripod shots.
Nearly-Nodal. Avoid! This is a shot where there is a small amount of translation,
enough that a nodal solution isn't appropriate, but not enough translation
for a full 3D solve. Common with amateur hand-held shooters. Possible
rescues require very careful supervised tracking.
Mixed tripod. A shot with both translating and tripod sections: the camera moves
down a dolly track, reaches the end, then pans around for a while, for
example. Use Hold Mode features.
Traveling shot. The camera moves forward continuously for long distances, for
example on the front of a car. If the motion is very straight, estimating field
of view will be difficult (think of the original Star Trek open). Possible
cumulative errors may require external (GPS) survey data to resolve.
Survey shot. The shot is a collection of digital stills, rather than film/video.
Typically used to measure out a set from a wide range of vantage points.
Special survey shot features simplify handling these.
Zoom shot. The camera zoom was changed during the shot. Watch out:
sometimes shots will have hidden zooms, for example, intentionally to
compensate when a dolly runs out of track, or unintentionally, as a result
of focus breathing. Turn on zoom processing on the Summary or Solver
panel.


Green-screen shot. The background is a big green (or blue) backdrop. It must
have trackable features on it, and out in the foreground! Beware situations
where the camera moves in to focus on an actor... leaving no markers in
sight. The green-screen panel can help auto-track these quickly.
Stereo shot. You're given left-eye and right-eye cameras to track. Some people
think stereo should be less than twice as hard, but really it's more like
three times as hard, since you must match each camera to the world, plus
match the two cameras to each other.
Occluded shot. Typically, actors or cars are moving around in front of the
background. The camera track must not include any trackers on moving objects
like this (if they are to be tracked, they will be tracked separately as
moving objects). Delete unwanted trackers or use the roto-masking or
alpha facilities to handle these.
Windy shot. The shot contains grass or trees with branches blowing in the wind,
or shots with moving water. Such features are not reliable and must not be
tracked. Use Zero-weighted trackers to approximately locate them, roto- or
alpha-masking, or in an emergency, many short-lived trackers.
Flickery shot. The shot contains rapid changes in overall light level, typically
from explosions or emergency lights, possibly from candlelight or torches.
Handle using high-pass filtering in the image preprocessor.
Noise Amplifier. All the features being tracked are much further away than the
object being inserted. No matter what, physics says the insert will be
unstable: there's no information to say what happens up close. Filtering
will be required. (An advanced technique is to put a tracker on an up-close
inserted object, retrack it, manually adjust that track, then refine the overall
solve.)
Fisheye shot. The shot features visible lens distortion (of any kind). These
require substantial additional workflow. If at all possible, shoot and
calibrate using lens grid autocalibration.
Perfect Push-In. The camera moves forward in a straight line exactly along the
optic axis, the direction it is looking. Most likely to occur on vehicle or
dolly shots. There is no information available in these shots to estimate
field of view. You will need to supply a value manually, from an on-set
measurement or a best guess.
Planar Shot. All the trackable features lie on, or nearly on, a single plane in 3-D,
typically a flat floor or back wall (especially for green-screen shots). That's
a problem, because there is no 3-D in that tracker data, basically the
trackers are all redundant: this is actually the underlying mathematics
used for planar four-corner pinning. A supplied field of view is helpful. Be
sure to track any off-plane features, ie using supervised tracking.
Rolling-shutter shot. A shot from a CMOS camera that contains enough rolling
shutter distortion that it must be handled carefully. Rolling shutter is
present in all CMOS shots and must frequently be addressed. In very
noticeable forms, you'll see vertical lines made diagonal, or objects
squished or stretched vertically.


Camera tracking. We are interested in the 3-D path and field of view of the
camera with respect to the background, as opposed to that of any
independently-moving objects in the scene.
Object tracking shot. We are interested in the 3-D path of one or more moving
objects in the 3-D environment, with the camera either locked or also
moving. All the trackers on the moving object must be rigidly positioned
with respect to one another: the object cannot bend or deflect like a face.
Camera and object tracking shot. The shot contains one or more moving
objects, in addition to a camera that is also simultaneously pan/tilt/rolling
and translating. When the camera also translates, there are additional
relative scaling issues between the camera and objects.
Model-based tracking. Camera or object tracking accompanied by known pre-
existing 3D mesh models of the set or moving object. Trackers will be
constrained to the corresponding locations on the mesh.
Motion capture shot. Two or more calibrated cameras used to produce
individual 3-D trajectories for each pair/triplet/etc of trackers, which all
move independently of one another. Typically used to drive animated faces.
Note that though we've given you some jumping-off links to look at,
working through the manual is going to be necessary. You need to learn the
fundamentals that go behind all the shot types; they are not repeated
everywhere.


Basic Operation
Before describing the match-moving process in more detail, here is an
overview of the elements of the user interface, beginning with an annotated
image. Details on each element can be found in the reference sections.

Coordinate Systems
SynthEyes can operate in any of several different coordinate system
alignments, such as Z-up, Y-up, or Y-up-left-handed (Lightwave). The coordinate
axis setting is controlled from Edit/Scene Settings; the default setting is controlled
from Edit/Edit Preferences.
The viewports show the directions of each coordinate axis: X in red, Y in
green, Z in blue. One axis is out of the plane of the screen, and is labeled
as t (towards) or a (away). For example, in the Top view in Z-up mode, the
Z axis is labeled Zt.
SynthEyes automatically adjusts the scene and user interface when you
change the coordinate system setting. If a point is at X/Y/Z = 0,0,10 in Z-up
mode, then if you change to Y-up mode, the point will be at 0,10,0. Effectively,
SynthEyes preserves the view from each direction: Top, Front, Left, etc, so that
the view from each direction never changes as you change the coordinate
system setting. The axes will shift, and so will the coordinates of the points and cameras.


Consequently, you can change the scene coordinate axis setting whenever
you like, and some exporters do it temporarily to match the target application.
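As a rough illustration of the remapping involved (a hypothetical Python
sketch, not SynthEyes code; the exact sign convention depends on the
handedness of the coordinate system), a Z-up point can be remapped to Y-up
like this:

    def z_up_to_y_up(point):
        # Old "up" (Z) becomes the new Y; the old Y becomes the new -Z,
        # so the Top/Front/Left views are preserved.
        x, y, z = point
        return (x, z, -y)

    print(z_up_to_y_up((0.0, 0.0, 10.0)))   # (0.0, 10.0, -0.0), i.e. 0,10,0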

Rotation Angles
SynthEyes uses pan, tilt, and roll angles to describe the orientation of
cameras and objects. With an upright camera looking at the Front view, the pan,
tilt, and roll angles are 0, 0, 0. Because 10 degrees and 370 degrees refer to the same
orientation, rotation angles have some subtleties.
You can hand-animate the rotation tracks of cameras and moving objects
using unlimited rotation angles, which prevents crazy movements, such as a
358-degree motion from +179 to -179 instead of the intended 2 degrees.
The 3D solving process and much of the math associated with tracking
produce principal values, ie -180 to +180 (generally the math doesn't use rotation
angles at all!). At the completion of solving, the sequence of rotations is
processed to create better-behaved unlimited rotation angles, which you can edit.
But note that it can be possible to make changes that result in a different set of
angles. For example, if you take a tracked shot with a big pan and extend the shot
earlier into a prior pan, the new initial pan will start at 0, which can leave the
original section's values around 360 instead. These situations should be rare.
Applying various mathematical operations to the track can also result in
different unlimited values, especially reorienting the entire path using Whole
mode. (This also necessarily forces frames that have keys on individual axes to
be keyed on all axes instead.)
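As a rough sketch of the kind of post-processing involved (hypothetical Python,
not SynthEyes's exact algorithm), converting a sequence of principal-value
angles into a continuous, unlimited track might look like this:

    def unwrap_degrees(angles):
        # Choose each angle within 180 degrees of the previous one, so a pan
        # through +/-180 does not produce a spurious 358-degree jump.
        out = [angles[0]]
        for a in angles[1:]:
            while a - out[-1] > 180.0:
                a -= 360.0
            while a - out[-1] < -180.0:
                a += 360.0
            out.append(a)
        return out

    print(unwrap_degrees([170.0, 179.0, -179.0, -170.0]))
    # [170.0, 179.0, 181.0, 190.0] -- a smooth pan, not a wild swing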

Menus
When you see something like Shot/Edit Shot in this manual, it is referring
to the Edit Shot menu item within the Shot section of the main menu.
SynthEyes also has right-click menus that appear when you click the right
mouse button within a viewport. The menu that appears will depend on the
viewport you click in.
The menus also show the keyboard equivalent of each menu item, if one
is defined.

Key Mapping
SynthEyes offers keyboard accelerators for menu entries, various control
buttons, and Sizzle scripts, as described in the keyboard manager reference
section.
Keyboard accelerators for menu operations are accessed with control-
whatever on Windows and Linux, and command-whatever on Mac OS X. So
you'll see something like control/command-A meaning control-A on
Windows/Linux, command-A on Mac.


Similarly, anything referring to the ALT key on Windows/Linux refers again
to the command key on the Mac. ALT-left means hold down the ALT key and
click the left mouse button; on a Mac use command with the left mouse button.
Note that the Mac's Opt key is not used in SynthEyes; command is used instead.
You can change the keyboard accelerators from the keyboard manager,
initiated with Edit/Edit Keyboard Map or by changing the keybd14.ini file in your
File/User Data Folder.

Important! SynthEyes keyboard shortcuts are not remapped to
account for non-US keyboard layouts. For example, on a German
keyboard, where Y and Z are swapped versus US keyboards, the
Undo command control-Z will appear as control-Y on the German
keyboard. While it would certainly be less confusing if control-Z stayed
as control-Z, the keyboard layout is largely chosen based on finger
positions, especially ASDF. Given that, leaving the shortcuts in the
same physical location makes the most sense!

Note that the tracker-related commands will work only from within the
camera view, or when one is visible, so that you do not inadvertently corrupt a
tracker.
On Windows, you can also use Windows's ALT-whatever acceleration to
access the menu bar, such as ALT-F-X to exit.

Main Tool Bar


The tool bar runs across the top of the application, including file operation
icons, buttons to switch among the control panels, and several viewport controls.
Main Selector
Trackers, cameras, moving objects, mesh objects, and lights can all be
selected by clicking on them in the viewports, or selected by name on the main
selector drop-down at left of the main toolbar (displaying "Tracker36" in the
screen capture). Double-clicking the name will pop up a dialog to change the
name.
While any number of trackers on the current active tracker host (see next
section) can be selected at a time, or several meshes can be selected at once,
only a single other object can be selected at a given time. Some parts of
SynthEyes may treat multiple selected meshes as if none are selected.
In the perspective window, a single mesh object can be designated as the
Edit Mesh, where its facets and vertices are exposed and subject to editing.
Note that a moving object can be active, but not selected, and vice versa.
Similarly, a mesh object can be selected but not the edit mesh, and vice versa.


Active Camera/Object
At any point in time, one camera or moving object is referred to as the
Active Tracker Host (or active object for short). The active tracker host is visible
on the dropdown of that name on the main toolbar, and is the checked one listed
on the Shot menu.
The active tracker host (a moving object or camera) will have its shot
shown in the Camera view, and its trackers are visible, selectable, and editable.
The active object, or all objects on its shot, will be exported, depending on the
exporter.
The active tracker host camera or object is separate from the usual idea
of selection. Just because an object is the active host does not mean that it or its
trackers are selected (for example, to be deleted by the Delete key).
Undo/Redo
SynthEyes includes full undo and redo support via the Undo/Redo buttons
and via Edit/Undo and Edit/Redo menu items. Right-clicking the undo or redo
buttons brings up a menu showing available undo and redo operations. If you
click on the third item down, for example, the three most-recent operations will be
undone.

NOTE: If you find yourself having to think too much about left or right
clicking, you can make the undo and redo buttons always bring up the
list of operations, by turning on the Show undo menu, don't undo yet
preference in the user interface section. If you do that, you can still get
instant action using the keyboard shortcut (read on).

Control/command-Z immediately undoes the most recent operation;
Control-Y or Command-shift-Z immediately redoes the most recent operation.
You can always see the name of the most recent operation from the tooltip of the
undo or redo button.
Past the first ten (preference setting) elements of the undo or redo menus,
successive items with the same name are condensed to a single item, for
example "Select tracker (*3)" represents 3 consecutive tracker selections. As
those items reach the first ten items of the undo or redo menu, each item will be
shown individually.
There are a number of controls for how many undo operations are
possible, based on the number of operations, but also on the amount of memory
required to store the operations (some operations such as the Image
Preprocessor's "Apply to trackers" can require substantial amounts of memory on
long complex shots). Preferences in the Undo/Redo area control this.
Three buttons at right of the toolbar perform Customer Care Center
functions such as messages and upgrades.


Control Panels
At any time, one control panel, such as the Summary panel, Tracker
panel, or Solver panel, is displayed in the control panel area along the left side of
the user interface. Each control panel features a variety of different buttons,
checkboxes, etc regarding some particular area of functionality.
In the graphic above, the Trackers room has the Tracker control panel.
Normally it brings up a single camera view; we changed it for the graphic above.
You can use any viewport configuration with a given room, just by selecting that
view.
Use the room bar across the top of the user interface to select which
control panel is displayed. SynthEyes uses rooms and control panels as a way to
organize all the many individual controls. Each control panel corresponds to a
particular task, and while that control panel is open, the mouse actions in the
viewports, and the keyboard accelerator keys, adapt to help accomplish a
particular task. The rooms are arranged so that you can often start at the left and
work to the right.
For some rooms, there is a secondary control panel under the first for
convenience. Don't be alarmed if the secondary control panel appears to be cut
off on small monitors! You can access its own room directly.

Viewport Layouts
A layout consists of one or more viewports (camera view, perspective
view, top view, etc), shown simultaneously in a particular arrangement. Select a
layout with the drop-down list on the main toolbar.
A tab at top right of each pane switches to a single full-size version of
that pane, or back again if it is already full-size.
You can adjust the relative sizing of each pane by dragging the gutters
between panes.
You can change any pane of a layout to a different type by clicking the tab
just above the upper left corner of the pane, creating a custom layout. You
can keep on changing panes and that will continue to affect the same custom
layout. To create a new custom layout, switch to a different existing non-custom
layout and begin customizing it.
To name your custom layouts and create different pane arrangements,
use the viewport layout manager (on the Window menu). Your layouts are stored
in the SynthEyes file; you can also set up your own default configurations
(preferences) using the layout manager.
Some viewports have several flavors, for example Camera, LCamera,
Perspective, Perspective B, SimulTrack, RSimulTrack, Top, and Right. These
flavors are different settings for the same underlying view type, ie LCamera and
RCamera are both camera views, but one initializes to look at the left eye of a
stereo shot, vs right for the other. Each flavor preserves various settings when
the overall layout changes, so that when you switch from a Quad view to a Top
view, the Top view will continue to look the same. For this reason, if you have a
layout with two perspective views, you should use Perspective and Perspective
B. If you used two Perspective B's, then changed layout and then back again,
both Perspective B's would have the same configuration: one of the two
originals at random. For good layouts for stereo shots, use an appropriate
combination of LCamera, RCamera, LSimulTrack, and RSimulTrack rather than
the plain Camera or SimulTrack.

Room-Selection Tab Control


A room is a combination of one or two particular control panels and a
viewport configuration. There's a pre-defined room for each control panel. The
room associates a control panel with a particular viewport configuration. As you
switch from room to room, you'll be switched to your desired viewport
configuration as well. Hopefully, you get the views you want with minimal clicking
about.
You have quite a lot of control over the details of this in SynthEyes:
If you click the text of a room name, both the panel and view configuration will
change, ...but...
If you click the icon of a room, or hold down SHIFT, just the panel will
change. (Icon-specific behavior enabled by the No layout change upon icon
click preference.)
If you change the view configuration for a room, SynthEyes remembers that
change so that you will get the new configuration if you go to a different room
then come back, ....but...
If you hold down the SHIFT key when you select the new view configuration,
SynthEyes will not remember the change.
You can use the room manager (right-click a room and select Edit Room) to
change, add, or delete rooms.
You can use the room manager to add completely new rooms, for example
for stereo or to have different pre-existing view configurations for the same
panel.
If you don't want SynthEyes to ever remember your manual viewport
changes, use the No room change upon layout change preference. Use the
room manager to make changes.
If you want a very simple approach: with the No layout change upon room
change preference, rooms select only the panel, and never change the
viewport.
The rooms are stored in each .sni file, and you can save a configuration
as your preferences via right-click/Save All as Preferences.
Via a preference, you can control whether the room bar shows text only,
icons only, or the default: both text and icon, as shown.


Floating Panels
Programs such as Photoshop and Illustrator have many different palettes
that can appear and disappear individually. If you are more familiar with this
style, or it is more convenient for a particular task, you can select the
Window/Many Floating Panels option, and have any number of panels open at
once. Keep in mind that only one panel is still primarily in charge, and there may
be unwanted interactions between panels in some combinations. The primary
panel is still marked in hot yellow, while the other panels are a cooler blue.
You can also keep the current panel floating, instead of locked into the
base window, with the Window/Float One Panel menu item. Floating panels are
preserved and restored across runs and files.

Buttons, Selectors, and Editable Selectors

SynthEyes uses similar graphics for buttons, which you push to activate
something; selectors, where you choose one of several items from a list; and
editable selectors, where you can not only select something, but change its
name. Note that selectors are sometimes called comboboxes or dropdowns.
As you can see, selectors and editable selectors have the triangle on the
right, denoting the drop-down menu that allows you to select the item. Depending
on a preference, the list might appear below the selector (Windows style) or be
placed so that the currently selected item is overtop of the selector (OS X style).
On Linux, the default preference selects the OS X style.
Editable selectors may be identified by the vertical line at their right
interior, denoting that the selector is partitioned into two parts. To change the
name of something, double-click the text itself, ie "Camera01."
When a list is dropped down, the highlighted item follows the mouse position
without changing the selection, as long as the mouse is inside the dropdown list.
To use keyboard keys (up arrow, down arrow, home, end) or the mouse scroll
wheel, move the cursor outside the dropped-down list, in order to prevent a
conflict between the mouse position and key or scroll operation.
When a button is connected to an animated track, such as a tracker
enable, right-clicking the button will delete a key, shift-click will truncate keys
from the current frame on, and control-click will delete all keys.

Edit Fields and Keyboard Focus


Edit fields are where you enter tracker names, mesh names, camera
names, numeric values, etc. Edit fields hold on to the keyboard focus, so that
additional keystrokes are added to the text field, even if the mouse wanders into
an adjacent viewport that would normally interpret the key as an accelerator of
some kind. Edit fields hold onto focus in this manner so you do not have to pay
as much attention to the mouse, especially on tablets.
When you are done entering text and want to resume normal mouse
operations, hit the enter key. The keyboard focus can now be acquired by
whatever window the mouse is in. You can also resume normal operations by
clicking on something else.

Spinners

Spinners are the numeric fields with arrows at each end.
You can drag right/upwards and left/downwards from within the spinner to rapidly
adjust the value, or click the respective arrow < or > to change a little at a time.
If you click on the number displayed in a spinner, you can type in a new
value. As you enter a new value for a spinner, the keystrokes do not take effect
immediately, as that tends to make things fly around inconveniently. When you
have finished entering a value, hit the enter key. This will enter the value into
SynthEyes.
Spinners show keyed frames with a red underline. You can remove a key
or reset a spinner to a default or initial value by right-clicking it. If you shift-right-
click a key, all following keys are truncated. If you control-right-click a key, all
keys are removed from the track.

Viewports
The main display area can show a single viewport, such as a Top or
Camera View, or several independent viewports simultaneously as part of a
layout, such as Quad. Viewports grab keyboard and mouse focus immediately
when the mouse enters the field (except if you are entering text, see below), so
you can move the mouse into a view and start clicking or mouse-wheel-zooming
or using accelerator keys immediately.
Many viewports use the middle-mouse button for panning, and the scroll
wheel or right-mouse-drag to zoom. A 3-button+scroll mouse is highly
recommended for effective SynthEyes use, as trackpads are not designed for
this! For short-term use without a middle-mouse button, ie with a trackpad, see
the section on Middle Mouse Button Problems. Note that the ESCape key can be
used to terminate mouse operations in lieu of right-click.

Viewport Rendering Engines


The Camera View, 3-D View, and Perspective View all display 3-D
imagery. The Perspective View is always displayed using the OpenGL rendering
system. (The camera view is trickier to draw than the perspective view, because
the camera view dynamically distorts the 3D geometry.) The camera and 3-D
views can be rendered using OpenGL, the operating system's built-in graphics
(GDI on Windows, Cocoa on Mac), or a software-based custom SynthEyes
rendering engine. Each method has strengths and limitations:


OpenGL draws anti-aliased (pretty) lines, handles opacity and texturing,
and is fairly fast; but it is not that fast at displaying shot imagery. OpenGL is
sensitive to your particular graphics cards; low-end integrated graphics may not
perform that well. OpenGL is used by default on Mac OS X and Linux.
GDI on Windows is very fast at displaying shot imagery, but bogs down on
large meshes, isn't very pretty, and doesn't handle various amenities. For pure
RAM playback, it is very good.
The built-in graphics on Mac OS X and Linux are capable, but not
particularly speedy. Generally the OpenGL renderer is used on those platforms.
The SynthEyes renderer accelerates (only) the 3-D mesh drawing that the
built-in GDI graphics methods handle poorly. By using vector optimization and
parallel processing, it can be very quick, faster than OpenGL in some cases,
though it does not handle antialiasing, texturing, or opacity. You might also see
some shimmering in dense scenes, as "ties" for pixel color are resolved
depending on the particular order that your processor cores do the work. For
handling large meshes, especially on Windows, using GDI plus the software
assist may be a more productive choice than OpenGL. It does require a
substantial amount of memory to operate, and so may not be a good choice on
32-bit-only machines.

Important: there are often large performance differences between
different graphics modes, such as whether back faces are shown, or
wireframe vs solid renders (solid is faster!). Options such as
shadowing can greatly increase render time with large meshes. Be
sure to consider what you need.

You can change methods depending on what you are doing at any
particular time using the settings on the View menu and the corresponding
Preferences (for new scenes).

Play Bar
The play bar appears at the bottom of the SynthEyes window for
compatibility with other apps, though for efficiency it can be moved to the top by
a setting on the preferences panel. A duplicate appears at the top of the Tracker
control panel as well. The playbar features play, stop, frame forward, etc.
controls, as well as the frame number display. The play button changes to show
the current playback direction.
Frames are numbered from 0 unless you adjust the preferences.

Tip: SynthEyes normally plays back "as fast as possible" one frame at
a time, which is most productive for tracking. If you want to play back
at the nominal frame rate, adjust the settings on the View Menu
(temporarily).


Time Bar
The time bar shows the frame numbers, a current frame marker (blue
vertical line), key markers (black triangles at bottom of the timebar), enable
status (solid horizontal blue line at bottom of timebar), and cache status (pink if
the frame is not currently in-cache) or whether or not there are enough trackers.
Not shown are playback range markers, which are small green or red triangles at
the top of the timebar on the beginning and end of the range. Also not shown are
darker portions of the timebar for frames past the beginning or end of the shot.
The timebar background is selected from the View/Timebar background
submenu, to be either the Show cache status or Show tracker count status. A
preference controls the initial setting.

Error Curve Mini-View


The little graphic at bottom right, and color-coded number next to it, are a
quick reference that show error information for the selected tracker(s) or entire
scene: how much the predicted tracker locations differ from the actual locations.
(It shows a tracking quality number if the tracker hasn't been solved yet). You can
click in this view to rapidly move to the corresponding frame, or right-double-click
to reset the playback range to be the full shot range.
For more information see the Error Curve Mini-View guide and the Error
Curve reference. (This display is hidden when it has nothing to show.)

Floating Views
Floating viewports can be created for the camera, graph editor, hierarchy,
perspective, and/or SimulTrack views with Window/Floating whatever menu
items. You can move floating windows to a second or third monitor, and they will
be restored when you close and reopen SynthEyes. While by default for
convenience only a single floating instance of each type can be created (and
toggled in and out of existence by the menu item), you can create multiple
floating windows of each of these types by changing the Only one floating ...
preferences in the User Interface section.

Tooltips
Tooltips are helpful little boxes of text that pop up when you put the mouse
over an item for a little while. There are tooltips for the controls, to help explain
their function, and tooltips in the viewports to identify tracker and object names. If
you aren't sure what a control does, check for a tooltip!
The tooltip of a tracker has a background color that shows whether it is an
automatically-generated tracker (lead gray), or supervised tracker (gold).

Status Line
Some mouse operations display current position information on the status
line at the bottom of the overall SynthEyes window, depending on what window
the mouse is in, and whether it is dragging. For example, zooming in the camera
view shows a relative zoom percentage, while zooming in a 3-D viewport shows
the viewport's width and height in 3-D units.

Color Scheme
You can change virtually all of the colors in the user interface individually,
if you like. For example, you can change the default tracker color from green to
blue, if you are constantly handling green-screen shots. See Keeping Track of
the Trackers for more information.

Click-on/Click-off Mode
Tracking can involve substantial sustained effort by your hands and wrists,
so proper ergonomics are important to your workstation setup, and you should
take regular breaks.
As another potential aid, SynthEyes offers an experimental click-on/click-
off mode, which replaces the usual dragging of items around with a click-
on/move/click-off approach. In this mode, you do not have to hold the mouse
buttons down so much, especially as you move, so there should be less strain
(though we can not offer a medical opinion on this, use at your own risk and
discretion).

Note: we're not sure how often, if at all, click-on/click-off is used, and
don't test it regularly. If you are using it and have trouble, contact us.

You can set the click-on/click-off mode as a preference, and can switch it
on and off whenever convenient from the Window menu.
Click-on/click-off mode affects only the camera view, tracker mini-view, 3-
D viewports, perspective window, and spinners, and affects only the left and
middle mouse buttons, never the right. This captures the common needs, without
requiring an excess of clicking in other scenarios.

Scripts
SynthEyes has a scripting language, Sizzle, and uses Sizzle scripts to
implement exporters, some importers, and tool functions. While many scripts are
supplied with SynthEyes, you can change them as you see fit, or write new ones
to interface to your studio workflow.
You can find the importers on the File/Importers menu, exporters on the
File/Exporters menu, and tool scripts on the main Script menu.
The File menu presents the last few exporters for quick access, and an
Export Again option (with keyboard accelerator) for fastest of all. Similarly the
Script menu presents the last few scripts, with a keyboard accelerator for the last
one. You can adjust the number of most-recent exports and scripts kept from the
Save/Export section of the preferences.


On your machine, scripts are stored in two places: a central folder for all
SynthEyes users, and a personal folder for your own. Two menu items at the top
of the Script menu will quickly open either folder.
SynthEyes mirrors the folder structure to produce a matching sub-menu
structure. You can create your own My Scripts folder in your personal area and
place all your own scripts in that area, to be able to quickly find your scripts and
distinguish them from the standard system scripts.
Similarly, a studio might have an Our Shared Scripts folder in the shared
SynthEyes scripts folder. The entire scripts folder can be moved to a shared
drive using the Scripts folder preference.
Scripts in the user's area are listed with an asterisk (*) as a prefix. If a
user's script has the same name as a system script, thus replacing it, the user's
script will be used, and it is prefixed with three asterisks (***) to note this situation.

Script Bars (for Menus too!)


It can be convenient to be able to quickly run scripts or access menu
commands as you work, without having to search through the menus or
remember a keyboard command. You can create script bars for this purpose,
which are small floating windows with a column of buttons, one for each script or
menu item you would like to quickly access. Use the Script Bar Manager to
create and modify the script bars, and the Script bar submenu of the Scripts
menu to open them.
Script bars are automatically re-opened and re-positioned each time you
start SynthEyes, if they were open when SynthEyes last closed.

Important: Script bars are stored and deployed as small text files.
When you create a new script bar, you must store it in either one of the
system-wide script folders or your user script folder. To see the proper
locations, use the Script/User script folder or Script/System script
folder menu items to open the File Explorer or Finder to the
corresponding folder.

The Perspective view has its own kind of floating toolbars as well, though
they are limited to existing built-in operations and there is no manager for them
(they are controlled by the user-changeable XML file pertool14.xml).

Notes
You can place rectangular notes on your SynthEyes camera and
perspective views, to mark places that still need tracking, need improvement, or
whatever. They are particularly useful for tracking supervisors to communicate
with individual trackers.
To create a note, click the Edit/Add Note menu item. The Notes Editor will
appear with information on the new note:


one or more lines of text


a checkbox for whether the note is shown or hidden
a background color swatch for the note. The background color for
new notes is the default color, with the actual color set from the
preferences.
the camera (view) that the note is shown on. Can also be set to All.
the beginning and ending frame numbers between which the note is shown.
By default, they begin at the current frame and last for a duration set by a
preference. Set the preference to zero to have them default to the
entire shot.
Whether the note is pinned to the shot image and moves as it pans
and zooms, or is stationary at a specific location in the camera or
perspective view, regardless of panning and zooming.
You can create new notes or delete the current note from the notes editor
as well. You can create any number of notes on any camera, spread through the
shot. Use the go back and go forward buttons to step through the notes.
New notes are located at top left of the (active) camera view. You can
drag them to any desired location, positioning specifically the marked corner.
(Which corner is marked will vary dynamically as you zoom, pan, etc, such that
the marked corner stays put and the text stays visible.)
To edit a note, double-click it.
To look for notes, click Window/Notes editor.
To show/hide all notes, click View/Show notes.
Notes are not "selectable".

Preserving Placement and Settings


SynthEyes will normally reopen files with the same scene settings and
window placements as when the file was originally saved. Placement data
includes which floating windows are open, and their position whether open or not.
SynthEyes files may have been saved from other users and machines, which
requires some consideration.

What's a Setting? Normally when you change a tracker position or
solver setting, an undo record is created, so that you can undo the
operation. Other information, typically that affects only the display of
the scene, such as the current frame number, tracker sorting mode, or
scaling of curves in the graph editor, does not create undo records by
design. The items that do not create undo records are the settings
we're talking about here. Many are controlled by menu settings.

Different machines and users may frequently have different monitor
configurations (even your own machine). SynthEyes will re-map windows from
the previously saved configuration onto your current configuration so that all are
displayed. Since dialogs don't scale, you may need to adjust the placements
further after opening the file.
Preferences are not stored in SynthEyes files, so for example your user-
interface color settings are not affected or overridden by files from other users.
Settings which are initialized from preferences (for example Whole Affects
Meshes), and thereafter subject to change in SynthEyes as you work on the file,
are saved and will be reloaded, potentially overriding what would be your normal
preference.

Note: Because settings data don't constitute "changes" (do not create
undo records), changing settings or window placements will not trigger
the "Scene changed, save file?" dialog. So if you save a file, change
the window placements, then exit, the changed window placements will
be lost without warning. Click Save if you want to preserve them. (For
the same reason, desirably, moving windows won't trigger an auto-
save or -increment.)

There is a potential hazard from the automatic restoration of settings: a file
that you open may contain settings that you have forgotten or even are unaware
of. For this reason, there are settings in the File Open area of the preferences
that give you control over whether or not the settings are reloaded (see below). If
you have a file that is behaving unexpectedly and that you suspect may have
settings you don't understand, you can save it, set the preferences to not load
settings, then reopen it. Alternatively you can override the current scene with
your own preferences.
You can also save the current set of window placements and settings as
an additional type of preferences: a favorite set of window configurations and
settings. This can be dangerous, however, as it might obscure the effect of some
preferences. If you've stored a scene's settings as preferences and want to
change them, you should start SynthEyes, make the change, and then re-save
the settings to preferences (don't try to do this from inside some scene you're
working on).
Settings Preferences
Placements and Settings. Selector. This selector controls whether or not the
placement and settings data is used when a SynthEyes file is opened,
with the options listed next.
Never load. (Option.) The information is never used.
From preferences only. (Option.) When a file is opened, the placements and
settings are used that you have set, not those from the file.
Only from my machine. (Option.) Placements and settings from the file are
used only if they were generated from this same machine; otherwise
the placements and settings from the preferences are used, if any.
Always load. (Option.) The placements and settings in the file are always
used.


Load only placements. Checkbox. When checked, only the window placements
will be loaded, not the detailed settings.
Set Scene as Preferences. Button. The placements and settings information is
stored as part of your preferences, and will be used for new SynthEyes
scenes.
Placements as Preferences. Button. Only the placements information, not the
settings, will be stored in your preferences for application to new scenes.
Remove these preferences. Button. Any stored settings and placements are
removed from your preferences.
Set Scene TO Preferences. Button. Overrides the placements and settings of
the current scene by replacing them with those that you've stored as
preferences, if any. This can be used to reset a rogue file, for example.

Auto-Save
SynthEyes can auto-save the .sni file every few minutes if desired, under
control of the auto-save preferences (in the Save/Export section). You will be
asked whether you want to use auto-save the first time a save is necessary. (You
can have the filename incremented automatically as well.)

IMPORTANT: to minimize the chances of data loss, please read this
section and the following section on file name versioning, then
configure the preferences to correspond to your desired working style!

If you keep auto-save off, then the file will be saved whenever you
File/Save.
If auto-save is turned on, then the Minutes per auto-save setting applies.
Once the specified time has been reached, SynthEyes saves the file except
under the following conditions:
there has been no change in the scene since the last save;
a modal dialog is displayed (the main window is disabled);
an operation is currently in progress, such as dragging a spinner or
dragging a tracker around in a viewport;
the mouse is captured, even if it does not affect the scene file, for
example, dragging the current time marker in the time bar;
the shot is being played back (to avoid disturbing the frame rate);
if a text field is being edited, such as a spinner's text value.
If the save must be deferred, it will be retried every six seconds. If you are
very busy or leave SynthEyes playing continuously, auto-save will be blocked.
Status messages are displayed in the status bar at the beginning and completion
of an auto-save.
The "If no filename" preference determines the action taken if the file has
not been previously saved by the time the auto-save period is reached, with the
choices: Don't save, Ask, or Save as untitled. If Don't save is selected, no auto-
save occurs until you define a file name, putting your work at risk in the event of
any problem. You can select Save as untitled, in which case the file is saved as
untitled.sni in your File/User Data Folder. (The prior version of that file becomes
untitled.bac.)
If you select Ask, when the time to auto-save comes without a filename,
the save-file selection dialog will pop up for you to specify a file name. That is
only somewhat optional: if you cancel the file-selection dialog, you will be re-
prompted for a file name each time the auto-save interval expires, until you
successfully enter a file name.
Before the .sni file is auto-saved, the prior version of the .sni file will be
renamed to be a ".sni.bac" file. SynthEyes produces .sni.bac files only for auto-
saves, not for regular File/Save. In addition to saving an additional older backed-
up version, it serves as a backup in case the auto-save itself causes a crash (for
example, if it runs out of memory).

Tip: Auto-tracked files can be rather large, 10s or 100s of MBytes.
They can take a while to save, especially if the .sni file compression
preference is also turned on. To reduce the save time, be sure to use
Tracker Cleanup or Clear All Blips to reduce the blip storage (which
takes up the bulk of the space) once you no longer need to peel
additional auto-trackers or use the Add More Trackers tool. See also
the Mesh De-Duplication features. If you frequently have large auto-
tracked files and cannot clear blips and the auto-save is taking too
long, you should probably keep auto-save off.

Warning: some network storage systems are buggy, and do not
correctly process the file rename (from foo.sni to foo.sni.bac) that
immediately precedes an auto-save. They block the save, causing it to
fail with an error message. If you encounter file-save errors during
auto-save, turn off auto-save.

When auto-save is on and you have saved at least once, SynthEyes will
always immediately save (over-write) changes to the previous file when you do a
File/New, Open, or Close, rather than asking permission, since you have already
given permission to save it by specifying auto-save. This makes for fast and
simpler operation.
Using auto-save will require you to be a little more careful to make sure
you do not inadvertently over-write files you want to preserve, described as
follows.
When auto-save is off, you are prompted if you want to save the current
.sni file when you do a File/New, Open, or Close. You can save the file or
discard it at that time if you do not want to save changes.

Warning: there is no "revert" for auto-save (revert to exactly what?).
We recommend taking advantage of file versioning.


If you have auto-save on, however, your experimental changes in a
recently-opened file will automatically over-write the original file, as soon as the
auto-save timeout expires or you File/New, Open, or Close. Accordingly, be
careful to immediately do a File/Save As or File/Save Next Version to a new
"scratch" file if you wish to experiment on an existing file without overwriting it.

Tip: There are several variants of File/Save:
File/Save saves to the existing file name, except if filename auto-increment
is on, in which case the filename is incremented and the file saved at the
new location (same as File/Save Next Version).
File/Save As prompts for and saves to a new filename, and that
filename becomes the scene's file name.
File/Save a Copy prompts for a new file name, saves the scene
there, but then discards that file name; future saves will use the
original file name.
File/Save Next Version increments the current file name, saves the
scene at the new file location, and continues on using that new
name.
If you have a laptop with solid-state-disk (SSD) and are concerned about
its write-cycle lifetime, you might want to keep auto-save off.
If you have a particular fondness for your file-written times, you should
keep auto-save off: some combinations of Undo and Redo may result in saves
that are not strictly-speaking necessary. For most people, that's no problem,
since an extra save or two is fine.

Filename Versioning and Limiting


Whether you are saving files only on command, or using auto-save, you
can also turn on the Increment save file name preference, which will cause the
scene's file name to be incremented before each save, so you'll get scene1.sni,
scene2.sni, etc. (This is the same as always using File/Save Next Version.)
You can also turn on the Increment upon open preference, which
immediately increments the sni file name when you open a file, so that any
changes will affect the incremented file, not the original. This is useful for manual
saving, and especially when using auto-save to prevent overwrites.
When you do this, you'll have a whole series of files, one for each manual
or auto-save. That's useful if you need to go back to an earlier version.

Note: When you go back to an earlier file, then resume auto-increment
saving, SynthEyes will modify the filename to create a new series so
that the newly saved files don't overwrite the series of previously-saved
files. For example, it will use myfile15_a1 if myfile16 already exists
(and myfile15_b1 if myfile15_a1 already exists). The file after that will
be myfile15_a2.


But, especially if you have auto-increment and auto-save both on, and
especially with auto-tracked scenes, you can use a LOT of disk space rapidly!
To reduce that, you can
have the filename increment only every several minutes, and/or
have SynthEyes keep only the last few versions.
To have the filename increment (produce the next version) only every few
minutes, set the Minutes/auto-increment preference. At its default value of zero,
a new version is created every save. If you change the value to 10, a new
version will be produced every 10 minutes.
To automatically limit the number of saved file versions, turn on the "Limit
number of file versions" preference and set the # of file versions preference to
the desired number of files. With version limiting turned on, each time you save
with auto-increment, auto-save with increment, or save next version, SynthEyes
will delete any file the given number of versions back from the new version. (If
you've gone back and reopened a file, creating a new series, this limit affects
only an individual series.)
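As a rough illustration of the basic name incrementing described above
(hypothetical Python, not SynthEyes code, and ignoring the _a/_b series
handling):

    import os, re

    def next_version(path):
        # scene1.sni -> scene2.sni; scene.sni -> scene1.sni
        base, ext = os.path.splitext(path)
        match = re.search(r'(\d+)$', base)
        if match:
            base = base[:match.start()] + str(int(match.group(1)) + 1)
        else:
            base += "1"
        return base + ext

    print(next_version("scene1.sni"))   # scene2.sni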
Here's a useful collection of auto-save settings:
Enable auto-save? Yes
Minutes per auto-save: 1
If no filename? Save as untitled
Increment upon open: checked
Increment save file name: checked
Minutes/auto-increment: 10
Limit number of file versions? checked
# of file versions: 12
These settings will auto-save every minute, produce a new version every
ten minutes, and keep at least two hours' worth of prior versions (longer if there
was no activity for some periods). You can modify as desired.
For a more traditional manual-save approach, you might do this:
Enable auto-save? No
Minutes per auto-save: 1
If no filename? Ask
Increment upon open: checked
Increment save file name: unchecked
Minutes/auto-increment: 10
Limit number of file versions? checked
# of file versions: 10
With these settings, you'd do your own File/Save and then File/Save Next
Version as desired. SynthEyes will keep the last 10 versions.


Mesh De-Duplication
When you are using large meshes, typically reference models or lidar
data, and have many versions of the SNI file, a lot of disk space can be required.

Tip: The amount of space required for a given mesh can be shown by
selecting it, then running the Mesh Information tool script.

To reduce the total SNI file sizes, you can use the mesh de-duplication
feature, which stores mesh data separately from the SNI file.

Note: Mesh De-Duplication is not available in the Intro version.

Since reading large mesh files can be rather time-consuming, de-duplication
uses a new SynthEyes-specific mesh file format, "SBM" for SynthEyes binary
mesh, which can be read (closer to inhaled) very rapidly. You can import or
export SBM files directly for any mesh; it is a mesh format just like an OBJ
or DXF is. (This file format specification is available if you want to read or
write it from other applications.)
There are a substantial number of de-duplication modes that control
where and how SBM files are generated. These modes can impact the integrity
of your files and how you send files to others, so please read the following
sections carefully.
Note that there is a preference for the de-duplication mode, found in the
Meshes section (along with a minimum size setting), as well as a scene setting,
found on the Edit/Edit Scene Settings panel. By default when a scene is created,
the scene will just use the preference setting, even if it is later changed.
You can use the scene settings panel to set the de-duplication mode
specifically for a given file. (The most common reason for that is to set the mode
to Never before saving a SNI file that will be sent to someone else, more on that
later.)
Mesh de-duplication files may be located in a "Mesh Area," which is a
folder within your File/User Data Folder. The location of this folder is a preference
which can be changed from the Folders portion of the Preferences panel.
De-Duplication Modes
There are quite a few de-duplication modes, to encompass a variety of
scenarios. We recommend sticking to the "Imports..." modes in most cases
where the large meshes are created external to SynthEyes and imported to it.
Generally you should not change imported meshes inside SynthEyes if you are
using "Imports..." modes.
With all of these, you can "follow along" and learn more by looking at the
sbm files created in the respective folders. It is a transparent process. You can
open those sbm files directly from SynthEyes.


Never. No de-duplication is performed. This is simplest, and best for sending
files to others.
Imports. De-duplication is performed for all imported meshes, and no others.
The originally-imported mesh file is re-read each time the file is opened,
which can be slow. This can be useful if the mesh is changed frequently.
Important: any changes to the mesh within SynthEyes will be lost if you
save then reopen the file. (Vertex and facet selection data is cleared when
a file is read).
Imports with SBM. De-duplication is performed for all imported meshes, and no
others. An SBM file is written to (and later read from) the same folder and
with the same name as the original mesh file, which makes it easy to
identify and keep track of. The SBM will contain any modifications made to
the mesh; for this reason the mesh is not automatically reread from the
original if the original changes.
Imports with local SBM. De-duplication is performed for all imported meshes,
and no others. An SBM file is written to (and later read from) the Mesh
Area with the same name as the mesh's original filename. This is useful
when the original mesh is located on a remote networked drive, so you
can use a fast local disk instead. The SBM will contain any modifications
made to the mesh; for this reason the mesh is not automatically reread
from the original if the original changes.
Scene, Single. De-duplication is performed for any mesh that exceeds the size
limit on the preferences panel. The SBM file will be placed with the SNI
file. See the following section on Single vs Versioned Files.
Scene, Versioned. De-duplication is performed for any mesh that exceeds the
size limit on the preferences panel. The SBM file will be placed with the
SNI file. See the following section on Single vs Versioned Files.
Mesh Area, Single. De-duplication is performed for any mesh that exceeds the
size limit on the preferences panel. The SBM file will be placed in the
Mesh Area. See the following section on Single vs Versioned Files.
Mesh Area, Versioned. De-duplication is performed for any mesh that exceeds
the size limit on the preferences panel. The SBM file will be placed in the
Mesh Area. See the following section on Single vs Versioned Files.
Single Files vs Versioned Files
The Scene and Mesh Area modes can be used whether a mesh was
imported or created within SynthEyes. Meshes are de-duplicated, or not, based
on the size of the mesh. Since there is no original imported mesh, the mesh is
entirely contained in the SBM file; if the SBM file is deleted, lost, or missing, the
mesh is gone; the SBM file is not a "cache".
The Scene modes store SBM files in the folder with the SNI scene. The
Mesh Area modes store them in the Mesh Area folder described earlier.


The Scene and Mesh Area modes come in both Versioned and Single
variants. Here's the problem that is being solved:
Suppose you have a SNI file with a large mesh that you edit from time to
time, and you are keeping different versions of the file, possibly using an auto-
incrementing auto-save. You have v05 of the file, and just wrote v08. Now you
want to go back to v05, but the mesh was different at that time.
Unless the mesh de-duplication has kept separate versions of the SBM
file, going back to v05 accurately will be impossible. The Versioned modes do
keep multiple versions, producing a new SBM when the mesh has changed.
For example, suppose the mesh changed between v02 and v03, and v06
and v07. The de-duplication code will write new SBMs associated with v03 and
v07 of the file, and none for v01, 02, 04, 05, 06, or 08. When you go back to v05,
you will be using the SBM written for v03. If you re-open v08, you will be using
the SBM from v07.
While SynthEyes can automatically limit the number of auto-incremented
SNI files, it does not automatically cull the SBM files; that's a bit tricky! In the
example above, the SBM from v03 is used for v06, so even though you may
delete the v03 SNI file, you may need to keep its SBM files for quite a while.
SynthEyes can't keep SBM files for each SNI; that's what we're avoiding in the
first place!
So at some point you may need to clean up the SBM files manually. You
can determine what SBM files a given SNI file uses by looking at the mesh file list
in its File/File Info.
The Single modes avoid this complexity by not doing versioning: they
keep only the single, most-recent, version of the mesh. That will cut down the
number of versions of SBM files you have, and the thought process required to
clean them up. But, if you go back to an earlier version of the file, you'll still get
the most-recent version of the mesh. That could be a huge problem, or none at
all, depending on what you're doing. So pick wisely!
Forcing a Re-Read
You can force an imported mesh to be re-read (for example if its originator
changes it) by selecting it, then using File/Import/Reload. It will also be reread if
the SBM file is deleted, but that might be more mistake-inducing. (Be sure not to
delete an SBM that has no original mesh file, or which has been changed later!)
Sending SNI Files to Others
If you send a SNI file to someone else and it contains de-duplicated
meshes, the meshes will be missing: the recipient needs to have the SBM files
(or original meshes).
While it is possible to identify the necessary SBM files from the File/File
Info listing, we recommend not attempting to package those files: the user will


have to locate them at the same full path locations, which is unlikely and
generally unreliable.
Accordingly, to prepare a SNI file to send to someone, it is recommended
to open the Edit/Edit Scene Settings panel and set the de-duplication mode to
Never, then save the file. The resulting SNI file will then contain all the meshes.
(You might turn on the Compress .sni files preference before saving, as well.)

Community Translation Project


SynthEyes has a localization system that allows users to create translation
files for the SynthEyes user interface: for the menus and for the dialog text
(buttons, checkboxes, etc). Each translation is contained in an XML file, initially
produced by SynthEyes, that can be edited by users in a text editor (Notepad,
TextEdit, vim, emacs, etc) or XML Editor such as XML Notepad 2007. It is our
hope that users will want to create and share these XML files for the benefit of
their countrymen and -women.
There may be some rough spots in this process due to limited multilingual
experience, apologies!
Using an Existing Language Translation XML
You can look on the SynthEyes website for available language translation
files at the Community Translation Project webpage. If there is a suitable one
available, you can download it; read on for installation. If not, skip ahead to the
section on Creating a New Language Translation XML.
SynthEyes language XML files must be installed in either the user or
system script folders to be visible and usable in SynthEyes. You can locate these
folders from inside SynthEyes by clicking the Script/User script folder or
Script/System script folder menu items. Language files in the system folder are
available to all users on the system, files in the user folder are available only to
you.
Open either of these folders and copy the XML file into it. You may need
to authenticate with your operating system to copy into the system script folder. If
SynthEyes is already running, click File/Find New scripts.
Important: A language translation file may require that your operating
system be set to a specific primary language in order to display translated text
appropriately. Be alert to notes to that effect from the creator of the language file,
and make sure your system is set to the same language.
To begin using a new translation file, open the preferences (Edit/Edit
Preferences) and select the language in the UI Language field on the right side of
the preferences panel. Click OK to save the change and close the preferences
panel.
Restart SynthEyes to begin using the new translation file. See the later
section on Tracker etc Names for additional usage information.


Set the UI Language preference back to the empty entry and restart to
return to an unmodified interface.
Creating a New Language Translation XML
Starting a new language translation is easy. Click File/Make new language
template. Enter the name of your language in English (French, German, etc),
then select a filename in which to store it. By default it will go in your user script
folder, which is basically essential for the initial stages of development.
Open the newly-created file in your text or XML editor. All the translatable
elements are listed, over 1600 at present. All the menu items are listed in
alphabetic order, then all the dialogs are listed in alphabetic order, with each
editable control in alphabetic order within each dialog.
Each translatable element has from and to elements. Do not change the
from field! Add your translation to the to field, which is initially empty. Please see
the next section, on Character Sets, for some more details. Also, do not change
the id or class fields.
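To make this concrete, a single entry might look roughly like the following.
This is a purely hypothetical sketch: the element names, id value, and layout
shown here are illustrative only, not copied from an actual template, so always
start from a real file generated by File/Make new language template and keep
its exact structure.

    <!-- hypothetical sketch of one translatable element -->
    <item id="1234" class="menu">
      <from>Save Next Version</from>             <!-- do not change the from field -->
      <to>Enregistrer la version suivante</to>   <!-- put your translation here -->
    </item>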
Keep in mind that the space available for each control is fixed and limited,
so translated text will have to be kept very short and succinct or abbreviated.
Requiring a smaller UI font size preference, or a specific UI font name
preference, is an option if necessary. (Menu translations do not have to worry
about limited space.)
You do not have to translate every element; you can start with as few or
as many as you want.
Note that some text is dynamically generated by SynthEyes, and therefore
cannot be permanently changed by a translation. For example, the Undo and
Redo menu items are changed dynamically based on what will be undone or
redone. It is possible that there may be restrictions on translating other fields as
well.
We strongly recommend that you do not change the order of entries. As
long as everyone uses the same alphabetic ordering, it is easy for you or us to
compare and merge files translated by different users, to produce a composite
result. Perhaps you might translate some menus, someone else translates some
dialogs; we can put them together into a more complete file.
Character Sets
SynthEyes on Windows and OS X uses the 8-bit ISO-8859-1 Latin-1
character set (not Unicode). European languages should be able to stay with the
default character set. (SynthEyes on Linux uses UTF-8, since that is the only
choice.)
To use other character sets, you can switch the operating system
language settings (code page). Any code page must have the usual ASCII
character set in the lower 128 positions, regardless of any shift codes, with
language-specific characters in the upper 128 positions 0x80 - 0xFF. For
example, EUC-JP takes this approach.
When you open and save the translation XML file from your text editor,
you'll need to make sure that it uses the right character coding.
You can note any required settings in the notes attribute of the top-level
CTPLanguage XML node (second line of the XML file).
Tracker etc Names
SynthEyes makes little use of user-entered text fields, mainly for tracker,
mesh, object, and other entity names. The tracker and object names can be
exported to many other programs, which may not support alternate character
sets at all, or may support them in other ways.
Accordingly, we strongly recommend that you use only the standard ASCII
characters A-Z, a-z, and 0-9 in tracker etc names, with a leading alphabetic
character (don't start a name with a number!), and no space characters. That is
recommended practice even for plain English usage.
Non-English characters in tracker names may display correctly in some
places (with the appropriate character set in place), but will not display
correctly in any OpenGL window. OpenGL windows include the perspective
view, graph editor, and SimulTrack views, plus the camera and 3D Views on Mac
OS X and Linux. (Camera and 3D Views can be set to OpenGL or non-OpenGL
via preference.)
While we would prefer to use UTF-8 Unicode throughout, at present there
are two complications:
Providing consistent conversion of UTF-8 into the 16-bit Unicode
approach that Windows prefers (very simple for OS X).
Being able to draw any Unicode font in OpenGL. Tricky!
Since SynthEyes for Linux does use UTF-8, text will not be interpreted
consistently across platforms if non-ASCII characters are used. Note that non-
ASCII characters won't be displayed properly in OpenGL windows on Linux.
Over time we'll hopefully be able to address all these issues.

Opening the Shot
To begin tracking a new shot, select File/New. However, if you wish to
track a collection of disparate digital stills, instead of a video shot, see Survey
Shots.
Select the desired movie-type file, such as an AVI, QT Movie, or MPEG
file, or the first frame of a series of image files, such as a JPEG, TIFF, BMP, SGI
RGB, Cineon, SMPTE DPX or Targa. On a Mac, file type will be determined
automatically even without a file extension, if it has been written properly (though
OS X does officially require extensions). If you have image files with no extension
or file type, select Just Open It in the Open File dialog box so your files are
visible, then select the first one and SynthEyes will determine its type
automatically, if possible.

WARNING: SynthEyes is intended for use on known imagery in a
secure professional environment. It is not intended or updated to
combat viral threats posed by images obtained from the Internet
or other unknown sources. Such images may cause SynthEyes or
your computer to crash, or even to be taken over by rogue
software, potentially surreptitiously.

File and path names should consist of standard ASCII characters and
digits, not special or accented characters.

Note: The SynthEyes Intro version is limited to a maximum image
resolution of 1920x1080.

Image Sequences
SynthEyes will normally produce an IFL (image file list) file for each file
sequence, and write it into the same folder as the images. The IFL serves as a
reliable placeholder for the entire sequence and saves time re-opening the
sequence, especially on networks, because SynthEyes does not have to re-
check the entire sequence. If the IFL file conflicts with your image-management
system, or you frequently open the same image sequence from different
machines, producing a different file name for the images from each computer,
you can turn off the Write .IFL files for sequences preference. IFL files are
required for survey shots, however.
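For reference, an IFL file is ordinarily just a plain text list of the image files
in the sequence, one per line, along the lines of the sketch below (the file names
are hypothetical, and the actual file may contain full paths):

    img037.tif
    img038.tif
    img039.tif
    img040.tif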

Note: For maximum reliability, you should always open the first image
in a sequence. Although SynthEyes can open any image, that image
defines the beginning of the shot. Other software, such as After
Effects, may interpret the shot separately. Use the Start/End controls
on the Shot setup panel or the time bar to control what portion of the
shot you track (this gives the most flexibility for editorial changes as
well).


The "Read 1f at a time" preference (on the Shot menu and in the
preferences) controls whether SynthEyes tries to read multiple images from a
sequence or movie simultaneously, or only one at a time.
This control is ON by default, ie SynthEyes is forced to read only one
frame at a time, which is a lower-performance mode. It is on by default because
a substantial number of customers encounter problems with multi-threaded
reading when their images are on networked disks: the networking software isn't
accustomed to having 8-24 threads each ask for a different file at exactly the
same time, and the network drivers or disks may hang or hiccup.

Tip: Try turning Read 1f at a time off if your images are on a local
disk! This can increase shot-loading rate by several times, especially
on RAID disks.

Reading AVI, Quicktime, etc "Movies"


Movie files (such as AVIs, Quicktimes, REDcode R3D, MP4s, MTS, M2Ts,
etc.) can be convenient due to their simplicity (a single file copied directly
from the camera) and small size. However, these formats have limitations that
can make them of limited use in post-production:
1. There is a vast profusion of compression and file formats, and
2. Inter-frame compression makes it time-consuming to access any
specific frame.
SynthEyes uses the operating system, ie Windows, Linux, or OS X, to
read movie files. SynthEyes does not contain any of the complex code to read
movie files "on its own," with the exception of RED files, which are built in. It is
the purpose of the operating system to provide this kind of shared capability to all
the applications, so that each one does not have to do so, which would be
impossible. SynthEyes for Linux does not have non-RED movie capabilities.

WARNING: you may encounter substantial delays the first time you
open a "movie" file using Quicktime on the Mac or Windows Media
Foundation on Windows, as the file must be indexed to locate the time
of each frame. This index is written with a ".mtimes" or ".times"
extension so that subsequent opens occur rapidly. (If you turn off the
Write IFLs preference, no times file is written, and each re-opening will
take as long as the first.)

Much like still cameras can produce "RAW" images that require special
software to read them and produce standard PNGs or JPEGs, some cameras
produce specialized image formats that must be converted to standard formats
for further use. (See the section on reading RED files.)
The operating system software loaded on your particular machine plays
a large part in determining what movies can be read. If you have a particular
hardware device that produces movie files, such as a camera or disk recorder,


be sure to install the software that came with that device, so that its files can be
read. Or, you may have to look online for the appropriate codec for footage
supplied by a customer. Older legacy codecs are being dropped by the operating
system vendors.
Highly compressed files produced by cameras are not well-suited for post-
production, as reading a single frame often requires decompressing many others
first. SynthEyes often reads frames out of order, either at your request or during
multi-threaded processing, so reading inter-frame compressed files can take a
very long time, especially if there are few keyframes. If a movie file format must
be used, compression formats such as ProRes are specifically intended for post-
production.
In addition to programs supplied by camera vendors, programs such as
After Effects and Final Cut specialize in "importing" footage from cameras and
decompressing the camera-specific acquisition formats to more useful post-
production-ready image sequences.
The use of movie files must be carefully considered. For further
information on using them and configuring readers, see the section Advanced
Details: Reading "Movie" Files. Also, SynthEyes's Disk Caching feature can
make working with movie files (and IFLs) more efficient.
Due to these inherent limitations, much primary post-production work is
done with image sequences instead of movie files.

Basic Shot Settings


Adjust the following settings to match your shot. You can change these
settings later with Shot/Edit Shot. Don't be dismayed if you don't understand all
the settings to start; many are provided for advanced situations only. We'll
describe only the most common ones here. There's more information in the Shot
Settings in the reference portion of the manual.
The Image Aspect is the single most important setting to get right. If it's
not right, your 3-D solve will be distorted. Maya users may want to use a preset
corresponding to one of the Maya presets.
The Max RAM Cache GB preference in the Image Input section is worth
checking on. It doesn't affect tracking or solving accuracy; it affects
performance, how many frames will fit in RAM. With enough RAM, this will be the
whole shot. You may want to decrease this value if there are other large apps
running, or increase it a bit if there are none. (The SynthEyes Intro version is
limited to 1.25 GB, about 200 HD frames.)
Note that the Image Preprocessing button brings up another panel with
additional possibilities; we'll discuss those after the basic open-shot dialog.


The following is a condensed listing of the reference material:


Start Frame, End Frame: the range of frames to be solved, within the overall
shot you are opening. You can adjust this from this panel, or by shift-
dragging the end of the frame range in the time bar.
Stereo Off/Left/Right. Sequences through the three choices to control the setup
for stereo shots. Leave at Off for normal (monocular) shots, change to Left
when opening the first, left, shot of a stereo pair. See the section on
Stereoscopic shots for more information.
360 VR Mode. Used to declare that the footage is 360 degree spherical Virtual
Reality footage, with additional options for converting to and from 360 VR.
Time-Shift Animation. Causes all the tracking and animation to be shifted
earlier or later, to compensate when the shot is changed to a new one with
frames added or removed from the beginning.
Frame rate: Usually 23.976, 24, 25 or 29.97 frames per second. NTSC is used in
the US & Japan, PAL in Europe. SynthEyes does not care whether you
use an exact or approximate value, but it may be crucial for downstream
applications, especially when the shot is a 'movie' file, rather than a
sequence.
Interlacing: No for film or progressive-scan DV. Yes to stay with 25/30 fps,
skipping every other field. Minimizes the amount of tracking required, with
some loss of ability to track rapid jitter. Use Yes, But for the same thing,
but to keep only the other (odd) field. Use Starting Odd or Starting Even
for interlaced video, depending on the correct first field. Guessing is fine.


Once you have finished opening the shot in a second, step through a few
frames. If they go 2 steps forward, one back, select the Shot/Edit Shot
menu item, and correct the setting. Use Yes or None for source video
compressed with a non-field-savvy codec such as JPEG sequences.
Channel Depths: Process and Store. 8-bit/16-bit/Float. Radio buttons. You
can select different depths for intermediate processing in the image
processor, and storage in the RAM cache. The selection marked with an
asterisk is the default, based on the source imagery.
Keep Alpha: when checked, SynthEyes will keep the alpha channel when
opening files, even if there does not appear to be a use for it at present (ie
for rotoscoping). Turn on when you want to feed images through the
image preprocessor for lens distortion or stabilization and then write them,
and want the alpha channel to be processed and written also.
Apply Preset: Click to drop down a list of different film formats; selecting one of
them will set the frame rate, image aspect, back plate width, squeeze
factor, interlace setting, rolling shutter, and indirectly, most of the other
aspect and image size parameters. You can make, change, and delete
your own local set of presets using the Save As and Delete entries at the
end of the preset list.
Image Aspect: overall image width divided by height. Equals 1.333 for standard
definition video, 1.777 for HDTV, 2.35 or other values for film. Click
Square Pix to base it on the image width divided by image height,
assuming the pixels are square(most of the time these days). Note: this is
the aspect ratio of the input to the image preprocessor.
Pixel Aspect: width to height ratio of each pixel in the overall image. (The pixel
aspect is for the final image, not the skinnier width of the pixel on an
anamorphic negative.)
Back Plate Width/Height: Sets the width of the film or sensor of the virtual
camera, which determines the interpretation of the focal length. You must
know this if you want to use focal lengths. SynthEyes uses field of view, so
this value does not affect the solve. Note that the real values of focal
length and back plate width are always slightly different than the book
values for a given camera. Use Back Plate Units to change the desired
display units. (See the sketch following this listing for how field of view,
focal length, and back plate width relate.)
Rolling Shutter Enable/Fraction. Checkbox and spinner. Enables and
configures rolling-shutter compensation during solving for the tracker data
of the camera and any objects attached to this shot. CMOS cameras are
subject to rolling shutter; it causes intrinsic image artifacts.
Image Preprocessing: brings up the image preprocessing (preparation) dialog,
allowing various image-level adjustments to make tracking easier (usually
more so for the human than the machine). Includes color, gamma, etc, but
also memory-saving options such as single-channel and region-of-interest
processing. This dialog also accesses SynthEyes image stabilization
features.
Memory Status: shows the image resolution, image size in RAM in megabytes,
shot length in frames, and an estimated total amount of memory required


for the sequence compared to the total still available on the machine. Note
that the last number is only a rough current estimate that will change
depending on what else you are doing on the machine. The memory
required per frame is for the first frame, so this can be very inaccurate if
you have an animated region-of-interest that changes size in the Image
Preprocessing system. The final aspect ratio coming out of the image
preprocessor is also shown here; it reflects resampling, padding, and
cropping performed by the preprocessor.
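As a sketch of why the back plate width only changes how a solve is reported,
not the solve itself: field of view, focal length, and back plate width are tied
together by the standard pinhole relationship, roughly as below. This is a
conceptual illustration in Python, not SynthEyes code, and the plate widths are
example values only.

    import math

    def focal_length_mm(horizontal_fov_degrees, back_plate_width_mm):
        # Standard pinhole relationship: fov = 2 * atan(plate_width / (2 * focal_length))
        return back_plate_width_mm / (2.0 * math.tan(math.radians(horizontal_fov_degrees) / 2.0))

    # The same solved 40-degree field of view, reported against two plate widths:
    print(focal_length_mm(40.0, 36.0))    # about 49.5 mm on a 36 mm plate
    print(focal_length_mm(40.0, 24.89))   # about 34.2 mm on a Super 35-style plate

The solved field of view is the same in both cases; only the reported focal
length changes with the plate width you enter.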

After Loading
After you hit OK to load the shot, the image prefetch system begins to
bring it into your processor's RAM for quick access. You can use the playbar and
timebar to play and scrub through the shot.
Note: image prefetch puts a severe load on your processor by design: it
rushes to load everything as fast as possible, taking advantage of high-
throughput devices such as RAID disks. However, if the footage is located on a
low-bandwidth remote drive, prefetch may cause your machine to be temporarily
unresponsive as the operating system tries to acquire the data. If you need to
avoid this, turn on the Read 1f at a time option on the Shot menu. It is a sticky
preference. If that does not help enough, turn off prefetch on the Shot menu, or
turn off the prefetch preference to turn prefetch off automatically each startup.
You can use the Image Preprocessing stage to help fit the imagery into
RAM, as will be described shortly.
Even if the shot does not fit in RAM, you can get RAM playback of
portions of the shot using the little green and red playback markers in the
timebar: you can drag them to the portion you want to loop.
Sometimes you will want to open an entire shot, but track and solve only a
portion of it. You can shift-drag the start or end of the shot in the timebar (you
may want to middle-drag the whole timebar left or right first to see the boundary).
Select the proper coordinate system type (for MAX, Maya, Lightwave, etc)
at this time. Adjust the scene setting (Edit/Edit Scene Settings), or the preference
setting, if desired.

Frame Numbering (Advanced!)


SynthEyes normally starts all shots at frame #0. This is true whether the
shot comes from a movie file of some type, or an image sequence, regardless of
the image numbers used by the sequence. For example, you have img037,
img038, img039, ... SynthEyes starts with frame 0 (img037), frame 1 (img038),
frame 2 (img039), etc.
By far, this approach will result in the highest efficiency and fewest
problems throughout the entire tool-chain, no matter what other applications you
use.


If you must, there's also a preference setting that lets you start at frame 1
instead of 0, if all your image sequences start at frame 1.
Some studios have been working with an approach where every frame
ever shot on a given production has its own unique frame number. SynthEyes is
happy to read shots using this scheme, and lets you use them with internal frame
numbers starting at zero. We highly recommend allowing SynthEyes to stick
with that, numbering frames from zero.
If you really, really, want to be reading and typing six or seven digit frame
numbers within SynthEyes, you can turn on the "Match frame#" preference.
When you do so, SynthEyes will add the first frame number of the currently-
active camera to all frame numbers it displays to you. You can turn this
preference on or off if you want to see the file numbering temporarily.

Note: SynthEyes 2011 used a different approach to Match frame#

If you have only a single shot in a scene, this may do what you want. If
there are multiple shots for stereo, witness cameras, etc, you will have to
exercise caution to avoid confusing yourself. If you think about it, you'll see that if
the left and right eyes of a stereo shot have different image frame numbers,
according to this scheme, then it's not at all clear what frame the time bar refers
to. Same with witness cameras. This scheme shows the numbering
corresponding to the active camera, while still aligning all shots at the same
starting point.
If you have Match frame#'s turned on, it affects frame number display, but
not exports. The compatibility of other software with very large frame numbers
will vary. There are several technical problems that they cause. You may be able to
easily modify some exports to accommodate large frame numbers, but this is not
a standard option.
In summary: starting at frame zero is a good idea! We like to teach what
we feel is the best approach.

Changing the Imagery


You may need to replace the imagery of a shot, for example, with lower-
or higher-resolution versions. Use the Shot/Change Shot Images menu item to
do this. The shot settings dialog will re-appear, so you can adjust or correct
settings such as the aspect ratio.
When activated as part of Change Shot Images, the shot settings dialog
also features a Time-Shift Animation setting. If you have tracked a shot, but
suddenly the director wants to extend a shot with additional frames at the
beginning, or removed some of them, use the Change Shot Images selection, re-
select the new version of the shot (with the additional or missing images), and set
the Time-Shift Animation setting to the number of frames added or removed
(positive for added, negative for removed). When you click OK, this will time-shift
all the tracking data, splines, object paths, etc later or earlier in the shot by that


amount. You can extend the trackers or add additional ones, and re-solve the
shot.
Time-shifting is a fairly complex operation not to be taken lightly, as it
involves the creation or destruction of information. Some caution and scrutiny
should be given to shifted shots, and some cleanup or fine-tuning of animation
may be required in the vicinity of the beginning of the shot.
If frames from the beginning of the shot are no longer needed, it may be
easiest and best to leave them in place, but change the shot start value by shift-
dragging it in the time bar.

Image Preprocessing Basics


The image preparation dialog provides a range of capabilities aimed at the
following primary issues:
Stabilizing the images, reducing wobbles and jiggles in the source
imagery,
Making features more visible, especially to you for supervised
tracking,
Reducing the amount of memory required to store the shot in RAM,
to facilitate real-time playback,
Correcting image geometry: distortion or the optic axis position.
You can activate the image preprocessing panel either from the Open-
Shot dialog, or from the Shot menu directly.
The individual controls of the image preprocessor are spread among
several tabbed subpanels, much like the main SynthEyes window. These include
Rez, Levels, Cropping, Stabilize, Lens, Adjust, Output, and ROI.


As you modify the image preprocessing controls, you can use the frame
spinner and assorted buttons to move through the shot to verify that the settings
are appropriate throughout it. Fetching and preprocessing the images can take a
while, especially with film-resolution images. You can control whether or not the
image updates as you change the frame# spinner, using the control button on the
right hand side of the image preprocessor.
The image preprocessing engine affects the shots as they are read from
disk, before they are stored in RAM for tracking and playback. The preprocessing
engine can change the image resolution, aspect ratio, and overall geometry.
Accordingly, you must take care if you change the image format: if
you change the image geometry, you may need to use the Apply to Trackers
button on the Output tab, or you will have to delete the trackers and do them
over, since their positions will no longer match the image currently being supplied
by the preprocessing engine.
The image preprocessor allows you to create presets within a scene, so
that you can use one preset for the entire scene, and a separate preset for a
small region around a moving object, for example.

Image Adjustments
As mentioned, the image adjustments allow you to fix up the image a bit to
make it easier for you and SynthEyes to see the features to be tracked. The
preprocessor's image adjustments encompass 3-D LUTs, saturation and hue,
level adjustments, and channel selection and/or bit depth.
Rez Tab.
You can change the processing and storage formats or reduce image
resolution here to save memory. Floating point format provides the most
accuracy, but takes much more time and space. Float processing with Half or 16-
bit storage is a reasonable alternative much of the time. Most tracking activities
use 16 bit format internally; you may wish to use 8 or 16 bit while tracking for
speed and to maximize storage, then switch to float/float or float/half when you
render undistorted or re-distorted images, if you have high-dynamic-range half or
float input.
It may be worthwhile to use only one of the R, G, or B channels for
tracking, or perhaps the basic luminance, as obtained using the Channel setting.
(The Alpha channel can also be selected, mainly for a quick check of the alpha
channel.)
If you think selecting a single channel might be a good idea, be sure to
check them all. If you are tracking small colored trackers, especially on video,
you will find they often aren't very colorful. Rather than trying to increase the
saturation, use a different channel. For example, with small green markers for
face tracking, the red channel is probably the best choice. The blue channel is
usually substantially noisier than red or green.


Levels Tab.
SynthEyes reads files as is by design; in particular, Cineon files are not
automatically gamma-corrected for display. That permits files to be passed
through with the highest accuracy, and also allows you to select the proper
image and display calibration if you like.
The level adjustments are the simple way: they map the specified Low
level to blackest black out (luma=0), and the specified High level to whitest white
(luma=1), so that you can select a portion of the dynamic range to examine. The
Mid level is mapped to 50% gray (luma=0.5) by performing a gamma-type
adjustment; the gamma value is displayed and can be modified. You should be a
bit careful that, in the interest of making the image look good on your monitor,
you don't compress the dynamic range into the upper end of brightness, which
reduces the actual contrast available for tracking.
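As an illustration of the Low/Mid/High behavior just described, a conventional
levels mapping works roughly like the sketch below (Python; SynthEyes's exact
internal formula is not spelled out here, so treat this as conceptual only):

    import math

    def levels(value, low, mid, high):
        # Normalize so that Low maps to 0.0 and High maps to 1.0
        t = (value - low) / (high - low)
        t = min(max(t, 0.0), 1.0)
        # Pick a gamma so that Mid lands at 0.5 (the "gamma-type adjustment")
        gamma = math.log((mid - low) / (high - low)) / math.log(0.5)
        return t ** (1.0 / gamma)

    print(levels(0.5, 0.1, 0.5, 0.9))   # the Mid input value comes out at 0.5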
The level adjustments can be animated to adjust over the course of the
shot, see the section on animated shot setup below.
The hue adjustment can be used to tweak the color before the channel
selection; by making yellows red, you can have a virtual yellow channel, for
example.
The exposure control here does affect the processed images, if you write
them back to disk. That is different than the F-P Range Control setting on the
Shot Setup panel. See the section on using floating-point images.
Note that you can change the image adjustments in this section without
having to re-track or adjust the trackers, since the overall image geometry does
not change.
3-D Look-Up Tables (LUTs)
If you have them available, SynthEyes can read .3dl, .csp, or .cube 3-D
color lookup tables and use them to process your images. These allow rather
complex color manipulations to be performed, as well as potentially allowing you
to match your color monitor exactly. The .csp format is the most powerful and
flexible.
LUTs can be placed within one of the "script" folders so they are available
for easy access as presets, either the user or system folder locations. You can
open up a window to those folders from inside SynthEyes by using the
"Script/User script folder" or "Script/System script folder" menu items. If you add
new profiles while SynthEyes is running, click the "File/Find new scripts" menu
item.
Color profiles are visible in the Image Preprocessor on the Levels tab:
there is a dropdown labeled 3-D Color Map; you can select the desired profile
there. If you change a profile after SynthEyes has read it, click the Reload button
under the dropdown to force the map to be re-read.


You can also open color profile files anywhere in the file system by
clicking the "Open" button on the Levels tab.

Note: Having a large number of color map presets will increase
SynthEyes's startup time. One-time-use color maps should be stored
with the source imagery.

SynthEyes can save the current Level Adjustment, Hue, Saturation, and
Exposure settings in the form of a 1D or 3D color map (as needed). This allows
you to create color-only presets independent of a particular shot. The LUT
resolution is set by a preference in the FILE EXPORT section (1D LUTs are 8x
the setting, which is for 3D LUTs).
With that exception, SynthEyes will otherwise not build LUT tables for you;
it is not a color correction tool. You will need to obtain the tables from other
sources such as the film scanning house. There are some tool scripts for
generating or manipulating color maps in the Script/Lens menu item. There is a
script for combining LUTs, for example if you have a film LUT and a LUT for your
own monitor, you can combine them using the script, since you can only apply
one at a time. There is also a tool for creating color maps for Cineon files from
the white- and black-point values.
You can find additional tools for manipulating and converting LUTs online,
for example at digitalpraxis.net, including a tool for ripping LUTs from before and
desired-after images. That permits you to adjust a sample image in your favorite
color-correction app, then burn what you did to it into a 3D LUT SynthEyes can
use. (Their tools are commercial software, not freeware; we have no relationship
with them and cannot vouch for the tools in any fashion, merely cite them as a
potential example.)

Floating-Point Images
SynthEyes can handle floating-point images from EXR, TIFF, and DPX
image formats. Floating-point images offer the greatest accuracy and dynamic
range, at the expense of substantially greater memory requirement and
processing time. The 64-bit SynthEyes version is recommended for handling
floating-point images due to their large size. DPX images will offer the highest
performance.
Floating point images may use 32-bit floats, or the 16-bit half format. The
half format does not have as much dynamic range, but it is almost always
enough for practical work even using High-Dynamic-Range images. The good
news is that Half-floats are half the size, only 16 bits. The bad news is that it
takes a substantial amount of time to translate between the half format and an
8-bit, 16-bit, or float format you can track or display.
Accordingly, SynthEyes offers separate bit-depth selections for processing
and for storage. If you need the extended range of a float (or 16-bit int) format,
you can use that for any processing (especially gamma correction and 3-D


LUTs), to reduce banding, then select a smaller storage format, Half, 16-bit, or 8-
bit. But keep in mind that additional processing time will be required.
Though a floating-point image (float or half) provides accuracy and
dynamic range, to track or display it, it must be converted to a standard 8-bit or
16-bit form, albeit temporarily. To understand the necessary controls, here are a
few details on how that is done (industry-wide).
Eight and sixteen bit (unsigned) integers are normally considered to range
from 0 to 255 or 65535. But to convert back and forth, the numbers are
considered to range from 0 to 1.0 (in steps of 1/255), or 0 to 1.0 (in steps of
1/65535).
Correspondingly, the most-used values of the floating-point numbers
ranges from 0 to 1.0 also. With all the numbers ranging from 0 to 1, it is easy to
convert back and forth.
But, the floating point values do not necessarily have to range solely
between 0 and 1. With plenty of dynamic range in the original image, there may
be highlights that may be much larger, or details in the shadow that are much
lower. The 0 to 1 range is the only portion that will be converted to or from 8- or
16-bit.
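As a concrete sketch of that conversion (conceptual only, mirroring the
industry-standard mapping rather than quoting SynthEyes internals), converting a
floating-point value to 8-bit clamps the 0-to-1 portion and quantizes it:

    def float_to_8bit(x):
        # Only the 0..1 portion survives the conversion; highlights above 1.0
        # and values below 0.0 are clipped.
        x = min(max(x, 0.0), 1.0)
        return int(round(x * 255.0))

    def eight_bit_to_float(i):
        # Back the other way, in steps of 1/255
        return i / 255.0

    print(float_to_8bit(1.7))   # 255: a bright highlight clips to white
    print(float_to_8bit(0.5))   # 128 (rounded)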
The F.-P. Range Adjustment (F.-P. for floating-point) on the Shot setup
dialog allows you to convert a larger or smaller range of floating-point numbers
into the 0 to 1 range where they can be inter-converted. The effect of this control
is to brighten or darken the displayed image, but it affects only the display and
tracking, not the values themselves.
You can adjust the F.P. Range Adjustment, and it will not affect the
floating-point images later written back to disk after lens distortion or
stabilization.
This is quite different than the Exposure control on the Levels tab. The
Exposure control changes the actual floating-point values that will be written back
to disk later. The two controls serve different purposes, though the end result
may appear the same at first glance.

Advanced Details: Reading "Movie" Files


Movie files such as AVIs and MOVs present some challenges to the
unwary; the following sections provide education to better understand what can
be read and how to configure movie reading. Apologies in advance that this topic
is so complex, but that is the reality.
When opening movie files for the first time with the latest OS X and
Windows schemes, you may encounter substantial delays comparable to the run
time of the shot, as the file must be read or even fully decoded to locate the
frames in it. This is an unfortunate consequence of the operating-system
vendors being focused on TV-playback applications, not on the frame-accuracy


required for post-production. SynthEyes writes an index file with a ".times" (Win)
or ".mtimes" (Mac) extension so that the file will open quickly the following times.

Tip: You may be able to open movie files in SynthEyes that you cannot
open in your downstream post-production software. If that is the case,
use the image preprocessor to write an image sequence version of the
shot, then use Shot/Change Shot Images to switch SynthEyes to use
that version also. It will then export the sequence name for the
downstream software, rather than the hard-to-read original.

Background
SynthEyes recognizes the following file extensions as possibly-readable
movie files (these lists are subject to change at any time):

Mac OS X : .mov .mpg .mpeg .mp4 .avi .r3d .ts .dv .3g2 .3gp .3gp2 .3gpp .m2v
.m4v .mp4v .wmv .asf .divx .mts .m2t .m2ts

Windows : .avi .mov .mpg .mpeg .mp4 .r3d .mxf .wmv .3g2 .3gp .3gp2 .3gpp
.m2v .m4v .mp4v .asf .ts .dv .divx .mts .m2t .m2ts

Linux: .r3d

Just because an extension is on this list does not mean that SynthEyes
can read it. These extensions are here in the hopes that your system contains a
codec that can read it.
SynthEyes reads only RED R3D files, all others are read by your
operating system on SynthEyes's behalf, either directly or using additional
specialized software called codecs.
Files such as .avi and .mov are container formats that specify only how
data is wrapped, not the format of the image data. So h.264 data can be
contained in a .avi or in a .mov. While your operating system may be able to
unwrap the data from an AVI or MOV, if it does not contain the appropriate
codec, it will not be able to uncompress the images.
Some codecs are supplied by the operating system, while others are
available in the software supporting particular cameras or storage systems, or on
the internet for free or for purchase. If SynthEyes cannot read a particular movie
file, you should determine the codec involved and establish where it comes from
(the Quicktime player's Movie Inspector window can help with this).
Movie-Reading on Mac OS X
In OS X, there is only a single movie-file reading subsystem, Quicktime.
Apple does not supply a 64-bit version of Quicktime. Apple provides a set of
stubs that translate 64-bit requests from 64-bit SynthEyes to 32-bit Quicktime.


For information on formats supported by OS X, see
"http://support.apple.com/kb/HT3775 : Media formats supported by QuickTime
Player."
For information on some additional formats that you may be able to read
using additional third party software, see "http://support.apple.com/kb/HT3526 :
Adding additional media format support to QuickTime."
You may be able to locate additional Quicktime codecs as well.
Note that though Apple lists the AVI file format, there are few available
codecs for reading AVIs using Quicktime (Win on Mac), and that due to the
details of the 64-bit to 32-bit translation, only a predefined list of codecs can be
used to output Quicktimes.
Movie-Reading on Windows
Windows has maintained compatibility with quite old codec software by
providing a succession of movie-reading systems, resulting in a rather complex
situation.
Reader Subsystems
Windows provides an extensive set of subsystems, from oldest (1990s) to
newest:
the original Video for Windows (VfW, AVI) subsystem,
the Quicktime for Windows subsystem (supplied by Apple),
the DirectShow subsystem, and
the Windows Media Foundation (WMF) subsystem. (Windows 7+ only!)
(There's another subsystem for DRM-protected content that SynthEyes
does not support, as well)
Each subsystem has its own strengths and weaknesses. The VfW
subsystem is limited to files of at most 2 GB but opens files quickly. The WMF
subsystem is the most advanced, 64-bit with support for the latest formats
including those from cell phones, but Windows 7 is required to use WMF.
Each of the subsystems supports a different set of codecs, corresponding
to its age. So the AVI subsystem supports old codecs, but not the latest, while
the WMF subsystem supports only recent codecs.
For information on media formats supported in WMF, see
"http://msdn.microsoft.com/en-us/library/windows/desktop/dd757927%28v=vs.85%29.aspx :
Supported Media Formats in Media Foundation." Note that WMF reads some
common OS X .MOV formats.
For information on DirectShow, see "http://msdn.microsoft.com/en-
us/library/windows/desktop/dd407173%28v=vs.85%29.aspx : Supported Formats in
DirectShow."


64-Bit SynthEyes
Windows can run both 32-bit and 64-bit code. Many of the older codecs
are available only in 32-bit form. There are few 64-bit VfW AVI-writing codecs on
Windows (Microsoft RLE, Video 1, Intel IYUV).
To access the 32-bit codecs, SynthEyes contains a subsystem that sends
movie-reading requests from 64-bit SynthEyes to a 32-bit server that is part of
the SynthEyes installation.
Subsystem Ordering
A single .AVI or .MOV to be opened might be opened on all four movie-
reading subsystems. Oh, it might be opened in 64-bit mode, or in 32-bit mode,
for a total of eight combinations!
Depending on the movie, all 8 subsystems might be able to open it, or
none! Each subsystem that can open a movie may result in different
performance.
To bring order to this madness, the SynthEyes Image Input preferences
allow you to control the order in which SynthEyes tries to open a movie file: the
1st Reader (subsystem), the 2nd Reader (subsystem), etc. The first subsystem
able to open the file wins.
When "Via 32-bit" is encountered, the file is passed to the 32-bit server,
which attempts to open it using exactly the same order (excepting itself). So you
can try opening a file in the 32-bit environment first if you want, or last (the
default).
Any subsystems that are not present are ignored, ie WMF is ignored on
Vista and XP machines, and the "Via 32-bit" subsystem is ignored on 32-bit
machines.
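Conceptually, the ordering behaves like a simple first-success-wins loop,
sketched below. This is hypothetical illustrative Python, not SynthEyes internals,
and the reader names are placeholders for the subsystems listed in the preferences.

    def open_movie(path, readers):
        # readers e.g. [vfw, quicktime, directshow, wmf, via_32bit], in preference order
        for reader in readers:
            clip = reader.try_open(path)   # each subsystem attempts to open the file
            if clip is not None:
                return clip                # the first subsystem that succeeds wins
        raise IOError("no available subsystem could open " + path)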
The default ordering should be useful in most circumstances. Changes to
the preferences take effect when previously unseen files are opened. To see new
settings on an already-open file, close and reopen SynthEyes.
Threading Control for Reading Movies
SynthEyes utilizes many different threads during playback and processing.
Unfortunately modern movie-reading systems are designed for simple linear
playback, with a single "read point" that may need to shuttle forward and back to
accommodate multi-threaded processing or a user scrubbing through a shot.
If the shot has few keyframes, that may pose a problem. The Image Input
preferences contain a Reader Threading Mode control.
When set to None, all requests go, in order, through a single worker
thread, producing a relatively well-ordered stream of requests to the movie
reader.


When set to IFL/Red/etc, only the IFL image file reader, BAFF reader,
and Red readers permit multi-threaded reading, all other readers use a single
worker thread.
When set to All, all readers are permitted to multithread. Though most
image readers can process requests only one at a time, with multiple workers the
requests can come in many orders.
The control defaults to All. Some experimentation shows that while this
setting slows playback slightly on some AVI and MOV files, it permits much
higher rates when playing in reverse.

Important: To enable multi-threaded reading, keep "Read 1f at a time"
on the Shot menu or preferences OFF.

If you have a movie file with very few keyframes, so that seeks are slow,
you might consider experimenting with this setting.

Disk Cache
In addition to in-RAM caching, SynthEyes can cache shots onto a hard
drive for potentially faster access in certain circumstances. This feature is
available only in SynthEyes Pro, and makes sense only in the 64-bit
version.
The disk cache can save time when the original footage is located on a
remote disk drive, or is encoded using a codec that takes a substantial amount of
time to decode (such as RED). The disk cache stores the entire shot's data in a
single large flat file ("BAFF" file) within a folder that you can locate on a fast local
disk.

Warning: The Disk Cache does not preserve the shot. If metadata is
required, either turn off the disk cache, or save the metadata in tandem
with the BAFF file. See Shot Metadata below.

The disk cache stores a (decoded) version of the entire original shot,
regardless of shot begin/end settings, before any effects caused by the image
preprocessor. The disk cache stores the images as you work, whenever they are
read by SynthEyes. If you want to load the entire shot, click Play to run through it.
You can monitor the disk cache load percentage in the Memory Status
area of the Shot Setup panel. Unlike the RAM cache, the disk cache is still there
after you close and reopen SynthEyes.
To estimate the size of a disk cache file, multiply the image width times
the image height times the number of frames in the shot times THREE. If the
"Cache only 8-Bit versions" preference is off, then multiply by two if you require
16 bit/pixel processing or "half" OpenEXR files, or multiply by four if the shot has
floating-point values (Don't multiply at all if the preference is on). An alpha
channel will add an additional byte per pixel per frame.
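That rule of thumb can be written out as a quick estimate, as in the sketch
below (Python; an approximation only, and actual BAFF sizes may differ slightly):

    def cache_size_gb(width, height, frames, bytes_per_channel=1, alpha=False):
        # bytes_per_channel: 1 for 8-bit (or when "Cache only 8-Bit versions" is on),
        # 2 for 16-bit or half, 4 for floating point
        per_pixel = 3 * bytes_per_channel          # R, G, B
        if alpha:
            per_pixel += 1                         # alpha adds one byte per pixel per frame
        return width * height * per_pixel * frames / 1e9

    print(cache_size_gb(4096, 2304, 1000))         # about 28 GB, matching the example below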


For example, 1000 RED images at 4096x2304 will require about 28 GB,
which will fit in RAM on customers' machines with 40 GB or more, but with disk
cache, the decoded version will persist from run to run, instead of requiring
decoding each time.
The entire disk cache file appears simultaneously in the address space of
SynthEyes. Since 32-bit applications are limited to 2-4 GB of total address space,
with much overhead, and the cache files are typically at least several GBs, this
explains why disk caching isn't very useful for the 32-bit builds of SynthEyes.
Shot Naming
SynthEyes uses the file name of the shot, with a .baff extension, as the
name of its file in the disk cache folder. For example, Shot15.mov becomes
Shot15.baff. For an image sequence, the name of the image-file-list (IFL) file is
used, which is the name of the first image in the sequence by default.
SynthEyes keeps track of the last-modified date of the original file, so that
if you change the original footage, the disk cache will be flushed and rebuilt, and
it stays synchronized with the footage.

Tip: If you open an image sequence (creating a .IFL file) that has
loaded into the disk cache, and you want to create another SNI file
based on the same image sequence without causing the disk cache to
reload, be sure to (re-)open the .IFL file directly, rather than the first
image in the sequence, which would cause the IFL to be re-written, in
turn causing the disk cache to be invalidated and have to reload.

You may need to pay some attention to make sure that different shots
don't have the same BAFF file name. If they do, either one shot will not be
cached (if they are both open simultaneously), or each time you open one shot,
the cache for the other will be replaced. If you won't be working on the other shot
any more, that is fine. But if you want both caches to be persistent, you need to
name them separately.
This is especially relevant for stereo work: you should name each eye's
images separately. If you have a LeftEye and RightEye folder with Shot37.mp4 in
each, they will conflict. Name them Shot37L.mp4 and Shot37R.mp4, for
example. This will probably help avoid mistakes as well.
Placing the Disk Cache Folder
The disk cache consists of files that reside in a folder whose location you
must specify. Disk caching is off by default, as the best location depends on the
details of your particular machine's configuration, and disk caching may or may
not be a good idea on your machine.
Carefully consider the following factors to determine the possible location
of the disk cache folder:
DO use SSD or RAID drives


unless you commonly work with compressed "movie" files (such as
RED or MP4) that take a long time to decode per frame, do NOT
use conventional rotating hard drives
DO use disk drives internal to your machine, ie connected via
SATA or equivalent.
DO use disk drives connected via Thunderbolt
do NOT use disk drives attached via USB
do NOT use disk drives attached via a network (Ethernet or SAN)
DO create a new folder just for the disk cache (it will contain many
large files)
DO use D:/DiskCache or /Volumes/FASTRAID/DiskCache for
example
do NOT use D: and do NOT use /Volumes/FASTRAID etc
do NOT use a folder on a partition formatted as FAT-32 (the
maximum allowable file size is too small)
DO ensure you have full read/write permissions for the disk cache
folder
do NOT allow the disk cache folder to be included in your regular
backups (This folder will be very large and frequently changing. If
you back it up, your backup size will explode. If the file is lost, it is
easily regenerated from the original shot.)
DO be aware that the large files written for disk caching may result
in life-limiting "wear" on SSD drives, as determined by how many
different shots you open per day and their sizes etc.
we recommend NOT using your main system SSD as a disk cache.
Using a secondary SSD drive will balance the performance and life
of the drives.
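If you want to sanity-check a candidate folder against these guidelines before pointing SynthEyes at it, a rough script along the following lines can help. It is a sketch that only covers what is easy to test automatically (existence, write permission, free space), not the drive type or the FAT-32 limitation:

import os, shutil, tempfile

def check_cache_folder(folder, planned_cache_gb=250):
    problems = []
    if not os.path.isdir(folder):
        return ["folder does not exist"]
    try:
        # Verify full read/write permission by actually creating a file there
        with tempfile.NamedTemporaryFile(dir=folder):
            pass
    except OSError:
        problems.append("no write permission")
    # Allow 5-10 GB of margin beyond the planned maximum cache size
    free_gb = shutil.disk_usage(folder).free / 2**30
    if free_gb < planned_cache_gb + 10:
        problems.append("only %.0f GB free for a %d GB cache" % (free_gb, planned_cache_gb))
    return problems

print(check_cache_folder("D:/DiskCache") or "looks usable")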
To turn disk caching on, open the SynthEyes preferences (Edit/Edit
Preferences), and look at the Folder Presets group box, at the bottom right of the
preferences dialog. Change the drop-down selector to Disk Cache, then click Set
and select the desired folder to use (see below).
To turn disk caching off, use the same process as to set the Disk Cache folder, but click Clear instead. This does not delete the folder or any files within it.
Disk Cache Preferences
Aside from the folder location described above, which controls not only the location but also whether disk caching is enabled at all, there are some additional controls that determine which shots are cached and how much space is
allocated to caches. These may be found in the Image Input subsection of the
preferences panel. The spinner values are in units of gigabytes (GB), ie
1024*1024*1024 bytes.
Min disk-cache shot Shots must require a cache at least this big to be cached.
There's no point caching a shot that fits easily in RAM.


Max disk-cache shot Shots must be less than this size to be cached. Defaults to
50 GB. This is just a reference value; if you are working with
large 4K shots you may need to increase it.
Maximum disk-cache size The maximum total size of all the files in the disk
cache. Old disk cache files will be deleted to stay under this
maximum size. Defaults to 250 GB. You will almost certainly
need to set this to correspond better to the size of the disk
containing the disk cache. If you are dedicating an SSD or
hard drive as a disk cache, allow 5-10 GB of margin.
Cache only 8-Bit versions When set, all shots will be disk-cached at 8 bits per
channel bit depth, regardless of their original depth, to
reduce disk cache space and improve speed. When this
checkbox is off, shots are cached at the bit depth of the
original images. See the following section for more
information.
Note that disk cache files are created at the required final size. You cannot
determine how much of the cache has been loaded by looking at the (BAFF) file
in your operating system... see the Shot/Edit Shot panel's Memory Status area.
Native vs 8-Bit Caching
Normally, when a shot is read, the image bit depth (8-bit, 16-bit, floating)
is determined by the "Process Depth" setting on the Shot Setup panel (which is
also found on the Rez tab of the image preprocessor). That turns out to be
problematic for disk caching, where shots can be open from multiple places and
settings can change without warning.
The disk caching system uses one of two different approaches, controlled
by a preference (Cache only 8-Bit versions):
Always caching a shot in its native bit depth
Always caching every shot at 8-bit depth
Note that the setting does not matter if all your shots are 8-bit, for
example, a typical AVI, MOV, Targa, or JPEG!
Using the native setting (default) preserves the full original content of the files, and is highly recommended for applications such as stabilization or lens undistortion, where modified images will be output.
Using the 8-bit-only setting will cause the images to be converted to 8-bit, then stored in the disk cache. This will improve interactive performance and minimize the size of the disk caches. Do not use it for stabilization or shot undistortion, however.

Fine point: if you have half-float or floating-point images and 8-bit depth, the Range Adjust control affects how the images are converted to 8-bit format. Normally, floating-point values range from 0..1 and the range control can be left at zero. If you adjust the range setting, it will affect all images as they are read and placed in the disk cache. If you change the value after reading some but not all images, you can wind up with a mixture in the disk cache, which is not good. If that occurs, you should flush the cache using Script/Flush Shot's Caches.

When you change the preference, all currently-open shots will be converted (and flushed in the process). Other disk caches are unaffected until the shot is reopened.
Filling the Disk Cache
A disk cache will fill automatically as you work within SynthEyes, storing
each frame you use.
If you want to fill the cache all at once, perhaps before working on a shot
at all, you should open SynthEyes, open the shot, and click Play to play through
the entire shot. If you are using only a portion of the shot's entire frame range, then only frames within that range will be valid in the cache. If you want the entire shot to
be valid in the cache (for example, to work disconnected when you might change
the frame range), then you should load the disk cache before changing the
beginning and ending frames of the shot.
The Shot/Enable Prefetch and Shot/Read 1f at a time settings may affect
how fast frames read, and thus how long it takes to fill the cache. The best
settings will depend on your particular machine and the exact details of the
source imagery (file type and codec), so you may want to experiment a little. You
can monitor the playback frame rate on the SynthEyes status line to help.
When a shot is played, the original imagery is read, decoded, and stored in the disk cache, producing a large data flow.
If your machine has RAM available, some of the disk cache will be stored
temporarily by the operating system in that RAM, which saves time versus writing
it to disk immediately. The operating system will move some of the data to the
disk as it needs to use that RAM instead.
Once all available RAM has been used, additional frames can be decoded
only as fast as they can be stored to disk. For example, with a 30 MB/sec non-
SSD disk and 6 MB/frame, frames will be read at about 5 frames/second.
SSD drives are recommended for disk caches because they are much
faster than hard drives. An Intel 520 SSD is specified to write at up to 520
MB/sec (6 Gbit/sec SATA), which would permit writes 10x faster, in the 50
frame/sec range. At that speed, the image decode time will dominate the
playback rate. (SSDs similarly read much faster than hard drives, which is useful
once the shot is in the disk cache.)
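To put rough numbers on this, the fill rate is simply the smaller of the decode rate and the drive's write bandwidth divided by the frame size. A small sketch, using the figures from this section (the 24 fps decode rate in the second example is an assumption):

def cache_fill_fps(disk_mb_per_sec, frame_mb, decode_fps=None):
    # Once free RAM is exhausted, frames are stored no faster than the drive can write them
    write_limited = disk_mb_per_sec / frame_mb
    return write_limited if decode_fps is None else min(write_limited, decode_fps)

print(cache_fill_fps(30, 6))        # ~5 fps on a 30 MB/sec hard drive
print(cache_fill_fps(520, 6, 24))   # fast SSD: the (assumed) 24 fps decode rate dominates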
When you close SynthEyes, any unsaved data will immediately be
queued to be written to disk. That can be tens of GBs depending on size of the
file and the amount of RAM on your machine. Since a standard non-SSD disk
drive has only a few tens of MBs/sec of write speed, saving the remaining data


can take ten minutes or more. Your operating system does that automatically in
the background, so that it will have little impact on other things you are doing.

Warning: if your electric power fails or you force your machine to shut
off before the BAFF file has been completely written, then any un-
saved data will be lost. The BAFF file will be unusable, since it will
claim to have many valid frames, but some will not have the complete
image stored. Although this is inconvenient, it is easily rectified by
clicking Script/Flush Shot's Caches.

The Disk and RAM Cache Pipeline


To help you understand disk and RAM caching in SynthEyes, this section
provides some more detailed information about how images are read, processed,
and stored. The sections below provide information on each stage/step in this
pipeline.
Image Readers. There is an image reader in SynthEyes for each supported still
RGB image format, such as JPEGs, TIFFs, PNGs, Cineons, etc. These
components get told to produce 8-bit, 16-bit, or floating-point buffers, filling them
with whatever data the image supplies, which may be 8-bit, 16-bit, or floating
point depending on the particular image. The image readers can also read and
store the alpha channel, if one is in the file, or read a separate alpha-channel file.
Movie Readers. There is a movie reader for each supported movie file type,
such as MOVs, AVIs, MP4s, IFLs, BAFFs, etc. The IFL movie reader uses the
appropriate still image reader to read each image. Movie readers for AVIs,
MOVs, etc, use the operating system to decode the image, as described
previously. As with the image readers, the movie readers get told what format
buffer to produce, and whether or not to read the alpha channel, if it is present
and supported.
Disk Cache Reader. The disk cache is a movie reader too. If an image is
present in the disk cache, it is supplied immediately from the cache. If not, the
disk cache passes the request to the underlying original movie reader. When the
disk reader asks for an image, it always asks for the image in its original bit
depth, unless the Cache only 8-Bit versions preference is on. If that preference is
on, the disk reader always requests 8-Bit versions. Either way, when the disk
cache receives an image from the underlying reader, it caches it in the BAFF file
for (faster) future use. The disk cache images are not pre-processed.
Image Preprocessor. The image preprocessor performs all the color
adjustments, resolution adjustments, distortion correction, stabilization, etc. It
asks for images from the shot's movie reader, which is either a disk cache reader
(if the shot is disk-cached), or the appropriate movie reader. The image
preprocessor asks the movie/disk cache reader for the image in the format set by
the Process Depth area of the Shot/Edit Shot shot setup dialog, which is also on
the Rez tab of the image preprocessor. The image preprocessor processes the
image in whatever format it is presented in by the reader. Accordingly: if the


shot is in disk cache, the Process Depth setting will be ignored, since the disk
cache reader uses the Cache only 8-bit versions setting.
RAM Cache. The RAM cache stores preprocessed images, so that images don't
have to be repeatedly reread and reprocessed. It stores up to the number of GBs
set by the Max RAM Cache GB preference in the Image Input section. The RAM
cache requests images from the image preprocessor, asking for images in the
Store/Output format. The pre-processed images are always converted into this
desired format for storage, right at the end of pre-processing. This permits you to
use floating point format for processing, but convert to 8-bit for storage, for
example. The RAM cache is shared among all open shots.
Tracking and Display. The trackers and viewport display code request images
from the RAM cache, based on what you want SynthEyes to do. If you scrub
through only frames 20-30, then only frames 20-30 will be fetched and potentially
reside in the RAM or disk cache: image reading is based on "pull" not "push."
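The following is a conceptual sketch of this pull-based chain, written in Python purely for illustration; none of these classes or names correspond to the actual SynthEyes internals or API.

def decode_original(frame, depth):
    # Placeholder for the format-specific movie/image reader (the slow path)
    return ("pixels", frame, depth)

class DiskCacheReader:
    """Serves frames from the BAFF cache, pulling misses from the real reader."""
    def __init__(self, cache_only_8bit=False):
        self.baff = {}
        self.cache_only_8bit = cache_only_8bit
    def get(self, frame, depth):
        want = "8bit" if self.cache_only_8bit else depth   # native vs 8-bit caching
        if frame not in self.baff:
            self.baff[frame] = decode_original(frame, want)  # stored unprocessed
        return self.baff[frame]

class Preprocessor:
    """Color/resolution/distortion adjustments, fed by the disk cache or reader."""
    def __init__(self, source, process_depth="8bit"):
        self.source, self.process_depth = source, process_depth
    def process(self, frame):
        return ("preprocessed", self.source.get(frame, self.process_depth))

class RAMCache:
    """Holds preprocessed frames in the output format; trackers pull from here."""
    def __init__(self, preprocessor):
        self.preprocessor, self.frames = preprocessor, {}
    def get(self, frame):
        if frame not in self.frames:          # "pull", not "push"
            self.frames[frame] = self.preprocessor.process(frame)
        return self.frames[frame]

pipeline = RAMCache(Preprocessor(DiskCacheReader()))
pipeline.get(20)   # only the frames actually requested are read and cached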
Operating System Free RAM Pool. (Not an official part of the processing
pipeline, but working off to the side.) Modern computers often have many GBs of
RAM, even though they are running fairly small applications such as web
browsers. This means that often there is a large pool of unused free RAM: unused in the sense that it is not part of any particular application or process on your machine.
The operating system uses the free RAM as a cache for your disk. Once you
read a file, it stays in RAM as well as in disk, so that if you need it again, the
operating system already has it. And when you are writing to disk, the operating
system will put the data in RAM, then tell the application that the write is done,
even though it is merely queued, and may not be written for some time. So this
pool of unused RAM is quite helpful. The operating system may keep the disk
cache's BAFF files in this unused RAM, if there is enough available, which makes
your machine quicker. If it needs the RAM, the operating system writes the data
to disk, if that is needed. If the data on disk is already correct (the file is being
read), then the operating system can take the RAM back at any time. All of this
happens automatically without your interaction.
Final exam question. If you change the Saturation value in the image
preprocessor, what caches are affected? Answer: only the RAM cache. Any
affected images stored in the RAM cache will be invalidated (dropped), and the
image preprocessor will have to regenerate them as they are needed. Since the
disk cache stores unprocessed images, it is unaffected, and the RAM cache will
be supplied with images from the disk cache (if present). SynthEyes will not have
to go back to the movie reader (and maybe image reader) to reread the original
images.
RAM Cache Size with Disk Caching
The RAM cache and disk cache both use copious amounts of RAM,
though in different ways. The size of the RAM cache is controlled by the Max
RAM Cache GB preference in the Image Input section. The size of the disk


cache is determined by the size of the shot, but the amount of the disk cache in
RAM is controlled adaptively by the operating system, depending on the
requirements of SynthEyes and other running applications.
If you have the image preprocessor performing extensive processing
(especially un- or re-distortion or stabilization), and the shot can fit into RAM, be
sure to keep the RAM cache large enough to cache the shot, if possible, so that
you avoid repeatedly pre-processing the shot. This is no different than without
the disk cache. Though the disk cache helps quickly access the original images,
it is important to use the RAM cache to minimize repeated preprocessing.
When the image preprocessor is not being used (ie all controls are at their
default settings), then the image supplied by the disk cache will be the same one
saved in the RAM cache. Though the image is listed at its normal size on the
shot setup panel, in fact it does not take any extra RAM, because the image is
already stored in the file on disk. When the image is "stored" in the RAM cache, it
occupies less than a hundred bytes regardless of the image resolution.
Using Disk Cache Files Directly
The disk cache consists of large flat files, one for each shot, each with a
BAFF file extension. BAFF is a custom file type that SynthEyes can open and
read directly as well: essentially it is another kind of movie file, like an AVI or
MOV or MP4.
Once a shot has been completely loaded into the BAFF disk cache file,
the BAFF file can be opened directly by SynthEyes. For example, you might want
to create a BAFF file from a shot on a network, then work on the shot from a
different physical location, disconnected from the original network.
To do that:
Make sure the shot is 100% loaded into disk cache by playing
through the entire shot, then checking the Memory Status area of
Shot/Edit Shot. Make sure it is at 100.0%.
Close SynthEyes
You MUST move the BAFF file to a different folder, because BAFF
files in the disk cache folder are subject to deletion at any time (to
limit the size of the cache, or clear it).
Reopen SynthEyes, and then the SNI file referencing the original
shot.
Do a Shot/Change Shot Images, and select the BAFF file
The entire shot will be available immediately, subject to the time
required for your disk to read the section you need.
The BAFF file format is a new simple flat file-type specifically designed for
very fast operation using advanced virtual memory techniques. SynthEyes never reads it explicitly: the entire file is mapped directly from disk into the application's
address space, and SynthEyes can pull data from it as if it was in RAM.


The operating system reads data for the BAFF file on SynthEyes's behalf,
using unused system RAM as a buffer to speed operation.
Interested developers can inquire about the BAFF format; it should be
straightforward to add high-performance BAFF image readers to other
applications.
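The BAFF layout itself isn't documented here, but the memory-mapping technique it relies on is easy to illustrate. The sketch below memory-maps a hypothetical cache file with Python's mmap module and slices a frame out of it, assuming (purely for illustration) a flat, header-less layout of fixed-size 8-bit RGB frames:

import mmap

frame_bytes = 4096 * 2304 * 3              # one 8-bit RGB frame at 4096x2304

with open("Shot15.baff", "rb") as f:       # hypothetical cache file name
    view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # "Reading" frame 10 is just slicing the map; the OS pages data in on demand
    frame_10 = view[10 * frame_bytes : 11 * frame_bytes]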

Reading RED R3D Files


SynthEyes can read RED's REDcode R3D files directly on Windows,
Linux, and OS X systems.

Warning: RED periodically updates their cameras and software with new features and data formats that can make them unreadable by software built with earlier versions of the RED SDK. SynthEyes currently uses RED SDK 6.2.2, which includes 8K support; check the release history for your version for its RED SDK version.

RED files can be read by three different methods, ordered fastest to slowest:
RED Rocket hardware board
GPU-accelerated processing using your video card
Main CPU processing
They can't all read all types of file, however. Most of the time, something
reasonable should happen automatically without your involvement, if you adjust
Shot/Enable prefetch appropriately. We'll discuss optimizing your configuration
for each case below.

Warning: Though old 32-bit machines can read RED files, the large
memory size of the files will make them very difficult to use effectively
on 32-bit systems: it's time to get a 64-bit system.

RED Rocket Decoding


If one or more RED Rocket boards are present, they will be used by default. We're only assuming that the Rocket is fastest, though; if you have a big, powerful video card, you can try using the GPU instead to make sure. The RED preferences, described below, control which decoding method is used.
For RED Rocket decoding, you should have "Shot/Enable Prefetch"
turned on.
RED Rocket does not support decoding Dragon material. (RED Rocket-X
might?)


RED GPU Video Card Decoding


If you have a more recent video card with 1GB or more of video RAM, the
RED GPU reader can provide much faster RED file decoding times, as much as
ten times faster or more!

GPU Requirements: 1 GB of VRAM, 2 GB or more to enable faster acceleration. CUDA 5.5 or higher with GPU compute capability 2.0 or higher, or OpenCL 1.1 or higher. (Specs according to RED.)

Note: The RED GPU decoder does not support decoding HDRx or
ColorVersion1 files. Opening these files with the GPU reader active will
cause the software reader to be successfully used instead, but you will
want to turn "Shot/Read 1f at a time" back off to maintain interactive
performance while reading.

The real test of whether your video card is usable or not is to actually try it;
we highly recommend that approach (yes, that's you, demo users).
When opening RED shots using the GPU, you should have "Shot/Enable Prefetch" turned on to start with.
You can see whether your video card is being used the first time you open
a RED shot after starting SynthEyes. When the shot setup dialog appears, the
status line will provide information on whether CUDA or OpenCL are being used;
if nothing appears, software will be used.
There are a substantial number of preferences affecting RED GPU
Processing, as listed in the RED section of the preferences. Note that these
preferences all take effect the first time you open a RED file after starting
SynthEyes. If you've already opened a RED file and change a RED preference,
you'll need to restart SynthEyes to see the result.
The first and most important preference controls whether Rocket, CUDA,
or OpenCL are used. By default ("Any Type"), they will be tried in that order. You
can select one of the three, or disable them all and force software decoding.
The second preference controls what CUDA or OpenCL device (video
card) will be used, if there are more than one. By default (ALL), they will all be
used, which may be fast, or may run your system out of memory. So you can
also select the first, second, third, or fourth usable devices of that type. (This
control does not affect Rocket, all Rockets are used.)
The remaining preferences are very technical. The tooltips on the
preferences panel provide some description based on RED's explanations. If you
are working with a lot of RED footage and want to experiment with a stopwatch,
you might consider trying some different settings and timing the resulting
performance... but probably not.


RED CPU Software Decoding


When reading RED files using the software decoder, it is a good idea to
turn off the Shot/Enable Prefetch setting before opening the R3D shot. This is because the RED SDK can currently decode only a single frame at a time and
each frame requires a substantial amount of computer time. If the SynthEyes
prefetch engine is active, ordinary screen redraws will be queued behind 8 to 24
separate frame decodes, depending on the number of processors on your
system, which will temporarily freeze the SynthEyes user interface. Hopefully
RED will eliminate the single-frame-decode limitation in future releases of the
SDK.
SynthEyes supports RED HDR files, which contain two video tracks: a
normal track and an extended range track that is stopped down via a short
shutter time. HDR exposure is controlled from the Shot Setup panel (not the
image preprocessor!). Use the "F.-P. Range Adjust" spinner on the shot setup
panel to adjust exposure. For RED footage, this value corresponds to the "Bias"
control in RED HDR processing. At +10, the image is brighter and only the
normal image is used. At -10, the image is darker, and only the extended-range
(stopped down) image is used.
When HDR footage is available, the shorter exposure time of the
extended-range "X" image will reduce motion blur and therefore make it easier to
track than the normal-exposure image. As described in the previous paragraph,
you can set the Shot Setup panel's range adjust control to -10 to use only the
extended-range image. The Image preprocessor's exposure control and the
Tracker Control Panel's gain control can restore visibility of dim features. This
approach should not be used if the extended-range image is stopped down so
far that it has excessive noise in normal portions of the image (ie, other than for
bright lights).
While SynthEyes can read RED R3D files, there is no assurance that
downstream compositing or animation applications can do the same. You may
need to convert the R3D movie file to an image sequence in order to export files
from SynthEyes that downstream applications can read.

Reading DNG/Cinema DNG Sequences


SynthEyes has basic capabilities to read DNG (Digital Negative) files, and
sequences conforming to the Cinema DNG numbering scheme. Normally DNG
files should be read, processed, and appropriate color grading applied in other
applications, and the resulting output fed to SynthEyes.
However, it may sometimes be worthwhile for SynthEyes to read DNG
sequences, for example for on-set verification or to get a head start on tracking
before the graded sequence is available, as the color grade generally won't affect
tracking. The DNG files as imported by SynthEyes should not be written out and used for delivery.


Color Processing
The DNG standard and the Adobe DNG SDK implementing it address
mainly the low-level storage of raw image data and the recovery of raw RGB data
from it. Producing the images you are accustomed to from that data requires
extensive digital image processing, which Adobe does not include in their SDK
(likely because they hold it to be proprietary). So although the DNG standard and
SDK can deliver the bits, it doesn't deliver the image. Different software vendors
will produce different images from the same DNG file.
This type of processing is outside the scope of SynthEyes, so at this time
it offers only some basic postprocessing.
The raw as-read images contain large amounts of chroma noise, due to
undersampling of color (there is only a single R, G, or B sensor site for each
pixel, not all three).
Chroma noise can be very hazardous to normal (color) supervised trackers, because the errors will be very large. Consider using auto-tracking or
monochrome channel selections for supervised trackers.
Alternatively, you can use the luma and chroma blur filtering options on
the Image Preprocessor's Filtering (color) tab to create a small amount of luma
blur and larger amount of chroma blur. Some artifacts can be expected in spots
where the filtering results in illegal color combinations; larger blur values reduce
those artifacts. (Don't use luma+chroma blur for normal degraining, use Blur
instead.)

Warning: if you are using a two-pass lens workflow with re-distortion in SynthEyes, be sure to remove the luma and chroma blur.

Hopefully the DNG standard and SDK will provide standardized processing in the future.
File Sequences
Cinema DNG sequences are a little different from SynthEyes's normal
image sequences, so we list them here:
The filenames shall include a sequencing field that is at the same position
and of the same length for all filenames.
The sequencing field shall contain characters 0 through 9 only. A file whose
filename contains other characters in the location of the sequencing field
(including sign, period, comma, or space) shall not be part of this sequence.
The sequencing field shall be a run of at least four decimal characters in the
filename. When more than one such run exists in the filename, the
sequencing field shall be the run closest to the end of the filename.
Omitted intermediate numbers shall indicate corresponding missing frames.
For example, in the filename bridge_0812.1136.day13.dng, "1136" is the
sequencing field.
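The sequencing-field rules can be expressed as a small pattern search; a sketch of our reading of them in Python:

import re

def cinema_dng_sequence_field(filename):
    # The sequencing field is the run of 4+ digits closest to the end of the filename
    runs = list(re.finditer(r"\d{4,}", filename))
    return runs[-1].group(0) if runs else None

print(cinema_dng_sequence_field("bridge_0812.1136.day13.dng"))   # '1136'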


By contrast, a normal SynthEyes sequence field can be fixed length or steadily increase, is always located at the end of the file name, and any missing frame ends the sequence. If the usual SynthEyes rules are desired for DNG sequences, turn off the Cinema DNG rules preference in the Image Input section, then open the file to produce a normal IFL file. The preference can then be turned back on.

Tip: To facilitate later editorial changes and compatibility with other software, you should always open the first file in a DNG sequence.

DNG files can be opened as 8-bit integer, 16-bit integer, or 32-bit floating
point files. The 32-bit format is the most accurate.
Performance
Reading DNG files and processing them to usable information is much
more time-consuming compared to simple TIFF or PNG files.
To improve performance, keep the Shot/Enable Prefetch menu item on,
and the Shot/Read 1f at a time menu item off. This will enable all the cores in
your processor to work simultaneously to read frames.

Warning: if your images are located on a remote network drive, some networks have been found to not operate correctly when faced with many simultaneous requests, and you'll need to keep the files locally, or turn Shot/Read 1f at a time on.

You can also use SynthEyes's local disk caching capabilities, so that the
DNG files are only read and converted once, then kept locally behind the scenes,
preferably on a fast RAID or SSD disk.

Shot Metadata
SynthEyes can retrieve metadata about shots (more specifically about the
individual frames in the shot) such as lens focal length, focus distance, iris, ISO,
etc, depending on the information available and extracted by the image-format-
specific reader. So it may be available or not depending on the image format and
what camera produced it, and what applications have processed it after that.
Current image/movie formats that SynthEyes can read metadata from:
DNG, EXR, JPEG, RED, and TIFF.

Note: Metadata may be supplied in a tab-delimited ..._meta.txt file containing a header line and a data line for each frame. Empty data fields are pulled from the previous line so that data can easily and compactly be repeated for the entire file. This file overlays existing metadata in the movie or imagery.
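Because the overlay format is so simple, it is easy to generate or inspect with a script. A sketch of a reader for it, including the fill-forward of empty fields, in Python (our own code, not a SynthEyes API):

import csv

def read_meta_txt(path):
    # Tab-delimited: one header line, then one line per frame;
    # empty fields repeat the value from the previous line.
    frames = []
    with open(path, newline="") as f:
        rows = csv.reader(f, delimiter="\t")
        header = next(rows)
        previous = [""] * len(header)
        for row in rows:
            row = [cur if cur != "" else prev for cur, prev in zip(row, previous)]
            frames.append(dict(zip(header, row)))
            previous = row
    return frames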

Warning: Metadata in movies and images is not well-standardized (ie it's quirky and application-specific), and can exist in many forms. SynthEyes may not be able to find any or all kinds of metadata in a particular image or movie file, even if some other program can. If you want to lobby for some specific data from a specific image type to be readable, you'll need to supply sample images and the name of the program writing it.

Warning: The Disk Cache does not preserve the metadata from the
source shot by itself. If metadata is required, either turn off the disk
cache, or use the Metadata/Export All Frames script to save the
metadata in tandem with the BAFF file.

There are a limited number of things that can be done with the metadata:
Take fixed or animated focal length and plate size data from a
zooming shot, and drive that into SynthEyes's seed FOV track via
the "Metadata/Retrieve focal length and plane" script (which is a
Tool script, not an importer!).
Look at it for clues to what happened on set with the
"Metadata/Export Single Frame" exporter.
Export it to a text file with "Metadata/Export All Frames", for
example to use metadata from a RED file that has been converted
to an image sequence.
Access the predefined metadata with Synthia, for quick looks or
setup gimmicks.
Access it via your own Sizzle workflow-integration scripts.
Note that the metadata is specific to each individual frame, ie it is
animated. Generally setup information should be available, and the same, for
each frame, though that may depend on the source.
Here are the predefined metadata items, which are created from other
metadata items. The names shown are literal and must be used exactly (without
the quotes) within a metadata file, Sizzle or python. Missing values will be zero or
the null string, or use MetaPresent to test. Note that the importers will typically
define many more format-specific items; exif_* items are frequently present.
"exposure" The shutter exposure time in seconds.
"fnumber" Lens f-number in f-stops (may be a t-stop for RED, see the
RED-specific metadata)
"focal" The lens focal length, in mm. WARNING: useless unless you
also really know the back plate width! Even still, not very
accurate!
"focus" The lens focus distance, in mm.
"iso" The ISO setting of the camera 100, 200, 400, etc.
"plateHeight" The height of the plate (sensor) corresponding to the actual
image, in mm.
"plateWidth" The width of the plate (sensor) corresponding to the actual
image, in mm.


"shootDate" Shooting date YYYY-MM-DD (local time)


"shootTime" Shooting time HH:MM:SS (local time)
"shutter" Exposure as a shutter angle
"timecode" Timecode HH:MM:SS:FF
You can easily access any additional metadata items that may appear in future camera firmware. If you use the Metadata/Export Single Frame exporter, you'll see the tag name that you can use in Sizzle or Synthia. For example, you can access RED's 8-character reel ID from Sizzle with shot.metastr.red_reel_id_8_character. To teach Synthia to retrieve it, use

define an object attribute "reel id" values are a string accessed readonly by `shot.metastr.red_reel_id_8_character`.

Then you can ask for camera 1's reel id, for example. Use metanum and "a number" for numbers.
Retrieving Focal Length from Metadata
The (Scripts/) "Metadata/Retrieve focal length and plane" script looks at
the available metadata and drives it into the seed field-of-view track. This script is
the primary current use of metadata. The script handles both fixed and zooming
shots. Different controls will appear in the script depending on what metadata is
available.
It is crucial to have an accurate back plate width number as well, as
otherwise the focal length means nothing. Sometimes the back plate width
information is available from the metadata; if not you must acquire it from the
camera specifications or from other calibration shots. In at least some cases,
more metadata may be available from still images acquired as "raw" and then
saved as other image types.
In the event that no better information is available, the script may use the
exif_focal_length_in_35mm tag, which is the equivalent focal length, as if the
camera was a 35-mm still camera. This tag is dependent on the camera
manufacturer doing the math right. Unfortunately a technical issue in the EXIF
data format requires that the number be a whole (integer) number, limiting its
accuracy.
The script writes to the "seed" track, ie the initial suggested value for the
lens field of view. Check the "Set lens mode to Known" checkbox (or set the
mode yourself) to have the lens field of view used exactly for the solve.
Otherwise it may be used as a starting point, or just for comparison.
If the scene has already been solved, you'll be asked if you want to clear
the solved camera FOV track. If it is cleared, then the newly-created metadata-
based FOV is visible directly and immediately. If not, it won't be visible unless the
scene is re-solved, the solution is cleared, "View/Show seed path" is selected, or
you look at it directly in the graph editor.


Writing Metadata
At the current time, only the TIFF image writer can write metadata into the
images it produces, ie via Save Sequence in the image preprocessor. It writes
the EXIF data produced by the DNG, JPEG, or TIFF image readers. Note that
the data is not currently modified to account for the potential effect of changes
made by the image preprocessor, such as resampling, padding, etc.
For bulk storage of metadata, use the Metadata/Export all frames script.

Separate Alpha Channels


SynthEyes will read the alpha channel from many common image file
formats. The alpha channel can be used for foreground/background object
separation during auto-tracking, planar tracking, or simply passed through during
stabilization or undistortion for use in downstream effects.
SynthEyes can also read alpha-channel data from separate alpha files, if
they are named appropriately, as controlled by a preference in the Image Input
section. (If the file already contains an alpha channel, there is no search for a
separate alpha. This processing is performed only for IFL or DNG image
sequences, not "movie" files.) Be sure to check the "Keep Alpha" checkbox on
the shot settings panel if you want the alpha data read.
With the default setting of "_alpha.png", an RGB file name of "myseq042.bmp" is converted to an alpha name of "myseq_alpha042.png". If that alpha file exists, it is read and used as the alpha channel. This process occurs for
each frame in the sequence, though there should be a separate alpha file, or not,
identically for each frame.
If there is a dot before the frame number, it is moved as well, ie "shot37.0100.tif" becomes "shot37_alpha.0100.tif". (Though filenames with two
dots are common, we recommend against it, since it can cause problems when
operating-system dialogs attempt to hide the extension.)
If the separate-alpha preference does not contain an extension, then the
extension of the original RGB file is used. For example, with a preference of
"Alpha" then "scene2B_0029.bmp" becomes "scene2B_Alpha0029.bmp" and
"lastshot1023.jpg" becomes "lastshotAlpha1023.jpg".
Reusing the RGB file's format for the alpha is not a great idea for two reasons, however. First, alpha channels have a lot of redundancy, so you want to make sure that the format does a good job compressing the alpha channel, without introducing artifacts. Run-length or ZIP
compression are good choices (not JPEG!).
Second, SynthEyes is intended for processing RGB imagery, so support
for reading gray-scale formats is very limited (currently PNG or TIFF). You can
write an alpha channel as RGB if really necessary; due to the compression it will
probably not be much larger than it would be if it was gray-scale. In this case,
SynthEyes will use the green channel for alpha.
But for most usage, PNG is the recommended alpha-file format.
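The naming rules above can be summarized in a short helper. This is a sketch of our reading of the rules, not the exact SynthEyes implementation:

import os, re

def alpha_name(rgb_name, pref="_alpha.png"):
    stem, ext = os.path.splitext(rgb_name)
    insert, alpha_ext = os.path.splitext(pref)
    if not alpha_ext:
        alpha_ext = ext                       # no extension in the preference: keep the RGB one
    # Split off the trailing frame number, moving a preceding dot with it
    m = re.search(r"\.?\d+$", stem)
    base, frame = (stem[:m.start()], m.group(0)) if m else (stem, "")
    return base + insert + frame + alpha_ext

print(alpha_name("myseq042.bmp"))                     # myseq_alpha042.png
print(alpha_name("shot37.0100.tif", pref="_alpha"))   # shot37_alpha.0100.tif
print(alpha_name("scene2B_0029.bmp", pref="Alpha"))   # scene2B_Alpha0029.bmp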


Rolling Shutter
Rolling shutter is an imaging problem produced in many popular "CMOS"
cameras. This includes company-specific proprietary variations such as 3MOS, HyperMOS, etc, as well as other cameras such as RED that have other sensor
names. It occurs just as much in expensive cameras as cheap ones. Only CCD
cameras do not suffer from this problem.
The rolling shutter problem arises because there is no consistent shutter
period for the entire image. The top lines of the image are "taken" over a physically different stretch of time than the bottom lines. The lines at the top are
taken earlier, the lines at the bottom much later.
Consider a 30 fps 1080p camera. The top line is read out then reset; it
begins accumulating light and charge for the next 1/30th of a second. As soon as
the first line has been read out and reset, the second line is read out and reset,
and it then begins accumulating light for 1/30th second. That continues for every
line in the chip. Generally, for a 30 fps camera, it will take about 1/30th of a
second to read out, one after another, all 1080 lines.
That means that by the time the bottom, 1080th, line is read out, almost a
full 1/30th of a second has gone by. It will be accumulating light for a period of
1/30th of a second with virtually no overlap with the 1/30th of a second that the
top line is integrated over! The top line of the next frame will be read out and
begin integrating in just an instant. So the last line is closer in time to the next frame than to the first line!
This wreaks havoc on the geometry of the image. Depending on how the
camera is moving during the 1/30th of a second, and what it is looking at, an
alarming number of geometric distortions will be introduced.
To the extent that the camera is panning left or right, the image will be
skewed horizontally: a vertical pole will become slanted. To the extent that the
camera is tilting vertically, images will be squashed or stretched vertically. If the
camera is pushing in or pulling back, keystone distortions will result.
If the camera is vibrating, the image turns completely to jello. (We rather
famously lost an expensive helicopter shoot to this effect.)
If there are cars driving by, one distortion can happen to the background,
and a completely different distortion to the car!
In short, rolling shutter, and CMOS cameras, are pretty much a disaster
for visual effects. If at all possible, we recommend using CCD cameras, but these
days that can be difficult.
Unfortunately, there's no way to eliminate this problem. You can reduce it,
work around it, but not eliminate it. Contrary to the claim of some, a short shutter
time does not reduce rolling shutter. If you think about the explanation above,
you'll see why. A short shutter time that reduces blur will make it harder to hide
mistakes, as well.


Improperly shot footage with CMOS cameras will be objectionable even to lay human observers, because it does not correspond to our perception of the world, except in cartoons!
For professional shooters, the usual tactic for CMOS is to make sure that
the camera motion is slow in all directions, so that there is comparatively little
motion from frame to frame (wasn't it supposed to be "moving pictures"?). And
CMOS cameras should always be shot from hard mounts such as dollies,
cranes, etc.

Important: Always use a CCD camera for aggressive hand-held shooting such as combat POV footage. See the paragraph about jello above if you doubt that.

Moving a CMOS camera slowly does not eliminate the rolling-shutter problem. It may reduce the geometric distortion sufficiently that you cannot see
it. However, in match-moving we typically match shots down to the sub-pixel
level, so something you can't see may still be a ten pixel error! That's bad.
There are several possible approaches to deal with that:
1. try to shoot to keep the rolling-shutter low, and try to cheat the
tracks and inserts to allow for the inevitable errors,
2. use a third-party software tool to try to correct the footage, or
3. compensate for the rolling shutter in the solve, producing a solve
for an 'ideal' camera.
The first choice can work out for small amounts of distortion and modest
tracking requirements. The solver will adjust the solve to best match the distorted
data, which winds up allowing inserts to appear at a correspondingly distorted
location. Long shots that loop back on themselves and other shots with high self-
consistency requirements will be substantial problems.
The second choice, a third-party tool, can be OK for simple images too.
But, keep in mind that rolling-shutter causes an unrecoverable loss of data in the
image. Any repair can be only a best guess at the missing data, and for that
reason you will commonly see artifacts around edges, and residual distortion.
The third choice, producing a solve for an ideal camera, is the approach
we make available in SynthEyes using the rolling-shutter controls on the Shot
Setup panel, as described above.
When you turn on rolling shutter compensation, the solver will correct the
tracker data (not the images) based on the motion of each tracker, so that the
tracker's position is the position it had at the vertical center line of the image.
To do that, you must turn on rolling shutter compensation and supply a
single number, which is the portion of the frame time required to read out the
image data. An online tutorial shows how to measure rolling shutter.


For example, a 30fps camera with a 27 msec readout has a rolling shutter
fraction of 0.81. Camera manufacturers are moving to reduce the readout time to
reduce the rolling shutter problem, so a 5 msec readout at 30 fps would be a
rolling shutter factor of 0.15 (quite modest).
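The fraction is simply the readout time divided by the frame time; a quick check of the two examples above:

def rolling_shutter_fraction(readout_ms, fps):
    return readout_ms / (1000.0 / fps)   # negate for the mirrored eye of a mirror stereo rig

print(round(rolling_shutter_fraction(27, 30), 2))   # 0.81
print(round(rolling_shutter_fraction(5, 30), 2))    # 0.15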

Gory detail: on footage shot on mirror stereo rigs, the footage from
the camera that is mirrored vertically will have rolling shutter in the
reverse direction, so the rolling shutter value should be set to the
negative of the usual value.

Eliminating the rolling shutter distortion in this fashion can provide very
substantial improvements in the quality of the solve.
In order to composite CGI generated for the ideal camera with the original rolling-shutter-based footage, you must render the CGI with a rolling shutter also. At present, that capability is not widely available, but we expect it to
be more widely available in the future. It is essentially a modified form of motion
blur.
At present, you can likely simulate the effect by rendering at a multiple of
the frame rate, then combining the subframes in varying amounts depending on
the vertical position in the image.
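As a rough illustration of that idea (and only that; this is not a production-quality renderer, and it glosses over exactly where the readout is referenced relative to the frame time), the following NumPy sketch assembles a rolling-shutter frame from N ideal subframes by blending per scanline:

import numpy as np

def simulate_rolling_shutter(subframes, rs_fraction):
    # subframes: (N, height, width, channels) ideal renders spread evenly over the frame time
    n, height = subframes.shape[0], subframes.shape[1]
    out = np.zeros(subframes.shape[1:], dtype=np.float64)
    denom = max(height - 1, 1)
    for y in range(height):
        t = rs_fraction * (y / denom) * (n - 1)   # readout "time" of scanline y, in subframe units
        lo = int(t)
        hi = min(lo + 1, n - 1)
        w = t - lo
        out[y] = (1 - w) * subframes[lo, y] + w * subframes[hi, y]
    return out.astype(subframes.dtype)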
Rolling shutter compensation also makes it more difficult to assess the
quality of tracking within SynthEyes, as the predicted 3-D position of a solved
tracker is based on the ideal camera, with no rolling shutter. So there will be
apparent errors where there are none. We hope to be able to provide additional
tools to address that in the future.

Minimizing Grain
The grain in film images or speckle noise in digital cameras can perturb
tracking somewhat. Use the Blur setting on the Filtering tab of the image
preparation panel to slightly filter the image, minimizing the grain. This tactic can
be effective for compression artifacts as well. (Use the blur setting, rather than
matching blurs on luma and chroma.)
There is also a Noise Reduce spinner, which controls a somewhat slower
algorithm for noise reduction with less actual blur. It is intended to help tracking,
rather than for producing ultra-clean final images. It avoids some operations in
typical noise reduction algorithms that can shift the position of features in the
image.
SynthEyes can stabilize the images, re-size them, or correct for lens
distortion. As it does that, it interpolates between the existing pixels. There are
several interpolation modes available. You can produce a sharper image when
you are re-sampling using the more advanced modes, but you increase the grain
as you do so.


Handling Strobes and Explosion Shots


Quickly varying lighting can cause problems for tracking, especially
supervised tracking. You can reduce the lighting variations by hi-pass filtering the
image with the Hi-Pass setting. The image will turn into a fairly monotonous gray
version (consider using only the Luma channel to save memory). The Hi-Pass setting is also a Gaussian filter size, but it is generally much larger than the 2-pixel blur used to compensate for grain, say around 10 pixels. The larger the value, the
more the original image will show through, which is not necessarily the
objective, and the longer it will take to process.
You can increase the hi-pass image contrast using the Levels settings, for
example low=0.25, high=0.75.
You can use a small blur for grain/compression in conjunction with the
high-pass filtering. It will also reduce any slight banding if you have used the
Levels to expand the range.

Memory Reduction
It is much faster to track, and check tracking, when the shot is entirely in
the PC's RAM, as fetching each image from disk, and possibly
decompressing it, takes an appreciable amount of time. This is especially true for
film-resolution images, which take up more of the RAM, and take longer to load
from disk.
SynthEyes offers several ways to control RAM consumption, ranging from
blunt to scalpel-sharp.
The most important control is the Max RAM Cache GB preference in the
Image Input section. It controls how many frames of the shot are stored in your
computer's RAM. This can be much lower than the length of the shot. If you are
auto-tracking, keep at least two frames per processor (four if your processors are
hyper-threaded, ie two per usable thread). If you are running out of memory (on a
32-bit license), or seeing swapping on a 64-bit license, reducing the Max RAM Cache GB should be your first move, and can generally be your last.
A reduced RAM cache will mean that your computer will have to go out to
fetch images from disk more often. That can be painful for some movie codecs. If
you want fast interactive performance, but are willing to give up some other
things in order to fit your shot into RAM, read on.
If your source images have 16 bit data, you can elect to reduce them to 8
bit for storage, by unchecking the 16-bit checkbox and reducing memory by a
factor of two. Of course, this doesn't help if the image is already 8 bit.
If you have a 2K or 4K resolution film image, you might be able to track at
a lower resolution. The DeRez control allows you to select 1/2 or 1/4 image resolution. If you reduce resolution by 1/2, the storage required drops to 1/4 the previous level, and a reduction by 1/4 reduces the storage to 1/16th the prior amount, since the resolution reduction affects both horizontal and vertical


directions. Note that by reducing the incoming image resolution, your tracks will
have a higher noise level which may be unacceptable; this is your decision.
If you can track using only a single channel, such as R, G, or luma, you
obtain an easy factor of 3 reduction in storage required.
The most precise storage reduction tool is the Region Of Interest (ROI),
which preserves only a moving portion of the image that you specify, and makes
the rest black. The black portion does not require any RAM storage, so if the ROI
is only 1/8th the width and height of the image, a reduction by 1/64th of storage is
obtained.
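These reductions multiply together, so it is easy to estimate per-frame RAM consumption. A sketch (the 4096x3112 frame size is just an example of a 4K film scan):

def frame_ram_bytes(width, height, bytes_per_channel=1, channels=3,
                    derez=1.0, roi_fraction=1.0):
    return int(int(width * derez) * int(height * derez) * channels * bytes_per_channel * roi_fraction)

full  = frame_ram_bytes(4096, 3112, bytes_per_channel=2)      # 16-bit RGB 4K frame
small = frame_ram_bytes(4096, 3112, channels=1, derez=0.5)    # 8-bit luma at half resolution
print(full / 2**20, small / 2**20)        # ~73 MB vs ~3 MB per frame
print(int(8 * 2**30 / full))              # ~112 such full frames fit in an 8 GB RAM cache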

Tip: If you need ROI these days, likely you should just get some more
memory! It was originally intended for processing film images on 32-bit
machines. This feature is subject to future removal!

The region of interest can be used with object-type shots, such as tracking
a face or head, a chestplate, a car driving by, etc, where the interesting part is
comparatively small. The ROI can also be used in supervised tracking, where the
ROI can be set up for a region of trackers; once that region is tracked, a different
ROI can be configured for the next group. A time savings can be achieved even
though the next group will require an image sequence reload. (See the section
on presets, below, to be able to save such configurations.)
The ROI is controlled by dragging it with the left mouse button in the Image Preprocessing dialog's viewport. Dragging the size-control box at the lower right of the ROI will change the ROI size.
The next section describes animating the preprocessing level and ROI.
It can also be helpful to adjust the ROI controls when doing supervised
tracking of shots that contain a non-image border as an artifact of tracking. This
extra border can defeat the mechanism that turns off supervised trackers when
they reach the edge of the frame, because they run out of image to track before
reaching the actual edge. Once the ROI has been decreased to exclude the
image border, the trackers will shut off when they go outside the usable image.
As with the image adjustments, changing the memory controls does not
require any re-tracking, since the image geometry does not change.

Animated Shot Setup


The Level, Saturation/Hue, lens Field of View, Distortion/Scale, stabilizer
adjustment, and Region of Interest controls may be animated, changing values
over the course of the shot.
Normally, when you alter the Level or ROI controls, a key at the first frame
of the shot is changed, setting a fixed value over the entire shot.

To animate the controls, turn on the Make Keys checkbox at lower right of the image prep dialog. Changes to the animated controls will now create


keys at the current frame, causing the spinners to light up with a red outline on
keyframes. You can delete a keyframe by right-clicking a spinner.
If you turn off Make Keys after creating multiple keys, subsequent
changes will affect only the keyframe at the start of the shot (frame zero), and not
subsequent keys, which will rarely be useful.
You can navigate within the shot using the next frame and previous frame
buttons, the next/previous key buttons, or the rewind and to-end buttons.

Temporarily Disabling Preprocessing


Especially when animating a ROI, it can be convenient to temporarily turn
off most of the image preprocessor, to help you find what you are looking for. The
enable button (a stoplight) at the lower right will do this.
The color modifications, level adjustment, blur, down-sampling, channel
selection, and ROI are all disabled by the enable button. The padding and lens distortion are not affected, since they change the image geometry; you do not want that to change, or you could not then place the ROI in the correct location.

Disabling Prefetch
SynthEyes reads your images into RAM using a sophisticated
multithreaded prefetch engine, which runs autonomously much of the time when
nothing else is going on. If you have a smaller machine or are maybe trying to
run some renders in the background, you can turn off the Shot/Enable prefetch
setting on the main menu.
Get Going! You don't have to wait for prefetch to finish after you open a shot. It doesn't need courtesy. You can plough ahead with what you want to do;
the prefetcher is designed to work quietly in the background.

Correcting Lens Distortion


Most animation software assumes that the camera is perfect, with no lens
distortion, and the camera's optic axis falls exactly in the center of the image. Of
course, the real world is not always so accommodating.
SynthEyes offers several methods to determine the lens distortion, based
on calibration images, by straightening curved lines that are straight in the real
world, or as a result of the solving process, if enough reliable trackers are
available.
SynthEyes accommodates the distortion, but your animation package
probably will not. As a consequence, a particular workflow is required that we will
introduce shortly and in the section on Lens Distortion.
The image preprocessing system lets distortion be removed, though after
doing so, any tracking must be repeated or corrected, making the manual
distortion determination more useful for this purpose.


The image preprocessing dialog offers spinners to set the distortion to match that determined. A Scale spinner allows the image to be scaled up or
down a bit as needed to compensate for the effect of the distortion removal.
You can animate the distortion coefficients and scale to correct for varying
distortion during zoom sequences.

Image Centering
The camera's optic axis is the point about which the image expands or
contracts as objects move closer or further away. Lens distortion is also centered
about this point. By convention of SynthEyes and most animation and
compositing software, this point must fall at the exact center of the image.
Usually, the exact optic center location in the image does not greatly affect
the 3-D solving results, and for this reason, the optic center location is notoriously
difficult to determine from tracking data without a laboratory-grade camera and
lens calibration. Assuming that the optic axis falls in the center is good enough.
There are two primary exceptions: when an image has been cropped off-
center, or when the shot contains a lot of camera roll. If the camera rolls a lot, it
would be wise to make sure the optic axis is centered.
Images can be cropped off-center during the first stages of the editorial
process (when a 4:3 image is cropped to a usable 16:9 window), or if a film
camera is used that places the optic axis allowing for a sound channel, and there
is none, or vice versa (none is allowed for, but there is one).
Image stabilization or pan/scan-type operations can also destroy image
centering, which is why SynthEyes provides the tools to perform them itself, so
they can be done correctly.
Of course, shots will arrive that have been seriously cropped already. For
this reason, the image preprocessing stage allows images to be padded up to
their original size, putting the optic axis back at the correct location. Note that the
shot must be padded to correct it, rather than cropping the image even more! It
will be important to identify the degree of earlier cropping, to enable it to be
corrected.
The Fix Cropping (Pad) controls have two sets of three spinners, three
each for horizontal and for vertical. Both directions operate the same way.
Suppose you have a film scan such that the original image, with the optic
axis centered, was 33 mm wide, but the left 3 mm were a sound track that has
been cropped. You would enter 3 mm into the Left Crop spinner, 30 mm into the
Width Used spinner, and 0 mm into the Right Crop spinner. The image will be
padded back up to compensate for the imagery lost during cropping.
The Width Used spinner is actually only a calculation convenience; if you
later reentered the image preprocessing dialog you would see that the Left Crop
was 0.1 and the Width Used 1.0, ie that 10% of the final width was cropped from
the left.


The Fix Cropping (Pad) controls change the image aspect ratio (see
below) and image resolution values on the Open Shot dialog, since the image
now includes the padded regions. The padding region will not use extra RAM,
however.
It is often simpler to fix the image centering in a way that does not change
the image aspect ratio, so that you can stay with the official original aspect ratio
throughout your workflow. For example, if the original image is 16:9 HD, it is
easiest to stay with 16:9 throughout, rather than having the ratio change to 1.927
due to a particular camera's decentering. The Maintain original aspect
checkbox will permit you to update the image center coordinates, automatically
creating new padding values that keep the aspect ratio the same.

Image Preparation Preset Manager


It can be helpful to have several different sets of image preprocessor
settings, tailored to different regions of the image, or to different moving objects,
or different sections of the overall shot. A preset manager permits this; it appears
as a drop-down list at the center-bottom of the image preparation dialog.
You can create a preset by selecting the New Preset item from the list;
you will be prompted for the name (which you can later change via Rename).
The new preset is created with the current settings, your new preset name
appears and is selected in the preset manager listbox, and any changes you
make to the panel continue to update your new preset. (This means that when
you are creating several presets in a row, create each preset before modifying
the controls for that preset.)
Once you have created several presets, you can switch among them
using the preset manager list. All changes in the image preprocessor controls
update the preset active at that time.
If you want to play for a bit without affecting any of your existing presets,
switch to the Preset Mgr. setting, which acts as a catchall (it disconnects you
from all presets). If you then decide you want to keep the settings, create a new
preset.
To reset the image preprocessor controls (and any active preset) back to
the initial default conditions, which do nothing to the incoming image, select the
Reset item from the preset manager. When you are creating several presets, this
can be handy, allowing you to start a new preset from scratch if that is quicker.
Finally, you can delete the current preset by selecting the Delete item.

Rendering Sequences for Later Compositing


The tracking results provided by SynthEyes will not produce a match
within your animation or compositing package unless that package also uses the
same padded, stabilized, resampled, and undistorted footage that SynthEyes
tracked. This is also true of SynthEyes's perspective window.
Tip: Save Sequence can also include a simple render of the meshes in
your scene, for previewing with still-distorted footage. It is also useful
for rendering quick inserts in 360 VR footage, since the perspective
view doesn't use 360 VR views. Normally, use the perspective view for
better renders with motion blur and control over antialiasing.

Use the Save Sequence button on the Image Preparation dialog's Output
tab to save the processed sequence. If the source material is 16 bit, you can
save the results as 16 bit or 8 bit. You can also elect whether or not to save an
alpha channel, if present. If the source has an alpha channel, but you are not
given the option to save it, open the Edit Shot dialog and turn on the Keep Alpha
checkbox.

Important: If you have a 16-bit/channel or floating point image source,
and wish to output that 16-bit or floating point sequence, be sure that
the Process Depth and Store Depth settings on the Shot Setup panel
are set to that depth! By default, they are set to 8 bit for speedy
tracking, which will make your imagery noisy!

Output file formats include ASF, AVI, BMP, Cineon, DPX, JPEG,
OpenEXR, PNG, Quicktime, SGI, Targa, TIFF, and WMV. Details of supported
number of bits per channel, compression formats, and alpha availability vary with
format and platform. Those settings will be available for later reuse if the same
file extension is selected.
If you have stabilized the footage, you will want to use this stabilized
footage subsequently.
However, if you have only removed distortion, you have an additional
option that maximizes image quality and minimizes the amount of changes made
to the original footage: you can take your rendered effects and run them back
through the image preprocessor (or maybe your compositing package) to re-
introduce the distortion and cropping specified in the image preprocessing panel,
using the Apply It checkbox.
This redistorted footage can then be composited with the original footage,
preserving the match.
The complexity of this workflow is an excellent argument for using high-
quality lenses and avoiding excessively wide fields of view (short focal lengths).
You can also use the Save Sequence dialog to render an alpha mask
version of the roto-spline information and/or green-screen keys.
Automatic Tracking
Overall process
The automatic tracking process can be launched from the Summary panel
(AUTO button), by the batch file processor, or controlled manually. By breaking
the overall process down into sub-steps, you can partially re-run it with different
settings, saving time. Though normally you can launch the entire process with
one click, the following write-up breaks it down for your education, and
sometimes you will want to run or re-run the steps yourself.
The full automatic tracking process has several stages:
1. finding potential trackable points, called blips
2. linking blips together to form paths
3. selecting some blip paths to convert to trackers
4. optionally fine-tuning the trackers, which re-analyzes the trackers using
supervised tracking techniques to improve their accuracy,
5. running the solver to find the 3-D coordinates of the trackers, as well as the
camera path and field of view,
6. optionally, automatically running a tracker cleanup and refining (re-solving)
the scene again with the (hopefully!) improved tracking data,
7. optionally, automatically, placing the scene in the 3-D environment by
assigning a coordinate system.
After the automatic tracking process runs, you should clean up trackers
and create a coordinate system manually, if it hasn't been done automatically,
and then you'll export to your 3-D compositing or animation package, but those
topics are discussed separately and are the same for automatic and supervised
tracking.

Note: The tracker cleanup stage above is labeled for experts because
if the initial solve doesn't go well (for example, you have unmasked
actors walking around), the tracker cleanup will make the situation
worse, not better. If you're not certain the track and solve will go well,
you're better off examining them yourself and making any necessary
changes before running the cleanup. If you expect it to go well and it
doesn't, you can undo the different stages of AUTO individually.

The overall 2-D automatic tracking process is controlled from the Features
Panel.
Typically, blips are computed for the entire shot length with the Blips all
frames button. They can be (re)computed for a particular range by adjusting the
playback range, and computing blips over just that range. Or, the blips may be
computed for a single frame, to see what blips result before tracking all the
frames, or when changing blip parameters.
As the blips are calculated, they are linked to form paths from frame to
frame to frame.
Finally, complete automatic tracking by clicking Peel All, which will select
the best blip paths and create trackers for them. Only the blip paths of these
trackers will be used for the final camera/object solution.
You can tweak the automatic tracking process using the controls on the
Advanced Features panel, a floating dialog launched from the Feature control
panel.
You can delete bad automatically-generated trackers the same as you
would a supervised tracker; convert specific blip paths to trackers; or add
additional supervised trackers. See Combining Automatic and Supervised
Tracking for more information on this subject.
If you wish to completely redo the automated tracking process, first click
the Delete Leaden button to remove all automatic trackers (ie with lead-gray
tooltip backgrounds), and the Clear all blips button. After changes to the Roto
splines, you may also need to click Link Frames; in most cases you will be
prompted for that.
Note that the calculated blips can require tens of megabytes of disk space
to store. After blips have been calculated and converted to trackers, you may
wish to clear them to minimize storage space. The Clean Up Trackers dialog
encourages this. (There is also a preferences option to compress SynthEyes
scene files, though this takes some additional time when opening or saving files.)

Spot vs. Corner Detection


The SynthEyes auto-tracker normally looks for bright or dark spots in the
images, which are tracked (linked) from one image to the next, then eventually
made ("peeled") into automatic trackers.
Spots are very prevalent in a wide variety of indoor and outdoor scenes,
and have the advantage of being largely unaffected by a wide variety of imaging
conditions, including image rotation, viewing at an angle, defocus, motion blur,
compression artifacts, and noise.
In the following situations, it is beneficial to detect corners instead:
Clean sparse man-made interior sets may have only linear features
and corners, instead of spots.
Reconstructing building meshes, where we want trackers on the
exterior corners of the building (spots are on the walls)
The corner auto-tracker is turned on from the Summary panel. It does add
considerable processing time, so we recommend keeping it off unless suitable
corners are present and desired.
The SynthEyes corner detector is specifically designed to produce corner
features "at a distance." This is because the pixels that are physically located at
a corner location are frequently not particularly reliable, being subject to various
blurs and noise that have the effect of minimizing a corner, or shifting it from its
true location.
Consequently the SynthEyes corner detector looks for suitable line
segments at a distance from the corner, and intersects them to produce the
location of the corner. So the accuracy derives from many smooth pixels further
from the corner, rather than the few unreliable pixels located at it. Even still,
corner features are inherently less accurate than spot features, which are based
on a whole region of pixels. Since a tracker's 3-D position is based on many
frames of data, generally an excellent position can still be obtained.
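To illustrate the idea of intersecting edges fitted away from the corner, here is a toy Python/NumPy sketch (illustration only, not the actual SynthEyes detector; it assumes the two edge lines have already been fitted from nearby smooth pixels).

    # Toy illustration of locating a corner by intersecting two edge lines
    # fitted away from the corner itself (Python/NumPy; not the actual
    # SynthEyes detector, and the line-fitting step is assumed done already).
    import numpy as np

    def intersect_lines(p1, d1, p2, d2):
        """Intersect two 2-D lines given as point + direction vectors.

        Solves p1 + t1*d1 == p2 + t2*d2 and returns the intersection point.
        """
        A = np.column_stack((d1, -d2))          # 2x2 linear system
        t = np.linalg.solve(A, p2 - p1)
        return p1 + t[0] * d1

    # Two edges fitted from smooth pixels several pixels away from the corner:
    edge_a = (np.array([10.0, 20.0]), np.array([1.0, 0.0]))   # horizontal edge
    edge_b = (np.array([30.0, 5.0]),  np.array([0.0, 1.0]))   # vertical edge
    print(intersect_lines(*edge_a, *edge_b))    # -> [30. 20.], the corner location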
There will typically be fewer corners than spots located by an autotrack.
For modeling, you may want to increase the number of corners. The Add Many
Trackers panel can be set to selectively add only corner features. Or, the
Features Control panel can be set to make only prospective trails of corner blips
visible and eligible to be promoted to trackers (at your instruction).
You can adjust the corner detector's parameters from the Advanced dialog
on the Features panel. To better understand the process, you can use the edge
or corner view types, which will produce rather colorful displays in the main
camera view.
The tooltips contain simple descriptions of the parameters. If you are using
very high or low resolution images, you might consider changing the various
pixel-based numbers such as edge width, minimum length, and intersect
distance. In low-contrast situations you might experiment with Edge Threshold
and Contrast. Be sure to experiment first, before doing an autotrack, so that you
don't have to worry about trackers you've already worked on.

Motion Profiles
SynthEyes offers a motion profile setting that allows a trade-off between
processing speed and the range of image motions (per frame) that can be
accommodated. If the image is changing little per frame, there is no point
searching all over the image for each feature. Additionally, a larger search area
increases the potential for a false match to a similar portion of the image.
The motion profile may be set from the summary or feature panels.
Presently, three primary settings are available:
Normal Motion. A wider search, taking longer.
Crash Pan. Use for rapidly panning shots, such as tripod shots. Not only a
broader search, but allows for shorter-lived trackers that spin rapidly across
the image.
Low Detail. Use for green-screen shots where much of the image has very
little trackable detail.
There are several other modes from earlier SynthEyes versions which may be
useful on occasion.

Controlling the Trackable Region


When you run the automatic tracker, it will assign all the trackers it finds to
the camera track. Sometimes there will be unusable areas, such as where an
actor is moving around, or where trackers follow a moving object that is also
being tracked.
SynthEyes lets you control this with animated rotoscoping splines, or an
alpha channel. For more information, see the section Rotoscoping with animated
splines and the alpha channel.

Green-Screen Shots
Although SynthEyes is perfectly capable of tracking shots with no artificial
tracking marks, you may need to track blue- or green-screen shots, where the
monochromatic background must be replaced with a virtual set. The plain
background is often so clean that it has no trackable features at all. To prevent
that, green-screen shots requiring 3-D tracking must be shot with tracking marks
added onto the screen. Often, such marks take the form of an X or + made of
electrical or gaffing tape. However, a dot or small square is actually more useful
to SynthEyes over a wide range of angles. With a little searching, you can often
locate tape that is a somewhat different hue or brightness than the background:
just different enough to be trackable, but sufficiently similar that it does not
interfere with keying the background.
You can tell SynthEyes to look for trackers only within the green- or blue-
screen region (or any other color, for that matter). By doing this, you will avoid
having to tell SynthEyes specifically how to avoid tracking the actors.
You can launch the green-screen control dialog from the Summary control
panel, using the Green Screen button.
When this dialog is active, the main camera view will show all keyed (trackable)
green-screen areas, with the selected areas set to the inverse of the key color,
making them easy to see. [You can also see this view from the Feature panel's
Advanced Feature Control dialog by selecting B/G Screen as the Camera View
Type.]
Upon opening this dialog, SynthEyes will analyze the current image to
detect the most-common hue. You may want to scrub through the shot for a
frame with a lot of color before opening the dialog. Or, use the Scrub Frame
control at lower right, and hit the Auto button (next to the Average Key Color
swatch) as needed.
After the Hue is set, you may need to adjust the Brightness and
Chrominance so that the entire keyed region is covered. Scrub through the shot
a little to verify the settings will be satisfactory for the entire shot.
The radius and coverage values should usually be satisfactory. The radius
reflects the minimum distance from a feature to the edge of the green-screen (or
actor), in pixels. The coverage is the amount of the area within the radius that
must match the keyed color. If you are trying to match solid non-key disks that go
as close as possible to an actor, you might want to reduce the radius and
coverage, for example.
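Conceptually, the radius and coverage act like this small Python/NumPy sketch (an illustration only; the real keyer evaluates hue, brightness, and chrominance directly rather than a precomputed mask, and is considerably more refined).

    # Conceptual sketch of the radius/coverage test (Python/NumPy illustration;
    # the real keyer evaluates hue, brightness, and chrominance directly).
    import numpy as np

    def fraction_keyed_nearby(key_mask, x, y, radius):
        """Fraction of pixels within `radius` of (x, y) that match the key color.

        `key_mask` is a boolean image, True where a pixel keys as screen color.
        A candidate feature passes when this fraction meets the coverage value,
        i.e. the feature sits well inside the keyed screen area.
        """
        h, w = key_mask.shape
        ys, xs = np.ogrid[:h, :w]
        nearby = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
        return key_mask[nearby].mean()

    # Example: screen on the left, actor on the right; radius 8, coverage 0.9.
    mask = np.ones((100, 100), dtype=bool)
    mask[:, 60:] = False                        # actor occupies the right side
    print(fraction_keyed_nearby(mask, x=40, y=50, radius=8) >= 0.9)   # True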
You should use the Low Detail motion hint setting at the top of the
Summary panel when tracking green-screen shots (it normally reads Normal).
SynthEyes's normal analysis looks for the motion of details in the imagery, but if
most of the image is a featureless screen, that process can break down.
With Low Detail selected, SynthEyes uses an alternate approach. SynthEyes will
configure the motion setting automatically the first time you open the
greenscreen panel, as it turns on the green-screen enable. See also a technique
for altering the auto-tracker parameters to help green-screen shots.
The green-screen settings will be applied when the auto-track runs. Note
that it is undesirable to have all of the trackers on a distant flat back wall. You
need to have some trackers out in front to develop perspective. You might
achieve this with tracking marks on the floor or (stationary) props, or by rigidly
hanging trackable items from the ceiling or light stands. In these cases, you will
want to use supervised tracking for these additional non-keyed trackers.
Since the trackers default to a green color, if you are handling actual
green-screen shots (rather than blue), you will probably want to change the
tracker default color, or change the color of the trackers manually. See Keeping
Track of the Trackers for more information.
After green-screen tracking, you will often have several individual trackers
for a given tracking mark, due to frequent occlusions by the actors. As well as
being inconvenient, it does not give SynthEyes as much information as it would if
they were combined. You can use the Coalesce Nearby Trackers dialog to join
them together; be sure to see the Overall Strategy subsection.
You can work in the perspective view with a matted version of the green
screen shot, ie with the keyed portions of the shot transparent and leaving only
the un-keyed actors and set visible in the 3-D perspective view. You can use this
to place the actors at their appropriate location in the 3-D world as an aid to
creating your composite.
This will happen automatically if the dynamic projection screen is active in
the perspective view (on by default; see the Perspective Projection Screen
Adjust script). The dynamic projection screen holds the imagery within the
perspective view, and is not selectable or visible in other viewports.
For "real" geometry that is visible in other viewports (notably, other
perspective view), the Projection Screen Creator script creates mesh geometry
in the scene that is textured with the current shot imagery. As with the builtin
screen, you can tell the creator to Matte out the screen color.

Note: whether you use the dynamic projection screen or the creator
script, you can use an alpha channel on your shot instead of
SynthEyes's keyer. When doing this, be sure to turn on Keep Alpha
when opening the shot.

With a suitable tracker, use the Camera to Tracker Distance script to
determine the distance to the screen: use the value in parentheses, ie along the
camera axis.
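The value along the camera axis is just the component of the camera-to-tracker vector along the camera's forward direction, as in this generic sketch (plain Python/NumPy, not the script's actual code; the coordinates are made up).

    # Generic illustration of "distance along the camera axis" (not the
    # script's actual code; coordinates are made up).
    import numpy as np

    def distance_along_axis(camera_pos, camera_forward, tracker_pos):
        """Project the camera-to-tracker vector onto the camera's forward axis."""
        forward = np.asarray(camera_forward, dtype=float)
        forward = forward / np.linalg.norm(forward)
        offset = np.asarray(tracker_pos, dtype=float) - np.asarray(camera_pos, dtype=float)
        return float(np.dot(offset, forward))

    print(distance_along_axis([0, 0, 0], [0, 1, 0], [2, 10, 1]))   # -> 10.0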
You can write the green-screen key as an alpha-channel or RGB image
using the image preprocessor. Any roto-splines will be factored in as well. With a
little setup, you can use the roto-splines as garbage mattes, and use small roto
dots to repair the output matte to cover up tracking marks.
Promoting Blips to Trackers


The auto-tracker identifies many features (blips), and combines them into
trails, but only converts a fraction of them to trackers to be used in generating the
3-D solution. Some trails are too short, or crammed into an already densely-
populated area.
SynthEyes allows you to specify that additional blip trails should be
promoted to trackers, either by manually specifying a trail to be converted, or by
using the Add Many dialog to locate many more potential trackers, often in a
special area of interest.

Important: do not do a Clear All Blips or Clean Up Trackers (with
Clear Blips enabled) before attempting to use either method of
converting blip trails to trackers. If the blips are cleared, there will be no
raw auto-track data to create trackers from.

If you wish to have a tracker at a particular location to help achieve an
effect, you could create a supervised tracker, but a quicker alternative can be to
convert an existing blip trail into a tracker; in SynthEyes-speak, this is Peeling a
trail.
To see this, open the flyover shot and auto-track it again. Switch to the
Feature panel and scrub into the middle of the shot. You'll see many little
squares (the blips) and red and blue lines representing the past and future paths
(the trails).
You can turn on the Peel button, then click on a blip, converting it to a full
tracker. Repeat as necessary.
Use the controls at the bottom of the Feature panel to show only the
longest potential trails; here we will show only those that are 100 frames or
longer. If you have corner detection on, you can select only corners as well.
Alternatively, you can use the Add Many Trackers dialog to do just that in
an intelligent fashion, after an initial shot solution has been obtained.
The Add Many Trackers dialog searches the autotracking data for
additional trails that can be converted to trackers. It allows you to specify your
requirements for the trackers to be added, such as a minimum length, maximum
error, or coverage of a certain range of frames.
And, especially for mesh building, it allows you to use a previous camera-
view lasso to indicate an area in which new trackers should be selectively added.
The new trackers are chosen so that they are evenly distributed over the lassoed
area to the extent possible. (If there are few or no significant blips in an area,
nothing can be added there).

Keeping Track of the Trackers


After an auto-track, you will have hundreds or even thousands of trackers.
To help you see and keep track of them, SynthEyes allows you to assign a color
to them, typically so you can group together all the related trackers.
SynthEyes also provides default colors for trackers of different types.
Normally, the default color is a green. Separate default colors for supervised,
automatic, and zero-weighted trackers can be set from the Preferences panel.
You can change the defaults at any time, and every tracker will be updated
automatically, except those for which you have specifically set a color.

You can assign the color by clicking the swatch on the Tracker panel,
or by double-clicking the miniature swatch at the left of the tracker name in the
graph editor . If you have already created the trackers, lasso-select the
group, and shift-click to add to it (see the Lasso controls on the Edit menu for
rectangular lassos). Then click the color swatch on the Tracker panel to set the
color. In the graph editor panel, if you have several selected, double-click the
swatch to cause the color of all the trackers in the group to be set. Right-clicking
the track panel swatch will set the color back to the default.
If you are creating a sequence of supervised trackers, once you set a
color, the same color will be used for each succeeding new tracker, until you
select an existing tracker with a different color, or right-click the swatch to get
back to the default.
You will almost certainly want to change the defaults, or set the colors
manually, if you are handling green-screen shots.
You will see the tracker colors in the camera view, perspective view, and
3-D viewports, as well as the miniature swatch in the graph editor.
If you have set up a group of trackers with a shared color, you can select
the entire group easily: select any tracker in the group, then click the Edit/Select
same color menu item or control-click the swatch in the graph editor.
Each tracker has two possible colors: its primary color, and a secondary
color. The secondary color is used for each tracker when the View menu's Use
alternate color menu item is checked. The second color is usually set up by the
Set Color by RMS Color script; having done that you can switch back and forth
between color selections using the menu item.
To aid visibility, you can select the Thicker trackers option on the
preferences panel. This is particularly relevant for high-resolution displays, where
the pixel pitch may be quite small. The Thicker trackers option will turn on by
default for monitors over 1300 pixels horizontal resolution.
Note that there are some additional rules that may occasionally override
the color and width settings, with the aim of improving clarity and reducing clutter.

Advanced Feature Dialog Effects


The Advanced Feature Dialog controls a few of the internal technical
parameters of the auto-tracker. Here we present a few specific uses of the panel,
both revolving around situations where there are too many blips, degrading auto-
tracking.
Green Screen Shots
A green-screen shot has areas that are flat, meaning that the RGB
values are largely constant over a large area of the screen. Normally, SynthEyes
works adaptively to locate trackable features distributed across the image, but
that can backfire on green-screen shots, because there are usually no features
on the screen, except for the comparatively few that you have provided.
SynthEyes then goes looking for video noise, film grain, small shadows, etc.
Some of the time, it is successful at tracking small defects in the screen.
You can reduce the number of blips generated on these shots by turning
down the Density/1K number in the Small column of the Advanced Feature
dialog, typically to 1%. Try it with Auto Re-blip turned on, then close the panel,
Clear All Blips and do a Blips All Frames.
Too-Busy High Resolution Shots
High-resolution shots can have a similar problem to green-screen shots:
too many blips, which can result in trackers with many mistakes, as the blips are
linked incorrectly. The high-resolution source images can contain too much
detail, even if it is legitimate detail.
In this case, it is appropriate to tweak the Feature Size numbers on the
Advanced Feature dialog. You can first raise the Small Feature Size to 15, and if
necessary, raise both to 25.
This should reduce the number of features and the chances that trackers
jump along rows of repeated features. However, larger feature size values will
slow down processing substantially.

Skip-Frame Track
The Features panel contains a skip-frame checkbox that causes a
particular frame to be ignored for automatic tracking and solving. Check it if the
frame is subject to a short-duration extreme motion blur (camera bump), an
explosion or strobe light, or if an actor suddenly blocks the camera.
The skip-frames checkbox must be applied to each individual frame to be
skipped. You should not skip more than 2-3 frames in a row, or too many frames
overall, or you can make it more difficult to determine a camera solution, or at
least create a temporary slide.
You should set up the skip-frames track before autotracking. There is
some support for changing the skipped frames after blipping and before linking,
but this is not recommended; you may have to rerun the auto-tracking step.

Strengths and Limitations


The automatic tracker works best on relatively well-controlled shots with
plenty of consistent spot-type feature points, such as aerial and outdoor shots.
Very clean indoor sets with only line features can result in few trackable
features; be sure to turn on the Corner detector on the Summary panel. A green-
screen with no tracking marks is un-trackable, even if it has an actor, since
the (moving) actor does not contribute usable trackers.
Rapid feature motion can cause tracking problems, either causing loss of
continuity in blip tracks, or causing blips to have such a short lifetime that they
are ignored. Use the Crash Pan motion profile to address such shots.
Similarly, situations where the camera spins about its optic axis can
exceed SynthEyes's expectations.
You can add supervised guide trackers to help SynthEyes determine the
frame-to-frame correspondence in difficult shots (in Low Detail mode). A typical
example would be a camera bump or explosion with several unusable frames,
disabled with the Skip Frames track. If the camera motion from before to after the
bump is so large that no trackers span the bump, adding guide trackers will
usually give SynthEyes enough information to reconnect the blip trails and
generate trackers that span the bump.
Supervised Tracking
Solving for the 3-D positions of your camera and elements of the scene
requires a collection of trackers tracked through some or all of the shot.
Depending on what happens in your shot, 7 or 8 may be sufficient (at least 6),
but a complex shot, with trackers becoming blocked or going off the edge of the
frame, can require substantially more. If the automated tracker is unable to
produce satisfactory trackers, you will need to add trackers directly. Or, you can
use the techniques here to improve automatically-generated ones. Specific
supervised trackers can be especially valuable to serve as references for
inserting objects, or for aligning the coordinate system as desired.
WARNING: Tracking, especially supervised tracking, can be stressful to
body parts such as your fingers, hands, wrists, eyes, and back, like any other
detail-oriented computer activity. Be sure to use an ergonomically sound
workstation setup and schedule frequent rest breaks. See Click-on/Click-off
mode.
To begin supervised tracking, select the Tracker panel. Turn on the Create
button .

Tip: You can create a tracker at any time by holding down the C key
and left-clicking in the camera view. Or, right-click in the camera view
and select the Create Trackers item. In either case you will be
switched to the Tracker control panel.

Rewind to the beginning of the shot .


Locate a feature to track: a corner or small spot in the image that you
could reach in and put your finger on.

Important: Do not select features that shift depending on camera
location, such as a reflective highlight or the X formed by two tree
branches crossing.

Left-click on the center of your feature, and while the button is down,
position the tracker accurately using the view window on the command panel.
The gain and brightness spinners located next to the tracker mini-view can make
shadowed or blown-out features more visible (they do not affect tracking directly).
Adjust the tracker size and aspect ratio to enclose the feature and a little of the
region around it, using either the spinner or inner handle.
Adjust the Search size spinner or outer handle based on how uncertainly
the tracker moves each frame. This is a matter of experience. A smooth shot
permits a small search size even if the tracker accelerates to a higher rate.
Create any number of trackers before tracking them through the shot. You
might create and track a batch of 3-6 at a time in a simple shot, or only one at a
time on shots requiring more supervision.

Tip: later you'll see how to tell if you have enough trackers using
the colored background in the Graph Editor, or in the timebar, if
View/Timebar background/Tracker count is turned on.

To track them through the shot, hit the Play or frame forward
button or use the mouse scroll wheel inside the tracker mini-view (scrubbing the
time bar does not cause tracking by design). Watch the trackers as you move
through the shot. If any get off track, back up a frame or two, and drag them in
the image back to the right location. The Play button will stop automatically if a
tracker misbehaves, with that tracker already selected for easy correction.
Supervised trackers have many controls; they are there for a reason. You
should be sure to adjust the controls to each specific shot. Usually when people
have problems with supervised tracking, it is because they have not configured
them at all!

Important! In addition to the more obvious tracker size and search
size settings, you should always be sure to select the proper tracker
prediction mode on the Track menu.

Types of Trackers
SynthEyes supports the following types of trackers, as controlled by a
dropdown button on the tracker control panel:
pattern-match trackers,
white and black spot trackers,
symmetry trackers, and
planar trackers.
Pattern matching trackers are the most commonly used type for
supervised tracking: they allow any feature that you can see to be tracked, since
you position the tracker directly. SynthEyes searches subsequent images for the
same image found within the tracker's interior (specified by its size and aspect).
Pattern match trackers can be thrown off by scenes with rapid overall illumination
changes, especially explosions and strobe lighting. For such scenes, set up the
image preprocessor to perform high-pass filtering.
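Schematically, pattern matching slides the stored reference pattern across the search region and keeps the best-correlating position, along the lines of this simplified Python/NumPy sketch (illustration only; the real tracker adds sub-pixel refinement, prediction, and many other details).

    # Schematic pattern matching: slide the reference pattern over the search
    # area and keep the best normalized-correlation position (Python/NumPy,
    # far simpler than the real tracker; no sub-pixel refinement, etc.).
    import numpy as np

    def best_match(search_area, pattern):
        """Return the (row, col) where `pattern` best matches `search_area`."""
        ph, pw = pattern.shape
        p = pattern - pattern.mean()
        best_score, best_pos = -np.inf, (0, 0)
        for r in range(search_area.shape[0] - ph + 1):
            for c in range(search_area.shape[1] - pw + 1):
                w = search_area[r:r + ph, c:c + pw]
                w = w - w.mean()
                denom = np.sqrt((w * w).sum() * (p * p).sum())
                score = (w * p).sum() / denom if denom else -1.0
                if score > best_score:
                    best_score, best_pos = score, (r, c)
        return best_pos

    search = np.random.rand(20, 20)             # stand-in for the search region
    pattern = search[8:13, 11:16].copy()        # the feature as seen previously
    print(best_match(search, pattern))          # -> (8, 11)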
Spot trackers are produced by auto-tracking, though you may use them as
well. As the name suggests, they look for the center of a white or black spot,
positioning the center of the spot exactly at the center of the tracker. If you have
a suitable feature, they can be tracked easily through the shot without drift. You'll
need to adjust the size of the tracker properly through the shot: if the tracker is
too big, surrounding imagery will influence the tracker position; if it is too small,
the spot will jump around between small local maxima within the tracker. Note
that typically there are many small local spots (local maxima) that might be
selected; SynthEyes uses a preliminary low-resolution pattern match to identify
the right spot from frame to frame. As with straight pattern match trackers, this
preliminary match can be thrown off by adverse conditions in the shot, hence
your supervision.
Symmetry trackers look for locations where the interior of the tracker is
symmetric: it looks the same when it is reversed top to bottom and left to right
simultaneously. This encompasses spots as well as other more complex patterns
with shapes similar to X's, H's, and often (weaker) nearly-symmetric features
such as C's and U's.
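The symmetry test can be pictured as correlating a patch with its own 180-degree rotation, as in this toy Python/NumPy sketch (an illustration of the concept, not the actual SynthEyes implementation).

    # Toy symmetry test: a patch scores highly when it matches itself flipped
    # top-to-bottom and left-to-right, i.e. rotated 180 degrees (Python/NumPy;
    # not the actual SynthEyes tracker).
    import numpy as np

    def symmetry_score(patch):
        """Normalized correlation between a patch and its 180-degree rotation."""
        p = patch - patch.mean()
        r = p[::-1, ::-1]                       # flip both axes
        denom = np.sqrt((p * p).sum() * (r * r).sum())
        return (p * r).sum() / denom if denom else 0.0

    spot = np.zeros((9, 9)); spot[3:6, 3:6] = 1.0     # centered symmetric blob
    step = np.zeros((9, 9)); step[:, 5:] = 1.0        # off-center step edge
    print(symmetry_score(spot), symmetry_score(step)) # high vs. low score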
Planar trackers are substantially different, because they track a whole
rectangular (in 3-D) region, not a single point feature. They are pattern-match
trackers on steroids, if you like. For clarity, planar trackers are described in the
separate 3-D Planar Tracking Manual (available from the SynthEyes Help menu).
When dragging a spot or symmetry tracker in the camera view, tracker
control panel mini-view, or SimulTrack view, the tracker position will automatically
snap exactly to nearby peak locations, so that key locations can be set precisely
to match subsequent tracked frames. In the tracker mini-view and SimulTrack
view, an X marker will appear to show the nearest potential tracker location. If
required, you can suppress snapping by holding down the ALT/Command key
while dragging.
The spot trackers created by autotracking are tracked only to the nominal
shot resolution, while spot trackers that are supervised are tracked at a higher
resolution controlled by the high-resolution setting at bottom of the Track menu
(typically 4x resolution). So you'll see some small off-centering in the X tracker
location when viewing autotrackers with the tracker mini-view or SimulTrack. A
simple re-track of auto-trackers will not eliminate those differences, because the
positions are all keyed directly. To re-run the autotrackers at the higher
resolution, use a Fine-Tune with a Key Spacing of one (ie without converting to
pattern matches).

Channel Selection
Trackers can be set to look at only the red, green, blue, or luminance
channels of the input imagery. (By default, all three channels are used.) Selecting
a single channel may be useful in specific situations, such as when one channel
has very high noise; you should use RGB in most circumstances until proven
otherwise.
You can set the channel for each specific tracker, using the channel
selection on the tracker control panel, or for all trackers by adjusting the channel
selection on the Rez tab of the image preprocessor. The channel selector initially
shows , which is the RGB setting.
Note that while an Alpha selection is shown on the tracker control panel's
channel flyout, it is usable only by planar trackers. If you want to track the alpha
channel with non-planar trackers for some reason, you'll need to set that using
the image preprocessor.

Prediction Modes and Hand-Held Shots


SynthEyes predicts where the feature will appear in each new frame, in
order to minimize the required area that must be searched. That not only saves
time, but more importantly minimizes the chances that a nearby similar-looking
but incorrect feature is selected instead.
SynthEyes has different prediction modes for different types of shots. By
default, in the Steady camera mode, it assumes that the shot is smooth, from a
steadi-cam, dolly, or crane, and uses the previous history over a number of
frames to predict its next position.
If your shot and the individual trackers are very rough, especially as you
are tracking the first few trackers, you may find that the trackers aren't too
predictable, and you can set the mode to Hand-Held: Sticky, in which case
SynthEyes simply looks for the feature at its previous location (requiring a
comparatively large search region).
Once you've already tracked several trackers on a hand-held shot, select
Hand-Held: Use others on the Track menu. In this mode, SynthEyes uses other,
already-tracked, trackers to predict the location of new ones. You can greatly
reduce the search size and will need to set new keys only occasionally as the
pattern changes.
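The difference between the modes can be pictured with this toy Python sketch (illustration only; the real predictor uses more history, and Hand-Held: Use others also draws on other, already-tracked trackers).

    # Conceptual sketch of two prediction strategies (illustration only; the
    # actual predictor uses more history, and "Use others" also consults
    # already-tracked trackers).

    def predict_sticky(history):
        """Hand-Held: Sticky style: look where the feature was last frame."""
        return history[-1]

    def predict_steady(history):
        """Steady-camera style: extrapolate recent motion (constant velocity)."""
        if len(history) < 2:
            return history[-1]
        (x0, y0), (x1, y1) = history[-2], history[-1]
        return (2 * x1 - x0, 2 * y1 - y0)

    path = [(0.50, 0.50), (0.52, 0.51), (0.54, 0.52)]   # smooth dolly-like motion
    print(predict_sticky(path))    # (0.54, 0.52): needs a larger search region
    print(predict_steady(path))    # about (0.56, 0.53): near the true next spot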
Using the predict mode, you'll sometimes find that a tracker is suddenly
way out of position, that it isn't looking in the right place. If you check your other
trackers, you'll find that one of your previously-tracked trackers is off course on
either this or the previous frame. You should unlock that tracker, repair it, relock
it, and you'll see that the tracker you were originally working on is now in the
correct place (you may need to back up a frame and then track onto this frame
again).
For some special situations, the Re-track at existing mode uses the
previously-tracked location, and looks again nearby (perhaps after a change in
some tracker settings). The search size can be kept small, and the tracker will
not make any large jumps to an incorrect location, if the track is basically correct
to begin with. SynthEyes uses this mode during fine-tuning. Note: on any frames
that were not previously tracked, Hand-Held: Sticky mode will be used.
Once a shot is solved, and a tracker has been assigned a 3-D location
(either by the solve or because it is a zero-weighted tracker), an additional option
becomes possible: looking for the tracker at the location predicted from its solved
3-D location. This feature is controlled by the Search from solved option on the
Track menu, and is on by default. It is useful for adding or extending trackers, or
for stereo shots, but not for offset trackers. For more information see the section
on Post-Solve Tracking.

Adjusting While Tracking


You advance the tracker through the shot using Play, frame forward, the
period key accelerator, or middle scroll wheel (for backwards trackers, frame
backward or comma key). You supervise it, making sure it stays accurately
locked on the desired image feature.
Important! SynthEyes can find the pattern only if you have configured the
tracker appropriately: the search area must be large enough, and the prediction
mode suitable, so that the pattern is still inside the search area. You can animate
the tracker size and search size as you progress through the shot.
If a tracker goes off course, you can fix it several ways: by dragging it in
the camera view, by holding down the Z key and clicking and dragging in the
camera view, by dragging in the small tracker mini-view, or by using the arrow
keys on the number pad. (Memo to lefties: use the apostrophe/double-quote key
/ instead of Z.)
You can lock SynthEyes in this "Z-Drop" mode using the Track/Lock Z-
Drop on menu item. In the zdrop-lock mode, a single selected tracker will be
moved to the mouse location immediately when the button is depressed. In
zdrop-lock mode, if you click over a mesh, it will be ignored. You can click a
different tracker to select it, or use other usual left-mouse functionality, without
issue. The status line will show ZDROP-LOCK when the mouse is in the camera
view.
If a tracker gets occluded or goes off the edge you should turn it off (its
stoplight-like enable ) for a few frames. (See also the Hand Animation and
Offset Tracking techniques). When you turn it back on, SynthEyes will try to
seamlessly reacquire the tracker pattern. If so, no intervention is required. If it
has been too long since the tracker was last seen, SynthEyes will not look for it.
You must reposition it manually by dragging in the tracker mini-view or with Z-
Drop. You can adjust the number of frames until SynthEyes stops looking with
the "Stay Alive" preference.
You can keep an eye on a tracker or a few trackers by turning on the Pan
to Follow item on the Track menu (keyboard: 5 key), and zooming in a bit on the
tracker, so you can see the surrounding context. When Pan To Follow is turned
on, dragging the tracker drags the image instead, so that the tracker remains
centered. See the SimulTrack view for monitoring multiple trackers
simultaneously.
Also, the number-pad-5 key centers the selected tracker whenever you
click it.
Staying on Track and Smooth Keying


Help keep the trackers on course with the Key Every spinner, which
places a tracker key each time the specified number of frames elapses, adapting
to changes in the pattern being tracked. If the feature is changing significantly,
you may want to tweak the key location each time the key is added automatically.
Turn on the Stop on auto-key item on the Track menu to make this easier.
When you reposition a tracker, you create a slight glitch in its path that
can wind up causing a corresponding glitch in the camera path. To smooth the
glitches away, set the Key Smooth spinner to 3, say, to smooth it over the
preceding 3 frames. When you set a key, the preceding (3 etc) frames need to be
re-tracked. This will happen automatically if Track/Smooth after keying is turned
on. Or, you can turn on Pre-Roll by Key Smooth on the Track menu, and
SynthEyes will automatically back up and retrack the appropriate frames when
you resume tracking (hit Play) after setting a key.
The combination of Stop on auto-key and Smooth after keying or Pre-
roll by Key Smooth makes for an efficient workflow. You can leave the mouse
camped in the tracker view window for rapid position tweaks, and use the space
bar to restart tracking to the next automatic key frame. See the web-site for a
tutorial showing this.
Warning: if SynthEyes is adding a key every 12 frames, and you want to
delete one of those keys because it is bad, it may appear very difficult. Each time
you delete it (by right-clicking in the tracker view, Now button, or position
spinners), a new key will immediately be created. You could just fix it. Or, you
should back up a few frames, create a key where the tracker went off-course,
then go forward to delete or fix the bad key.
Warning 2: depending on your settings and computer speed, you can
create situations where moving a keyed tracker location is very slow, because
you are asking SynthEyes to recalculate the tracker on many frames. If this
occurs, adjust the number of frames or turn off automatic smoothing (which
means you will have to run through the tracker again when it is complete.)
Warning 3: Smooth after keying now defaults to OFF because too many
people were grinding their machines to a halt without understanding the cause.

Animating Tracker Size and Search Size


As you track through a shot, you can change the tracker size and search
size to adapt to changes in the shot. SynthEyes will record these changes as
keys onto the corresponding animated tracks, which you can see and modify in
the Graph Editor. Values between keys are linearly interpolated, by default.
There are some fine points to consider: tracking is a bit different than animating!

Tip: Normal tracker position keys appear as black triangles on the time
bar. When there is a key on a secondary channel, such as tracker size
or search size, and not on the tracker position, a gray tracker key is
shown instead.

First, suppose you are tracking with a vertical search size of 0.03. At
frame 30, the shot is bouncier, and you increase the vertical search size to 0.04
so that the pattern continues to be found.
Later, you go back and have SynthEyes play through the shot again, re-
tracking the tracker. Frame 29 was originally tracked with a search size of 0.03,
but now it will be tracked with a search size close to 0.04, say 0.0395,
which is substantially larger. Depending on the situation, there is a chance that the
tracker will no longer be placed at the same, presumably correct, location as it
was originally. The same is true of the earlier frames, to correspondingly smaller
chances.
For a 100% reproducible effect, the search size keys could be interpolated
using staircase interpolation. But that doesn't really correspond to the underlying
situation, and requires some bizarre and incomplete changes if you later change
the tracking direction. If you'd tracked backwards, you'd probably have set the
sizes the other way, resulting in utterly different sizes in the middle frames. With
linear interpolation, the results are the same in both directions.
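The interpolation itself is just ordinary linear interpolation between keys, as in this small Python sketch (illustration only; the key frames used here, 0 and 30, are assumed for the example).

    # Linear interpolation of an animated value (e.g. vertical search size)
    # between keys; illustration only, with the key frames (0 and 30) assumed.

    def interpolate_keys(keys, frame):
        """Linearly interpolate (frame, value) keys at the requested frame."""
        keys = sorted(keys)
        if frame <= keys[0][0]:
            return keys[0][1]
        if frame >= keys[-1][0]:
            return keys[-1][1]
        for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)
                return v0 + t * (v1 - v0)

    keys = [(0, 0.03), (30, 0.04)]     # original key, plus the new key at frame 30
    print(interpolate_keys(keys, 29))  # about 0.0397: close to 0.04, not 0.03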
The same effect occurs when you change the tracker size and aspect
ratio: due to the linear interpolation, changes are effectively retroactive and may
cause problems to surface in previously-tracked frames.
But especially in the case of tracker size and aspect, the linear
interpolation may result in more accurate tracking data if the earlier
frames are re-tracked.
Depending on the tracker settings, in particular if you use Smooth after
Keying, the affected earlier frames may automatically be re-tracked.
Linear interpolation may sometimes cause a change upon re-tracking, but
it makes more sense in general!
Tracker Size/Aspect Details
When SynthEyes goes to track a specific frame, it interpolates the
size/aspect keys to determine the values on that frame. That same size is used
to determine the reference pattern from the reference frame also, ie the
same number of pixels are compared between the two frames. The images are
not rescaled; this is the key difference between normal trackers and planar
trackers. Using the same subimage size is simpler, faster, and avoids resampling
errors.
If the feature being tracked is dramatically different sizes between the two
images (search and reference), tracking will be less accurate or even fail. This
could occur in a long push-in or pull-back shot.
Accordingly, if you are animating the tracker size, you should be sure to
set new tracker position keys regularly, for example using Key Every. That way
the difference in scale will never be that large.
If this is an issue, you should use a planar tracker.

Track by X Frames Script


The Track-by-X-frames script creates a small control panel that lets you
click one button and have SynthEyes track for the given number of frames
(forward, if positive, backward, if negative). Additional buttons let you jump back
by X frames, track ahead a frame, go back a frame, set a key, etc.
This gives another workflow similar to using Key-every, Stop on auto-key,
back 1 key, forward/backward one frame, etc, but without generating keys. The
script puts the relevant buttons in one place; you might want to try the Key Every
approach after that.

Supervised Tracking with SimulTrack


The SimulTrack view provides a sophisticated look into the entire history
of a tracker or trackers, serving either as a way to review tracking, or a way to do
supervised tracking, as discussed here. Please see the reference page for
details of operating SimulTrack.

The SimulTrack view shows each frame of a tracker with a position key,
and allows you to adjust the position key location: essentially the SimulTrack is
an entire collection of tracker mini-views, each corresponding to a different
frame.
To take advantage of this, you can set up a tracker for smooth keying, as
in the prior section, open a SimulTrack view (either floating or as part of a
viewport configuration), and track the tracker.
You'll then see all automatically-generated keyframes simultaneously,
and you can adjust any of them directly in the SimulTrack view, without having to
change frames if you do not want to. Make sure Track/Smooth after keying is on,
and the adjacent frames will automatically be updated to reflect the changed key.
Note: if your machine's performance doesn't allow adequately rapid
updates with Smooth after keying turned on, turn it off and re-run the
track from the beginning after you have adjusted all the keys.

After you have an initial solve for the shot, you have an exciting option
available to you: you can have SynthEyes generate the entire set of auto-keys
automatically, acting as if the tracker is a zero-weighted tracker. Set the first
position key at the beginning of the lifetime of the tracker. Then click to a much
later frame where the tracked feature is still visible, and set another key using Z-
Drop (hold down the 'z' key and click in the camera view).
The two keys enable SynthEyes to estimate the tracker's 3-D location,
then generate a position key at each appropriate frame (determined by its key-
automatically setting on the Tracker Control Panel). All of those automatically-
generated keys will pop up in the SimulTrack view. The locations will be
approximate, based on the accuracy of your keys and the existing solve.
Then, use the SimulTrack view to tweak the positioning of each of the
generated key locations, which will trigger the Smooth after keying functionality.
You can use the strobe functionality (click on the 'S') to check each key location
for consistency with its neighbors; go ahead and adjust it even while strobing!
After each key has been adjusted, you'll have an accurate supervised track for
the feature.

Tip: SimulTrack shows the tracked image feature of an offset tracker,
plus an offset marker for the final location, when the tracker is
unlocked. The offset marker can be dragged, or the view shift-dragged.
Once it is locked, SimulTrack shows only the final location.

Note that using SimulTrack is one potential workflow, not a required
workflow. On a simple shot, allowing supervised trackers to track through an
entire shot by themselves may be faster. You can then still use the SimulTrack
view to monitor the results. We provide the tools; you decide the best way to
use them.

Reference Crosshairs
You can enable the display of reference crosshairs with View/Show
Reference Crosshairs on the main or camera's right-click menu (default keyboard
accelerator: shift-+). The crosshairs can be handy for features on corners, or
where comparison to other nearby features is desired. The horizontal and vertical
crosshairs can be adjusted independently to any desired orientation and length
(the horizontal and vertical nomenclature refers only to the initial position).
The crosshairs can and typically must be animated to be useful, to match
the desired feature over the length of the shot. To manipulate the keys, see the
graph editor.
Occluded or Off-Image Trackers


If an actor or other object permanently obscures a tracker, turn off its
enable button, disabling it for the rest of the shot, or until you re-enable it.
Trackers will turn off automatically at the edge of the image; turn them back on if
the image feature re-appears. (If the shot has a non-image border, use the
region-of-interest on the Image Preprocessing panel so that trackers will turn off
at the right location.)
When a tracker has become lost or disabled, its search box will continue
to be displayed for several more frames (default: 10) so that you may re-position
and re-enable the tracker if the feature re-appears. You can control the number
of frames a tracker remains visible using the Stay Alive value on the preferences
panel. If the value is zero, the search box will never be removed. Too many
lingering search boxes will clutter up the display!
If the tracker has been lost/disabled for long enough that its search box
has been hidden, hold down the 'Z' key and click the new location to reenable
and place the tracker. (This is Z-drop, see also Z-drop-lock.) You can also just
drag in the tracker's mini-view and it will reactivate.

Cliffhanger Trackers
As a tracker approaches the edge of the image, it will be shut off
automatically. This is undesirable if the tracker comes close to the edge, but then
moves around without actually going off the edge. Simply turning the tracker
back on is ineffective, because it is immediately turned back off, and even setting
a key position is only temporary, as the tracker will typically be turned off again
the next frame.
To handle this situation, turn on the Cliff (cliffhanger) button on the tracker
control panel, or on the camera view right-click menu in the Attributes section.
This will disable the automatic shutoff at the image edge.

Tip: Cliffhanger mode will turn on automatically if you re-enable a
tracker or move it in the tracker mini-view on the frame it was
automatically turned off.

Once cliffhanger mode is on for a tracker, you should continue to
supervise it carefully for the remainder of the shot, as it will not turn back off
automatically even if it comes back on screen (see next paragraph). Be alert for
situations where the supervised tracker's interior (not just its search region) goes
partially off-screen, as the tracker will likely require you to key it on those frames
(unlike a planar tracker).
We recommend not turning off the cliffhanger button once it has been
turned on for a given tracker. That is so that if the tracker is re-tracked through
the entire shot, the same issue doesn't reoccur. For this reason, SynthEyes does
not turn off the cliffhanger status, though it easily could. Be careful to supervise
situations where a cliffhanger tracker comes back onscreen, then goes offscreen
much later in the shot.

Changing Tracking Direction


You can also track backwards: go to the end of the shot, reverse the
playback direction , and play or single-step backwards.
You can change the tracking direction of a tracker at any time. For
example, you might create a tracker at frame 40 and track it to 100. Later, you
determine that you need additional frames before 40. Change the direction arrow
on the tracker panel (not the main playback direction, which will change to
match). Note that you introduce some stored inconsistency when you do this.
After you have switched the example tracker to backwards, the stored track from
frames 40-100 uses lower-numbered reference frames, but backwards trackers
use higher-numbered reference frames. If you retrack the entire tracker, the
tracking data in frames 40-100 will change, and the tracker could even become
lost in spots. If you retrack in the new direction, you should continue to monitor
the track as it is updated. If you have regularly-spaced keyframes, little trouble
should be encountered.

Finishing a Track
When you are finished with one or more trackers, select them, then click
the Lock button . This locks them in place so that they won't be re-tracked
while you track additional trackers.

Hand-Animating a Tracker
Hand animation can be a useful technique to create approximate tracking
data when an object is occluded and the camera or object motion is very smooth,
ie the camera is on a crane or dolly. It is not useful when there is a lot of vibration
from a hand-held camera; in that case use Offset Tracking. Hand-animation is
typically used when there are few available trackable features, ie for object
tracking. If there are many trackers, there's little incentive to go to the trouble.
Hand-animation uses the By Hand button on the tracker control panel.
Suppose you're tracking a corner of a building and a pole passes in front of the
corner. On the first occluded frame, turn on By Hand (instead of turning off
Enable).

Hint: Follow what By Hand does by using the Camera & Graphs view.
Open it to your selected tracker with the U Pos, V Pos, Enable, and
Hand Animated curves displayed.

The tracker graphic's search rectangle disappears, because it is no longer
searching. If you run forward a few frames, the tracker remains stationary, no
longer following anything: you're hand-animating!
When the tracked feature has reappeared, drag the tracker graphic back
to the correct location, and turn off By Hand. The tracker is now tracking again,
and if you continue to move forward into the shot it continues to track the feature.
What's interesting is what has happened in the middle, while By Hand was
on. The tracker is now linearly interpolated between the two endpoints. You can
adjust the keyframe where you turned off By Hand, and the intervening path
updates accordingly.
Furthermore, you can now add additional tracker position keys in the
middle of the interpolated region to generate a desired trajectory. You can add
keys by repositioning the tracker in the camera view on the appropriate frame, or
in the graph editor using the Add Key mode. Either way, those keys are artistic
decisions based on your tracking skill.

Warning: When you disable and later re-enable a tracker, you are
saying you don't know what happens in between. That's safe. When
you use By Hand, you are claiming you know what happened. If you
aren't close to right, you will make the solution worse. Therefore, if you
have many trackers, it makes sense to Enable and Disable. Hand-
animation makes sense when there are few trackable features and
every one counts.

The By Hand button is an animated track, so you can have multiple
separate hand-animated regions during the shot, for example, each time a
telephone pole goes by. And you can adjust the keys at any time. (It also works
fine for forwards or backwards trackers.)
To better understand what it is doing, here's a brief explanation. When you
change By Hand or set a tracker position key, a spline interpolation routine runs.
It looks to see if the frame where a change was made is in, or immediately
adjacent to, a sequence of frames where By Hand is on. If so, it acquires all the
tracker keys in that region, plus the tracker position immediately before and after
the sequence of frames. Those keys are then interpolated and those positions
stored on every frame that is not a key.
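
To make the mechanics concrete, here is a rough sketch of that kind of interpolation in Python. It is only an illustration, not SynthEyes's actual code: the frame numbers and positions are made up, and SciPy's CubicSpline merely stands in for whatever spline routine SynthEyes uses internally.

    # Minimal sketch: fill the non-key frames of a By Hand region by splining
    # the hand-set keys together with the tracked positions just outside it.
    from scipy.interpolate import CubicSpline

    # (frame, u, v) anchors: last tracked frame before the region, a hand-set
    # key inside it, and the first tracked frame after it (hypothetical values).
    anchors = [(9, 0.42, 0.31), (15, 0.45, 0.33), (21, 0.47, 0.36)]
    frames = [a[0] for a in anchors]
    spline_u = CubicSpline(frames, [a[1] for a in anchors])
    spline_v = CubicSpline(frames, [a[2] for a in anchors])

    # Store an interpolated position on every in-between frame that is not a key.
    for f in range(frames[0] + 1, frames[-1]):
        if f not in frames:
            u, v = float(spline_u(f)), float(spline_v(f))
            # ...this (u, v) would become the tracker's position on frame f...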
If you originally used Enable and re-enabled for a temporary occlusion,
you can later go back and change it to use By Hand. Just go to the first occluded
frame (which will be disabled), and turn on By Hand. SynthEyes will re-enable
the tracker for the disabled section, and animate By Hand to be on for exactly
that previously-disabled section. Magic!
There is a more advanced case worth pointing out where By Hand may
appear not to be working, but is being safe. That's when you have a nicely-
tracked tracker with a dodgy section in the middle, and you want to replace the
dodgy part with a hand-animated part.
In that case, when you turn on By Hand, note that initially it will be on for
the entire rest of the shot. If SynthEyes were to blindly do the spline interpolation
described above, it would overwrite the entire rest of the track (except for any keys). That wouldn't be a permanent disaster, since you could just have it Play
through the rest of the shot (or Undo), but it would be inconvenient.
To avoid that, when the By Hand region extends to the end of the shot,
SynthEyes stops the splining process at the next following tracker key. That
protects the rest of the already-tracked frames, limiting the potential damage
(often to just the right spot).
If you need to control what frames get replaced precisely before turning on
By Hand, either set a tracker position key at the frame you will turn off By Hand,
or animate the Enable on and off for the right section, so that the usual simple
case applies. In any case, once you've adjusted the By Hand region, you can just
play through some frames after the end of the region to restore them if they had
been replaced.
Note that if you try to animate a tracker completely from scratch using By
Hand, it won't interpolate past the second tracker key and you may think
something is broken, but that is just the protective mechanism in action. Turn By
Hand off on the very last frame of the shot and put a final position key there and
splining will be active for the entire duration.

Offset Tracking
In offset tracking, you track one feature in order to track another. Offset
tracking utilizes additional controls on the tracker control panel to handle two
situations:
when the feature being tracked is temporarily obscured, but a nearby
feature is not, and
when an additional feature is to be tracked that is very close to an
existing completed tracker.
When handling obscured features, hand-animated tracking can be simpler
for shots where the camera is mounted on a crane or dolly. Offset tracking is
valuable for shots from hand-held cameras, since the already-tracked vibration
carries over automatically to the offset tracker.

In both cases above, you'll use the Tracker control panel's offset
channels, which offset the final tracker position from the position being visually
tracked. The offset is applied only if the Offset button is turned on. You can
animate both the offset enable button and the offset channels, though usually
you'll create the offset values by dragging in the camera view, rather than
adjusting the spinner values.

WARNING: you should have plenty of experience with supervised tracking before considering offset tracking. The offset trackers can only be as good as the original supervised tracks.

When the offset is enabled, you'll see a small crosshair at the final
location, and the usual tracking graphics at the location being visually tracked
as long as the tracker is not locked. If the tracker is locked, then the tracker
graphics are the standard locked-tracker graphics, at the final location.
Similarly, the tracker mini-view and SimulTrack views show the location
being visually tracked if the tracker is not locked, and show the image of the final
location if the tracker is locked.

Important: Offset tracking is always less accurate than tracking real image features: you're making data up, hopefully artfully, so it's only an
approximation to the right position. Temporarily disabling the tracker
may be a better alternative. Offset tracking is valuable when very few
trackable features are available, or when unaddressed lens distortion
or other tracking issues are causing pops when trackers appear or
disappear (see also the Transition Frms. spinner on the Solver panel).

Creating a Temporary Offset


Suppose you are tracking a sign on a wall, and an actor suddenly walks in
front of it, blocking it during frames 10 to 20. There's a light fixture a bit higher on
the wall that is never occluded by the actor. You'd like to use offset tracking to
create one continuous track for the sign. Here's what to do:
1. Go to frame 9, the frame BEFORE the sign gets blocked, when both
sign and light fixture are visible.
2. Turn on the Offset button on the Tracker control panel.
3. In the camera view, hold down shift and drag the tracker from the sign
to the light fixture. This will leave the offset marker (final tracker
location) at the original location (sign) and put the tracker on the
feature to be tracked (light).
4. Track forwards normally (on the light), ie by hitting frame forward, the
period key, Play, or scroll wheel in the mini-tracker view. If necessary,
adjust the tracker location, but do not move the offset marker or
change the offset spinners.
5. Track until you reach frame 21 (stopping on frame 21), when the sign
and light are both visible again.
6. Adjust the position of the offset marker to re-position it exactly on the
sign.
7. Turn off the Offset button. The tracker snaps back onto the sign.
8. Continue tracking the sign.
The process looks more complicated than it is in practice. Due to changes
in the camera viewing angle, the required offset (from light to sign) won't be the
same at the beginning and end of the offset. The point of the steps above is to
set exact keys on the offset channel at both ends of the offset section; the offsets
interpolate linearly in between.
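
The arithmetic behind that is simple: the final tracker position is the visually tracked position plus the current offset, and the offset is interpolated between its keys. The sketch below is illustrative only; the frame numbers and offset values are invented, and NumPy's interp simply stands in for the linear key interpolation.

    import numpy as np

    # Offset keys set at the two ends of the occluded section (frames 9 and 21).
    key_frames  = [9, 21]
    key_offsets = np.array([[0.00, -0.05],    # light-to-sign offset measured at frame 9
                            [0.01, -0.06]])   # offset re-measured at frame 21

    def final_position(frame, tracked_uv):
        """Tracked position (the light) plus the interpolated offset (back to the sign)."""
        ou = float(np.interp(frame, key_frames, key_offsets[:, 0]))
        ov = float(np.interp(frame, key_frames, key_offsets[:, 1]))
        return tracked_uv[0] + ou, tracked_uv[1] + ov

    print(final_position(15, (0.50, 0.40)))   # halfway through, the offset is halfway too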

You can change to a different reference pattern at any time during offset
tracking, simply by shift-dragging the tracker to a new feature. (This sets a new
key on the offset channels at the prior frame, in addition to the current frame, to
make the offsets behave properly; see the graph editor for details.)
You can also use offset tracking when you are tracking backwards (from
large frame numbers to small frame numbers); the procedure is the same,
though you track in the other direction.
Tracking an Additional Feature with Offsets
If you've already got one solid tracker, and need a few other nearby
trackers, you can also use offset tracking. For example, you may want to quickly
track additional features for detailed 3-D modeling, or may want a track of the
corner of an object when a very variable background is moving behind it.
Because you are relying on an already-tracked tracker, the additional offset
trackers are easy to generate, even if the original tracking was difficult.
To do this, complete 2-D tracking of the easy feature, then click the New+ button (New Offset Tracker) on the Tracker control panel to make (clone) a copy of the selected tracker. If the tracker already has an offset track, you will be asked if you want to remove it, which you should: it will be baked into the tracker's path.
Rewind to the beginning of the tracker, then drag the offset cursor to the
desired feature to be tracked. Work through the shot, periodically adjusting the
position of the offset cursor to stay accurately on the desired point to be tracked.

Important! Don't change the underlying tracker, only the offset marker.

That might be every 20 frames, or whenever there is a key on the tracker position; how often depends on how much the camera view angle is changing. You may want to use the graph editor to change the key type of the offset channel keys from linear (suitable for temporary offsets) to smooth.

Tip: To monitor the position of the offset marker carefully, press the '5'
key to turn on pan-to-follow in the camera view. Zoom into the camera
view.

Scrub through the shot to monitor the position of the offset marker. You
can stop adding additional keys when the offset position is suitably accurate
throughout the entire length of the shot (underlying tracker).
If the offset tracker winds up in the wrong 3-D location compared to the
original tracker, it is because you have not set up the offset channel accurately!
The relative 3-D location is ENTIRELY determined by what offset keys you set.
An offset tracker does not contain as much information as a normal tracker.

Offset tracking is easier for simple camera motions, such as a left-to-right dolly (even if it bounces quite a bit), than for a complex move, which will require many different offset keys to get right.

Combining Trackers
You might discover that you have two or more trackers tracking the same
feature in different parts of the shot, or that are extremely close together, that you
would like to consolidate into a single tracker.
Select both trackers, using a lasso-select or by shift-selecting them in the
camera view or graph editor (see the Lasso controls on the Edit menu for
rectangular lassos). Then select the Track/Combine trackers menu item, or press
Shift-7 (ampersand, &). All selected trackers will be combined, preserving
associated constraint information.
If several of the trackers being combined are valid on the same frame,
their 2-D positions are averaged. Any data flagged as suspect is ignored, unless
it is the only data available. Similarly, the solved 3-D positions are averaged.
There is a small amount of intelligence to maintain the name and configuration of
the most-developed tracker.
When you combine trackers that have offsets, the offset channel data is
lost: it is baked into the combined tracker position.
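
Conceptually, the overlap handling amounts to a per-frame average that skips suspect samples unless nothing better exists. A minimal sketch, using made-up data structures rather than anything from SynthEyes:

    def combine_frame(samples):
        """Average the (u, v, suspect) samples valid on one frame, ignoring
        suspect data unless it is the only data available."""
        valid = [s for s in samples if s is not None]
        good = [s for s in valid if not s[2]]
        use = good if good else valid
        if not use:
            return None                      # no tracker valid on this frame
        u = sum(s[0] for s in use) / len(use)
        v = sum(s[1] for s in use) / len(use)
        return (u, v)

    # One normal sample and one flagged as suspect: only the good one counts.
    print(combine_frame([(0.40, 0.30, False), (0.52, 0.33, True)]))   # (0.4, 0.3)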
Note: the camera view's lasso-select will select only trackers enabled on
the current frame, not the 3-D point of a tracker that is disabled on the present
frame. This is by design for the usual case when editing trackers. Control-lasso
to lasso both the 2-D trackers and the 3-D points, or shift-click to select 3-D
points.

Filtering and Filling Gaps in a Track


To produce even smoother final tracks, instead of Locking the trackers,
click the Finalize button. This brings up the Finalize dialog, which filters the
tracker path, fills small missing gaps, and Locks the tracker(s). Though filtering
can create smoother tracks, it is best used when the camera path is smooth, for
example, from a dolly or crane. If the camera was hand-held, smoothing the
tracker paths causes sliding, because the trackers will be smoother than the
camera!
If you have already begun solving and have a solved 3-D position for a
tracker, you can also fill small gaps or correct obvious tracking glitches by using
the Exact button on the tracker panel, which sets a key at the location of the
tracker's 3-D position (keyboard: X key, not shifted). You should do this with
some caution, since, if the tracking was bad, then the 3-D tracker position and
camera position are also somewhat wrong.

Pan To Follow
While tracking, it can be convenient to engage the automatic Pan To
Follow mode on the Track menu, which centers the selected tracker(s) in the
camera view, so you can zoom in to see some local context, without having to
constantly adjust the viewport positioning.
When pan to follow is turned on and you start to drag a tracker, the image will be moved instead, so that the tracker can remain centered. This may be surprising to begin with.
Once you complete a tracker, you can scrub through the shot and see the
tracker crisply centered as the surroundings move around a bit. This is the best
way to review the stability of a track.
Pan to Follow applies to both eyes of a stereo setup simultaneously, ie
turning on Pan To Follow will do so for both eyes, based on the selected trackers
or the cross-selected trackers for the other eye.

Skip-Frame Track
If a few frames are untrackable due to a rapid camera motion, explosion,
strobe, or actor blocking the camera, you can engage the Skip Frame checkbox
on the feature panel to cause the frame to be skipped. You should only skip a
few frames in a row, and not that many overall.
The Skip Frames track will not affect supervised tracking, but it affects
solving, causing all trackers to be ignored on the skipped frames. After solving, the camera will have a
spline-interpolated motion on the resulting unsolved frames.
If you have a mixture of supervised and automatic tracking, see the
section on the Skip-Frame track in Automated Tracking as changing the track
after automated tracking can have adverse effects.

Fine-Tuning the Trackers
Supervised tracking can always produce the most accurate results by
definition, because a human can always look at an auto-track and find something
to improve. Supervised tracking is also aided by the high accuracy of the
pattern-matching supervised tracking algorithms.
You can tell SynthEyes to make a second pass through the images, re-
tracking them using the pattern-matching supervised tracker. This fine-tuning
process can give you closer to the accuracy of a careful supervised track, though
it will take the computer a bit longer to process.

Trick: if you set the Key Spacing parameter to 1, ie a key on every frame, SynthEyes will leave the trackers as spot trackers and re-set all the position keys as if it were a supervised spot tracker, ie at the higher (interpolated) image resolution controlled by the Track menu.

The fine-tuning workflow adds a step as follows:


1. Run Auto-tracker on the Summary panel
2. Click the Fine-tune trackers item on the Track menu.
3. Check the parameters on the fine-tune panel, then hit Run.
4. Go to the Solver panel and click Go! to solve the shot.
You can turn on the Fine-tune during auto-track checkbox on the Fine-
tune trackers dialog or summary panel to have fine tuning done during auto-
tracking. Or, you can do an automatic track and solve, then decide to fine-tune
and refine the solve later: the work-flow is up to you.

Controlling Fine-Tuning
When you fine-tune, SynthEyes will modify each auto-tracker so that there
is only one key every 8 frames (by default), then run the supervised tracker at all
the intermediate frames.
There are several options you can control when starting fine-tuning:
The spacing between keys
The size of the trackers
The aspect ratio of the trackers (usually square, 1.0)
The horizontal and vertical search sizes
The shot's current supervised-tracker filter interpolation mode.
Whether all auto trackers will be tuned, or those that are currently
selected (whether they are automatic, or a previously-unlocked
automatic tracker, which would not otherwise be processed).
Whether you want the trackers to remain auto-trackers, or be changed to be considered gold supervised trackers.
You should carefully set these parameters based on your experience at
supervised tracking and some test tracks. Very static and slowly changing shots
can use a larger spacing between keys; more dynamic shots, say with twists or
lighting changes, should use closer-together keys.
Since the supervised tracking will be starting from the known location of
the automatic tracker, the search size can be relatively small.
Note that if you leave the trackers as auto-trackers, then later convert
them to gold, the search size will be reset to a default value at that time. That is
not a significant issue; keeping them as automatic trackers is recommended.

Usage Suggestions
The fine-tuning process is not necessary on all shots. The automatic
tracker produces excellent results, and does some supervision of its own. Fine-
tuning may produce results that are indistinguishable from the original, or even a
little worse! Shots with a slow camera motion may deserve special attention.
You can do a quick test by selecting and fine-tuning a single tracker, then
comparing its track (using tracker trails) before and after fine-tuning using Undo
and Redo. (See the online tutorial.) If the fine-tuning is beneficial, then fine-tune
the remaining trackers.
After fine-tuning, be sure to check the tracker graphs in the graph editor
and look for isolated spikes. Occasional spikes are typical when a tracker is in a
region with a lot of repeating fine detail, such as a picket fence.
Keep in mind that though fine-tuning can help give you a very smooth
track, often there are other factors at play as well, especially film grain,
compression artifacts, or interlacing.

Pre-Solve Tracker Checking
When you are doing supervised tracking, you should check on the
trackers periodically before starting to solve the shot, to verify that you have
sufficient trackers distributed throughout the shot.
You can also check on the trackers after automatic tracking, before
beginning to solve the shot. (On simpler shots you can have the automatic
tracker proceed directly from tracking to solving.)
This section describes ways to examine your trackers before solving. It
introduces the SynthEyes Graph Editor. After solving, other techniques and
tools are available, including the tracker clean-up tool.

Tip: automatic tracker tooltips have gray backgrounds; supervised trackers have gold backgrounds.

The SimulTrack view can also be helpful for checking the trackers: select some or all of the trackers and scrub or play through the shot carefully. If you are working on a stereo project, open two SimulTrack windows simultaneously, and turn on the Stereo Spouses item on one of them, to be able to see both sides of matching stereo trackers simultaneously.

Checking the Tracker Trails


The following procedure has proven to be a good way to quickly identify
problematic trackers and situations, such as frames with too few useful trackers.
1. Go to the camera view
2. Turn off View/Show Image on the main or right-click menu.
3. Scrub through the shot using the time bar. Look for
1. regions of the image without many trackers,
2. sections of the shot where the entire image does not have many
trackers,
3. trackers moving the wrong way from the others.
4. Turn on View/Show tracker trails on the main or right-click menu.
5. Scrub through the shot using the time bar. Look for
funny hooks at the beginning or end of a track, especially at the
edges of the image,
zig-zag discontinuities in the trails.
Your mind is good at analyzing motion paths without the images; perhaps even better, because it is not distracted by the images themselves. This
process is helpful in determining the nature of problematic shots, such as shots
with low perspective, shots that unexpectedly have been shot on a tripod, tripod
shots with some dolly or sway, and shots that have a little zoom somewhere in the middle. Despite the best efforts and record-keeping of on-set supervisors,
such surprises are commonplace.

Checking Tracker Lifetimes


You can overview how many trackers are available on each frame
throughout the shot with the tracks view of the Graph Editor.
The graph editor can be a floating window, launched from the Window menu; it can be accessed via its room; or it can be included as part of other viewport configurations.

After you open the graph editor, make sure it is in the tracks view, if you've been playing earlier. If the shot is supervised tracking, click the sort order button to change from sort-alphabetic to sort-by-time. If you have resized the window, you may want to reset the horizontal scaling also.
Next click on the two buttons at lower right of the panel until they look like this, which selects squish mode, with no keys, with the tracker-count background visible (it starts out visible). The graph editor on one example shot looks like this:

Each bar corresponds to one of the trackers; Tracker4 is selected and thicker. The color-coded background indicates that the number of trackers is problematic at left in the yellow area, OK in the middle, and safe on the right.

Tip: you can get the same colored background for the timebar, to
indicate whether you have enough trackers or not even when the
graph editor is closed. Change View/Timebar background to Tracker
count. The mode at startup is controlled by a preference in the User
Interface area.

Warning: if you have many trackers and/or frames, the colored background in the graph editor or timebar can take a while to compute, reducing performance. You can turn it off in such cases.

You can configure the safe level on the preferences. Above this limit
(default 12), the background will be white (gray for the dark UI setting), but below
the safe limit, the background will be the safe color (configurable as a standard
preference), which is typically a light shade of green: the number of trackers is
OK, but not high enough to hit your desired safe limit.

This squished view gives an excellent quick look at how trackers are
distributed throughout the shot. The color coding varies for tripod-mode
shots and for shots with hold regions. Zero-weighted trackers do not count.

Hint: When the graph editor is in graph mode, you can look at a
direct graph of the number of valid trackers on each frame by turning
on the #Normal channel of the Active Trackers node.

If there are unavoidably too few trackers on some frames, you can use the
Skip Frames track on the Features Panel to proceed.
The graph editor is divided into three main areas: a hierarchy area at top
left, a canvas area at top right, and a tool area at the bottom. You can change the
width of the hierarchy area by sliding the gutter on its right. You can partially or
completely close the tool area with the toolbox at left. A minimal view is
particularly handy when the graph editor is embedded in a viewport layout.
In the hierarchy area, you can select trackers by clicking their line. You
can control-click to toggle selections, or shift-drag to select a range. The scrollbar
at left scrolls the hierarchy area.
You can also select trackers in the canvas area in squish mode, using the
same mouse operations as in the hierarchy area.
The icons next to the tracker name provide quick control over the tracker
visibility, color, lock status, and enable.

Warning: you cannot change the enable, or much else, of a tracker while it is locked!

The small green swatch shows the display color of a tracker or mesh.
Double-clicking brings up the color selection dialog so you can change the
display color. You can shift-click a color, and add all trackers of that color to the
current selection, control-click the swatch of an unselected tracker to select only
trackers of that color, or control-click the swatch on a selected tracker to unselect
the trackers of that color.
Jumping ahead, the graph editor hierarchy also shows any coordinate-
system lock settings for each tracker:
x, y, and z for the respective axis constraints;
l (lower-case L) when there is a linked tracker on the same object;
i for a linked tracker on a different object (an indirect link);
d for a distance constraint;
0 for a zero-weighted tracker;
p for a pegged tracker;
F for a tracker you specified to be far;
f for a tracker not requested to be far, but solved as far for cause.

Introduction to Tracker Graphs


The graph editor helps you find bad trackers and identify the bad portions
of their track. The graph editor has a very extensive feature set that we will begin
to overview; for full details see the graph editor reference. We won't get to the
process of how to find the worst ones until the end of the section, when you
understand the viewport.

To begin, open the graph editor and select the graphs mode.
Selecting a tracker, or exposing its contents, causes its graphs to appear.

Note: the Number Zone is typically displayed in between the hierarchy portion on the left and the graphs on the right. It shows the current value of each individual channel, but isn't shown here for clarity.

In this example, a tracker suddenly started jumping along fence posts, from pole to pole on three consecutive frames. The red curve is the horizontal U
velocity, the green is the vertical V velocity, and the purple curve is the tracker
figure-of-merit (for supervised trackers). You can see the channels listed under
Tracker15 at left. The green circles show which channels are shown; zoom,
pan, and color controls are adjacent. Double-clicking will turn on or off all the
related channels.
There are a variety of different curves available, not only for the trackers
but for other node types within SynthEyes.
The graph editor is a multi-curve editor: any number of completely
different kinds of curves can be displayed simultaneously. There is no single set
of coordinate values in the vertical direction because the zoom and pan can be
different for each kind of channel. To determine the numeric value at any particular point on a curve, put the mouse over it and the tooltip will pop up with
the set of values.
The graph editor displays curves for each node that is exposed (its
channels are displayed; Enable, U. Vel, V. Vel, etc above).
The graph editor also displays curves for all selected nodes (trackers,
cameras, or moving objects) as long as the Draw Curves for Selected Nodes button is turned on. This gives you quite a bit of quick control over what is drawn, and enables you to compare a single tracker or camera's curves to any
other tracker as you run through them all, for example.
You zoom a channel by dragging the small zoom icon. The zoom
setting is shared between all channels with the same type. For example, the U
and V velocity channels are the same type, as are the X, Y, and Z position
channels of the camera. But the U velocity and U position are different types. If
you click on the small Zoom icon, the other zoom icons of the same type will
flash.
The zoom setting is also shared between nodes of the same type:
zooming or panning on one tracker affects the other trackers too. All related
channels will zoom also, so that the channels remain comparable to one another.
This saves time and helps prevent some incorrect thought patterns.
The pan setting is also shared between nodes, but not between
channels: the U velocity and V velocity can be separated out. When you pan,
you'll see a horizontal line that is the zero level of the channel. It will snap
slightly to horizontal grid lines, making it easier to make several different curves
line up to the same location. You can later check on the zero level by tapping the
zoom or pan icons.
There are two kinds of auto-zooms, activated by double-clicking the zoom
or pan icons. The zoom double-click auto-zooms, but makes all channels of the
same type have the same zero level. The pan double-click auto-zooms, but pans
the channels individually. As a result, the zoom double-click keeps the data
more organized and easier to follow, but the pan double-click allows for a higher
zoom factor, because the zero levels can be different.
For example, consider zooming an X position that runs 0 to 1, and a Y
position that runs 10 to 12.
If we pan double-click, the X curve will run full-screen from 0 to 2, and Y
will run full-screen from 10 to 12. Note that X is not 0 to 1, because it must have
the same zoom factor as Y. X will only occupy the bottom half of the screen.
If we zoom double-click, X will run from 0 to 12 full screen, and Y will run
from 0 to 12 full screen. The range and zero locations of both curves will be the
same, and well be better-able to see the relationship between the two curves.
But if we want to see details, the pan-double-click is a better choice.

There is no option to have X run 0 to 1 and Y run 10 to 12, by design.
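
In numbers, the two auto-zoom choices for that example work out as shown in this small sketch of the arithmetic (just an illustration of the display ranges described above, not SynthEyes code):

    # Channel data ranges from the example: X spans 0..1, Y spans 10..12.
    channels = {"X": (0.0, 1.0), "Y": (10.0, 12.0)}

    # Pan double-click: shared zoom factor (largest span wins), individual pan.
    span = max(hi - lo for lo, hi in channels.values())                 # 2.0
    pan_view = {n: (lo, lo + span) for n, (lo, hi) in channels.items()}
    print(pan_view)    # X shows 0..2 (its data fills only the lower half), Y shows 10..12

    # Zoom double-click: shared zoom AND a shared zero level for the whole type.
    low  = min(lo for lo, hi in channels.values())                      # 0.0
    high = max(hi for lo, hi in channels.values())                      # 12.0
    zoom_view = {n: (low, high) for n in channels}
    print(zoom_view)   # both X and Y show 0..12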


Both zoom and pan settings can be reset by right-clicking on the
respective icons.

Interpreting Figure of Merit

In this example, two trackers have been supervised-tracked with a Key Every setting of 20 frames (but starting at different frames). The tracker Figure of
Merit (FOM) curve measures the amount of difference between the tracker's
reference pattern and what is found in the image. You see it drop down to zero
each time there is a key, because then the reference and image are the same.
One tracker has a small FOM value that stays mostly constant. The other
tracker has a much larger FOM, and in part of the shot it is much larger. In a
supervised shot, the reason for that should be investigated.
You can use this curve to help decide how often to place a key
automatically. The 20 frame value shown above is plenty for those features. If
you see the following, you should reduce the spacing between keys.

You'll also be able to see the effect of the Key Smooth setting: the key
smoothing will flatten out a steadily increasing curve into a gently rounded hump,
which will reduce spikes in the final camera path.

Velocity Spikes
Here's an example of a velocity curve from the graph editor:

At frame 217, the tracker jumped about 3 pixels right, to a very similar
feature. At frame 218, it jumped back, resulting in the distinctive sawtooth pattern
the U velocity curve exhibits. If left as-is, this spike will result in a small glitch in
the camera path on frame 217.

You can repair it using the Tracker control panel in the main user
interface by going to frame 217. Jiggle back and forth a few frames with the S
and D keys to see what's happening, then unlock the tracker and drop down
a new key or two. Step around to re-track the surrounding frames with the new
keys (or rewind and play through the entire sequence, which is most reliable).
DeGlitch Mode

You can also repair the glitch by switching to the Deglitch mode of the
graph editor, then clicking on the first (positive) peak of the U velocity at frame
217. SynthEyes will compute a new tracker location that is the average of the
prior and following locations. For most shots, this will eliminate the spike.
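
In effect, the repair just replaces the glitched sample with the average of its neighbors; a one-line sketch with hypothetical positions (not the actual tool):

    # Tracker U positions around the glitch at frame 217 (hypothetical values).
    u = {216: 0.5000, 217: 0.5047, 218: 0.5002}
    u[217] = (u[216] + u[218]) / 2.0    # deglitched: average of prior and following frames
    print(u[217])                       # 0.5001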
If you see a velocity spike in one direction only, it will be more difficult to
correct: it means that the tracker has jumped to a nearby feature, and not come
back. You will have to put it back in its correct location and then play (track)
through the rest of the shot.
The deglitch tool can also chop off the first or last frame of a tracker, which
can be affected when an object moves in front, or a feature is moving offscreen.

Even if the last two or three frames are bad, you can click a few times and
quickly chop them off.
Finding Spikes Before Solving
Learn to recognize these velocity spikes directly. There are double spikes
when a tracker jumps off course and returns, single spikes when it jumps off
course to a similar feature and stays there, large sawtooth areas where it is
chattering between near-identical features (or needs a new position key for
reference), or big takeoff ramps where it gets lost and heads off into featureless
territory.
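
If you like to pre-screen trackers numerically, the double-spike signature is easy to flag: two consecutive velocity samples that are both large and of opposite sign. The sketch below is just an illustration; the threshold and velocity values are invented, and this is not a built-in SynthEyes function.

    def find_double_spikes(u_vel, threshold=0.003):
        """Return indices where a large jump is immediately followed by a large
        jump back the other way (the sawtooth / double-spike pattern)."""
        hits = []
        for f in range(1, len(u_vel)):
            a, b = u_vel[f - 1], u_vel[f]
            if abs(a) > threshold and abs(b) > threshold and a * b < 0:
                hits.append(f)
        return hits

    # Hypothetical per-frame U velocities with a jump-and-return at indices 4-5.
    u_vel = [0.0002, 0.0001, 0.0003, 0.0001, 0.0045, -0.0044, 0.0002, 0.0001]
    print(find_double_spikes(u_vel))    # [5]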

To help find these issues, the graph editor features the Isolate mode.
Left-click it to turn it on, then right-click it to select all the trackers (it does not
have to be on for right-clicking to work).
With all the trackers selected, you will usually see a common pattern for
most of the trackers, plus a few spots where things stick out. If you click the
mouse over the spikes that stick out, that tracker will be selected for further
investigation. You can push the left button and keep it down and move around
investigating different curves, before releasing it to select a particular one. It can
be quicker to delete extra automatic trackers, rather than repairing them.
After repairing each tracker, you can right-click the isolate button again,
and look for more. With two monitors, you can put the graph editor on one, and
the camera view on another. With only one monitor, it may be easiest to operate
the graph editor from the Camera & Graphs viewport configuration. Once you are
done, do a refine-mode solving cycle.

Hint: You can stay in Deglitch mode, and temporarily isolate by holding down the control key. This gives a quick workflow for finding and repairing glitches.

Nodal Tripod-Mode Shots
In a tripod-mode shot (also known as a nodal pan in technical terms), the
camera pans, tilts, rolls, perhaps zooms, but does not translate. No 3-D range
information can be extracted. That is both a limitation and a benefit: without a
depth, these shots are the domain of traditional 2-D compositing, amenable to a
variety of tricks and gimmicks such as the "Honey, I Shrunk the Kids" effect.
In the "old days," effects shots used a camera on a tripod so that elements
could be composited in without the need for 3D analysis: at most a simple four-
corner pinning. No true 3D information was available or necessary.
Now, although SynthEyes is designed to handle shots where the camera
translates, producing full 3D information, shots still come in that only have pan,
tilt, roll, and zoom... and of course we still need to do a 3D insert into them.
You should always be on the lookout for tripod shots, even when you are
told "was shot on a crane," "was shot hand-held", etc. Unless the camera
translates, there's no 3D. If everything is far away on the horizon, your 6 feet of
camera motion doesn't amount to much either.
Fortunately you can easily learn to recognize these shots. It can be
simpler to learn if you first run an auto-track, then you turn off the image display
in the camera viewport, and just watch the movement of the trackersthat's all
you need, and all SynthEyes "sees." If the trackers all move together as a unit, it
is a tripod shot. If they move around differently, it is not (discount any on actors
or other moving objects; they should be deleted).
SynthEyes solves tripod shots for (only) the pan, tilt, and roll (optionally
zoom) using the Tripod solving mode. And it helps you orient the tripod-solve
scene into a 3-D workspace.
In a tripod shot, no distance (range) information can be determined, so all
tripod-shot trackers are automatically tagged as Far, meaning that they are
directions in space (like a directional light), not a point in space (which
corresponds to an omni light). For the purposes of display in the 3D viewports
and perspective view, Far points are located at a fixed distance from the camera
(forming the surface of a sphere if there are many).
Far points can also be generated from normal 3D solves, if a point is
determined to be far from the camera. No matter their origin, if you look in the 3D
viewports and see some points that are moving along with the camera, they are
either Far points, or you have set up a "moving object" in the shot to which the
tracker is attached.
Tripod shots are solved using Tripod Mode on the Solver Panel. Once you
have a tripod solve, you can add 3D elements into it very easily, and they will all
stick. That's the good news. The bad news is that the trackers can not help tell
you where to position your elements in 3D, you'll have to position them based on your own best guess. If you have a pre-existing set model, you can use
alignment lines to help determine the camera placement.

Introducing Holds
Some shots are more complex, however: they contain both sections
where the camera translates substantially, and where the camera pans
substantially without translation. For example, the camera dollies down a track,
looking to the left, reaches the end of the track, spins 180 degrees, then returns
down the track while looking to the right.
Such a shot is complex because none of the trackers visible in the first
section of the shot are visible in the third portion. During the second panning-
tripod portion, all the trackers must be Far and can have no depths because the
camera never translates during their lifetime. Taken literally, and of course we're
talking computers here, mathematically there is no way for SynthEyes to tell what
happened between the first and third sections of the shot: the camera could
have translated from here to Mars during the second section, and since the Far
points are infinitely far away, the tracking data would be the same.
Instead, we need to tell SynthEyes the camera is not translating during
the second section of the shot. We call this a hold, and there is a Hold button
for this on the Summary and Solver control panels. By animating the Hold
button, you can tell SynthEyes in which range(s) of frames the camera is
panning but not translating. SynthEyes calculates a single XYZ camera position
for each section of frames where the hold button is continuously on, though it
continues to calculate separate pan, tilt, and roll (and optionally zoom) for each
frame. (Note: you do not have to set up a hold region if the camera comes to a
stop and pans, but only a little, so that most of the trackers are visible both before
and after the hold region. That can still be handled as a regular shot.)
The Hold button can be animated on and off in any pattern: off-on-off as
above; off-on, a shot with a final panning section; on-off, a shot with an initial
panning section followed by a translation; on-off-on, a pan at each end of a dolly;
off-on-off-on, a translation, a pan, another translation, and a final pan; etc. There
is no requirement on what happens during each pan or each translate, they can
all be different. In effect, you are building a path with beads in it, where each
bead is a panning hold section.

Warning: at present, the Hold processing system is not able to completely solve hold sequences that include two or more translating sections. Use the Splice Paths script instead.

Preparing Trackers for Holds


It is crucial to maintain a smooth 3-D path in and out of a hold region; you do not want a big jump. To achieve this requires careful control over whether trackers are far or not. The operations and discussions that follow rely heavily on the graph editor's tracks view of the world.


To begin with, a tracker must be configured as Far if the camera does not
translate within its lifetime (ie the tracker's lifetime is contained within a hold
region). A tracker with a lifetime solely outside the hold region will generally not
be far (unless it is in fact far, such as a point out on the horizon).
Trackers that exist both inside and outside the hold region present some
more interesting questions, yet they are common, since the auto-tracker rightfully
does not care about the camera motion; it is only concerned with tracking image
features.
If non-far trackers continue into a hold region, they will inevitably cause
the best XYZ position of the hold region to separate from the last XYZ position
before the start of the hold region. The additional tracking information will not
exactly match the prior data, and frequently the hold region contains a rapid pan
that tends to bias the tracking data (including a rolling shutter in the camera). A
jump in the path will result.
To prevent this, SynthEyes only pays attention to non-far trackers during a
short transition region (see the Transition Frms. setting on the Solver panel).
Inside the transitions at each end of a hold region, non-Far trackers are ignored;
their weight is zero to the solve. This ensures that the path is smooth in and out
of the hold region.
This causes an apparent problem: if you take an auto-tracked shot, turn
on the hold button, then inside the hold region, there will be no operating trackers
(and the Lifetimes panel will show those frames as reddish). There are no far
trackers, and no usable tracks in there! Your first instinct may be that SynthEyes
should treat the trackers as normal outside the hold region, and as far inside the
hold region: an instinct that is simple, understandable, and mathematically
impossible.
It turns out that the non-far and far versions of a tracker must be solved for
separately, and that the sensible approach is to split trackers cleverly into two
portions: a non-far portion, and one or more far portions. The lifetimes of the
trackers are manipulated to smoothly transition in and out of the hold region, and
smooth paths result.

Hold Tracker Preparation Tool


To easily configure the trackers appropriately, SynthEyes offers the Hold
Region Tracker Preparation Tool, opened from the Windows menu. This tool
gives you complete control over the process if you want, but will also run
automatically with default settings if you start automatic tracking from the
Summary Panel after having set up one or more hold regions.
The tool operates in a variety of modes. Far trackers are not affected.

The default Clone to Far mode takes each tracker and makes a clone. It
changes the clone to be a Far tracker, with a lifetime that covers the hold region,
plus a selectable number of frames before and after the hold region (Far
Overlap). The overlap can help maintain a stable camera pointing direction into
and out of the hold region, but you need to adjust this value based on how much
the camera is moving before and after the hold region. If it is moving rapidly,
keep Far Overlap at zero.
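
The lifetime arithmetic for such a clone is simply the hold region widened by the overlap; a tiny sketch with hypothetical frame numbers:

    # Hold region frames and the Far Overlap setting (hypothetical values).
    hold_start, hold_end = 120, 180
    far_overlap = 4
    clone_start, clone_end = hold_start - far_overlap, hold_end + far_overlap
    print(clone_start, clone_end)    # 116 184: lifetime of the cloned Far tracker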
If the Combine checkbox is off, it makes a new tracker for each hold
region; if Combine is on, the same Far tracker covers all hold regions. For typical
situations, we recommend keeping Combine off.
The Clone to Far mode will cover the holes in coverage. The original
trackers will continue to appear active throughout the hold region. If you find this
confusing, you can run a Truncate operation from the Preparation tool: it will turn
off the trackers during the hold region. However, this will make it more difficult if
you later decide to change the hold region.
The Hold Preparation tool can also change trackers to Far with Make Far
(all of them, though usually you should tell it to do only some selected ones). It
will change them to Far, and shut them down outside the hold region (past the
specified overlap).
The Clone to Far operation creates many new trackers. If you already
have plenty, you may wish to use the Convert Some option. It will convert a
specified percentage of the trackers to Far (tightening up their range), and leave
the rest untouched. This will often give you adequate coverage at little cost,
though Clone is safer.

Usage Hints
You should play with the Hold Preparation tool a bit, setting up a few fake
hold regions, so you can see what the different modes do. The Undo button on
the Hold Preparation Tool is there for a reason! It will be easier to see what is
happening if you select a single tracker and switch to Selected mode, instead of
changing all the trackers.
After running the Hold Preparation operation (Apply button), you may want
to switch to the Sort by Time option in the graph editor.
If you need to change the hold region late in your workflow, it is helpful if
the entire tracking data is still available. If you have run a Truncate, the tracking
data for the interior of the hold regions will be gone and have to be re-tracked.
For that reason, the Truncate operation should be used sparingly, perhaps only
when first learning.
If you have done some tracker preparation, then other things, then need to
redo the preparation, use the Select By Type item on the Script menu to select
the Far trackers, then delete them. Make sure not to delete any Far trackers you
have created specially.

If you look back to the initial description of the hold feature, you will see
that the camera motion during a time of Far trackers is arbitrary: it could be to
Mars and back. We introduced the hold only as a useful and practical
interpretation of what likely happened during that time.
Sometimes, you will discover that this assumption was wrong, that during
that big pan, the camera was moving. It might be a bump, or a shift, etc. After
you have solved the shot with Holds, you can sequentially convert the holds to
camera locks, hand-animating whatever motion you believed took place during
the hold. You should do this late in the tracking process, because it requires you
to lock in particular coordinates during each motion. The key difference between
holds and locks is this: a hold says that the camera was stationary at
some coordinates still to be determined, while the lock will force you to declare
exactly which coordinates those are.
You may also need to use camera or tracker locks if you have exact
knowledge of the relationship between different sections of the path. For
example, if the camera traveled down a track, spun 90 degrees, then raised
directly vertically, the motion down the track and vertically are unlikely to be
exactly perpendicular. You can use the locks to achieve the desired result,
though the details will vary with the situation.
The Hold Tracker Preparation Tool presents plenty of options, and it is
important to know what the whole issue is about. But, in practice the setup tool is
a snap to use and can be run automatically without your intervention if you set up
the hold region(s) before auto-tracking. You can also adjust the Hold Tracker
Preparation tool settings at that time, before tracking. The settings are saved in
the file for batch processing or later examination.

Lenses and Distortion
Match-moving aims to electronically replicate what happened during a
live-action film or video shoot. Not only the camera motion must be determined,
but the field of view of the shot as well. This process requires certain
assumptions about what the camera is doing: at its simplest, that light travels in a
straight line, but also a number of nitty-gritty little details about the camera:
whether there is distortion in the lens, how big the film or sensor is, whether it is
centered on the axis of the lens, the timing of when pixels are imaged, and many
more smaller issues. It is very easy to take these for granted, and under ideal
conditions they can be ignored. But often, they will contribute small systematic
errors, or even cause outright failure. When the problems become larger, it is
important to recognize them and be able to fix them, and SynthEyes provides the
tools to help do that.
You should always be on the lookout for lens distortion, and be ready to
correct it. Most zoom lenses will exhibit very substantial distortion when set to
their widest field of view (shortest focal length).
Similarly, you should be on the lookout for de-centering errors, especially
on long traveling shots and on shots with substantial distortion that you are
correcting.

Focal Length vs Field Of View


Since cameras are involved, customers are often intensely concerned with focal length. They write values down and try to decide whether a value from SynthEyes is correct or not.

Important: A focal length value is useless 99% of the time unless you also know the plate width of the image (typically in millimeters, to
the hundredth of a millimeter). Unfortunately, this value is rarely
available at all, let alone at a sufficient degree of accuracy. It takes a
careful calibration of the camera and lens to get an accurate value.
Sometimes an estimate can be better than nothing; read on.

SynthEyes uses the field of view value (FOV) internally, which does not
depend on plate size. It provides a focal length only for illustrative purposes. Set
the (back plate) film width using the Shot Settings dialog. Do not obsess over the
exact values for focal length, because finding the exact back plate width is like
trying to find the 25" on an old 25" television set. It's not going to happen. Lens
manufacturers will tell you that the rated value is only intended to be a guideline,
something like +/-5%.
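
For reference, the relationship between the two is the standard pinhole formula: the horizontal field of view is 2*atan(plate width / (2 * focal length)). The tiny sketch below shows why a focal length alone is meaningless; the plate widths used are just example values.

    import math

    def horizontal_fov_deg(focal_length_mm, plate_width_mm):
        """Horizontal field of view of an ideal pinhole camera, in degrees."""
        return math.degrees(2.0 * math.atan(plate_width_mm / (2.0 * focal_length_mm)))

    # The same 25 mm lens gives very different fields of view on different plates:
    print(horizontal_fov_deg(25.0, 36.0))    # ~71.5 degrees on a 36 mm plate
    print(horizontal_fov_deg(25.0, 24.89))   # ~52.9 degrees on a ~24.9 mm plate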

Zoom, Fixed, or Prime Lens?


During a single shot, the camera lens either zooms, or does not. Often,
even though the camera has a zoom lens, it did not zoom. You can get much
better tracking results if the camera did not zoom.

Select the Lens panel. Click


Fixed, Unknown if the camera did not zoom during the shot (even if it is a
zoom lens)
Fixed, with Estimate if the camera did not zoom during the shot, and you
have a good estimate of the camera field of view, or both the focal length
and plate width.
Zooming, Unknown if the camera did zoom
Known if the camera field of view, fixed or zooming, has been previously
determined (more on this later).
If you are unsure if the camera zoomed or not, try the fixed-lens setting
first, and switch to zoom only if warranted. Generally, if you solve a zoom shot
with the fixed-lens setting, you will be able to see the zoom's effect on the
camera path: the camera will suddenly push back or in when it seems unlikely
that the real camera made that motion. Sometimes, this may be your only clue
that the lens zoomed a little bit.

Important: Never use Known mode solely because someone wrote down the lens setting during shooting. Like the turn-signal of an
oncoming car, it is only a guess, not something you can count on. Do
not set a Known focal length unless it is truly necessary.

You may have the scribbled lens focal length from on-set production. If
you also know the plate size, you can use the Fixed, with Estimate setting to
speed up the beginning of solving a bit, and sometimes to help prevent spurious
incorrect solutions if the tracking data is marginal. The mode is also useful when
you are solving several shots in a row that have the same lens setting: you can
use the field of view value without worrying about plate size. In either case, you
should rewind to the beginning of the shot and either reset any existing solution,
or select View/Show Seed Path, then set the lens field of view or focal length to
the correct estimated value. SynthEyes will compute a more accurate value
during solving.
It can be worthwhile to use an estimated lens setting as a known lens
setting when the shot has very little perspective to begin with, as it will be difficult
to determine the exact lens setting. This is especially true of object-mode
tracking when the objects are small. The Known lens mode lets you animate the
field of view to accommodate a known, zooming lens, though this will be rare. For
the more common case where the lens value is fixed, be sure to rewind to the
beginning of the shot, so that your lens FOV key applies to the entire shot.
When a zoom occurs only for a portion of a shot, you may wish to use the
Filter Lens F.O.V. script to flatten out the field of view during the non-zooming
portions, then lock it. This eliminates zoom/translation coupling that causes
noisier camera paths for zoom shots. See the online tutorial for more details. You
can also set up animated filter controls using the post-solve filtering to selectively
filter more during the stationary non-zooming portion.

Introduction to Lens Distortion


SynthEyes has two main places to deal with distortion: early, before
tracking, in the image preparation subsystem; and later, during solving. Each
approach has its own pros and cons.
The early approach, in image prep, is controlled from the Lens tab of the
image preparation dialog. It lets you set distortion coefficients to remove the
distortion from the source imagery (or later add it). The distortion is removed
before it reaches the trackers, the camera view, and the perspective view. But
you must already know the coefficients, or fiddle to find them.
The image preprocessor can also accept lens presets, if you have
calibrated the lens or obtained a preset elsewhere. The presets can specify fish-
eye lenses or other complex distortion patterns.
The late approach to lens distortion, controlled by the main Lens panel,
allows the solving engine to (optionally) determine a most likely distortion value.
This approach uses only a single distortion parameter that is appropriate only for
minor distortion, not severe moustache distortion or fisheye lenses. The imagery
you see in the camera view will be the distorted (original source) images, with the
tracker locations adjusted to match up in the camera view, but not the
perspective view. Usually you are going to want to produce some undistorted
footage once you determine the distortion, at the least for temporary use.

Determining Distortion With Check Lines


If your scene has long, straight lines, check to see if they are truly straight
in the image: click Add Line at the bottom of the Lens panel and draw an
alignment line along the line in the image (select No Alignment). If the lines
match all along their length, the image is not distorted.
If the image is distorted, you can adjust the lens panel's Lens Distortion
spinner until the lines do match; add several lines if possible. Create lines near
the four edges of the image, but stay away from the corners, where there is more
complex distortion. You will also see a lens distortion grid for reference
(controlled by an item on the View menu).

Calculating Distortion While Solving


If your shot lacks straight lines to use as a reference, turn on the
Calculate Distortion checkbox on the lens panel and it will be computed during
3-D solving. Usually you should solve the shot without calculating distortion
(perhaps just a guess), then turn on Calculate Distortion. When calculating
distortion, significantly more trackers will be necessary to distinguish between
distortion, zoom, and camera/object motion.

Distortion, Focal Length, and the Field of View


When using the image preprocessor to correct distortion, you can adjust
the scale so that the undistorted image is exactly the full width of the frame, if you would like to be able to compare a SynthEyes focal length to an on-set focal length. Note that this will not affect the match itself.

Cubic and Quartic Distortion Correction


The basic distortion coefficient on the Lens panel and image
preprocessor's Lens tab can encompass a moderate amount of distortion.
However, with wider aspect ratios such as 16:9 and 2.35, higher-order (more complex)
distortion becomes significant, especially in the corners of the image. If you shoot
a lens distortion grid (see the web site) and correct the distortion at the top
middle and bottom middle of the image, you might see that the corners are not
corrected due to the more complex distortion.
The image preprocessor has two additional parameters that you can use
to tweak the corners into place, after fixing the basic distortion: cubic (x to the
3rd) and quartic (x to the 4th) distortions. The quartic affects the corners more
strongly than the cubic. You may have to go back and forth between the
parameters a few times to get a good match. You can start with the quadratic
and quartic, or work your way from quadratic to cubic to quartic. The cubic
parameter will usually have the opposite sign of the main distortion (ie one is
positive, the other negative), and the quartic yet the opposite sign.
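
For intuition, distortion of this kind is commonly modeled as a polynomial in the radius r from the optic center, roughly r' = r + k2*r^2 + k3*r^3 + k4*r^4. The sketch below uses that generic form purely as an illustration; the exact coefficient conventions, normalization, and signs that SynthEyes uses may differ.

    def distort_radius(r, k2=0.0, k3=0.0, k4=0.0):
        """One generic radial model: quadratic (basic), cubic, and quartic terms
        added to the undistorted radius r (e.g. r in 0..~1 from the optic center)."""
        return r + k2 * r**2 + k3 * r**3 + k4 * r**4

    # As the text suggests, the cubic often takes the opposite sign of the main
    # term, and the quartic the opposite sign again (all coefficients hypothetical).
    for r in (0.25, 0.5, 0.75, 1.0):
        print(r, round(distort_radius(r, k2=0.08, k3=-0.05, k4=0.02), 4))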

Lens Distortion Profiles


SynthEyes can use stored information about known, pre-calibrated lenses
from special files with the file extension .lni (lens information), or from "image
map" files that look like colorful gradients. These can be stored in the Lens sub-
folder of the scripts folder, for frequently used profiles such as for a camera you
own. There are two preset folders, a system set and a user-specific set. Lens
presets can also be read directly from a specified preset file anywhere in the file
system, for example located with the image files for project-specific profiles.

Note: Having a large number of lens distortion presets will increase SynthEyes's startup time. One-time-use distortion files and maps
should be stored with the source imagery, rather than cluttering up the
presets.

The .lni lens information files contain a table of values mapping from the
correct radius of any image point to the distorted radius. These tables can be
generated by small scripts, including a default fish-eye lens generator (which has
already been run to produce the two default fisheye lens files), and a polynomial
generator, which accepts coefficients from Zeiss for their Ultra-Prime lens series.
These distortion maps can be either relative, where the distortion is
independent of the physical image size, or absolute, where the distortion is
described in terms of millimeters. The relative files are more useful for
camcorders with built-in lenses, the absolute files more useful for cameras with
removable prime lenses.
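
Conceptually, applying such a table is just a lookup with interpolation between its rows. A minimal sketch with made-up table values; the real .lni format, units, and interpolation details are SynthEyes-specific:

    import numpy as np

    # Table rows map the correct (undistorted) radius to the distorted radius.
    correct_r   = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
    distorted_r = np.array([0.00, 0.26, 0.53, 0.82, 1.15])

    def to_distorted(r):
        """Look up the distorted radius for a correct radius, interpolating linearly."""
        return float(np.interp(r, correct_r, distorted_r))

    print(to_distorted(0.6))    # falls between the 0.50 and 0.75 table rows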

The absolute files require an accurate back-plate width in order to use the
table at all. Do not expect the lens calibration table to supply the value, because
the lens (ie a detachable prime lens) can be used with many different cameras!
For assembled camcorders, typically with relative files, the lens file can
supply optional nominal back-plate width and field of view values, displayed
immediately below the lens selection drop-down on the image preprocessor's
lens tab. You can apply those values as you see fit.
When you select or open a preset, some additional information about the
setup may be loaded into the image preprocessor as well, especially the required
padding values.
If you change an .lni file (by re-writing it with a script, for example), you
should hit the Reload button on the Lens tab, while that lens file is selected. If
you add new files, or update several, use Find New Scripts on the main File
menu.

(What About 2-D Grid Corrections?)


When you are correcting distortion on a shot, you may see an asymmetry,
where there is more distortion on one side of the shot, and less in another. If you
correct one side, the other goes out of whack.
You might think or hear about grid-based distortion correction, to rubber-
sheet morph the different parts of the image individually into their correct places.
This seems a simple approach to the problem, and it is, sort of! But it is WRONG.
The actual cause is de-centering: you are correcting the distortion using
the wrong optic center, which results in an apparent asymmetry. If you use a
grid-type correction, you will likely fix the images, but not the imaging geometry,
and the entire match-move will come out wrong. When you fix the centering, the
distortion will go away properly without the need for an asymmetric grid-based
correction, and the match-move will come out right in the end. SynthEyes fixes
de-centering by padding the image, as described in the section on that topic.

Accurate Presets from Lens Grids

MOVED! There's now much more information and new tools in the
Camera Calibration Manual, found from the Help Menu in SynthEyes.
The existing material is still a good overview and usable, so it is
retained here for reference.

If you are able to shoot a specific lens calibration grid immediately before
or after principal photography, SynthEyes offers a tool to generate accurate lens
presets automatically.
We have a special 36"x27" lens calibration grid that will make this easy.


Here's the process in detail.


1. Print out the 36"x27" automatic lens distortion grid from our
website. Be sure to use matte paper. Do not attempt to use a letter-
size version: the camera will have to be too close or zoomed in,
either of which will change the distortion. Mount on a flat surface -
you should take significant care that it is smoothed flat and pulled
tight, or laminate it to a piece of foam-core or equivalent.
2. Shoot a short shot of the lens distortion grid during principal
photography - some 3-5 seconds with the camera aimed at the
center of the grid, but intentionally circling around the center, so
that the camera aim goes over to the next dot in the grid pattern
instead. The camera should be close enough that the shot is "all
grid" with grid extending all the way to the edge of the frame and
beyond, with none of the text or boundary visible. The movement
should be slow, to avoid motion blur and rolling shutter distortion.
Important: your calibration will apply only for that particular zoom
setting on the lens. You should calibrate for each setting you use.
The center reference circle is for your reference during shooting
only and does not have any significance during later processing.
3. In SynthEyes, click Shot/Create Lens Grid Trackers to create an
array of supervised symmetric trackers, one for each dot. Verify the
created trackers, deleting any non-grid trackers, and creating any
missing ones in the interior area of the grid. (It is possible to do this
manually also. If the grid pattern is not sufficiently regular, you can
assign grid coordinates manually by giving every tracker specific
names that give the grid coordinates x,y, ie 3,5, 2,6, etc.)
4. Track them through the entire shot. You may need to increase the
search area, depending on the amount of motion, though if you do,
the motion is probably too rapid. Be sure to watch the trackers in
the four corners, which are the ones most prone to problems,
especially with fisheye lenses. You can edit trackers to turn them
back on if they have gone off-screen, then reappeared, but
generally that should not be necessary. If there are some frames
that are motion-blurred too much, click Skip Frame on the Features
panel for each, to cause those frames to be ignored.
5. After tracking, select all the trackers (control/command-A) and then Lock them.
6. Rewind to the beginning of the shot, or whichever frame has the
most trackers visible.
7. Click Shot/Process Lens Grid to start the analysis of the data from
the trackers.
8. Answer a question about whether you want the one-pass or two-
pass workflow. The one-pass flow produces compact normal-
looking images that can be delivered as-is; the more complex two-
pass flow lets you ultimately composite with and deliver the original
distorted imagery.
9. Answer a question about whether you want a full high-order
analysis (which will create a lens preset), or a simpler analysis
using only the quadratic and cubic terms on the image pre-
processor's lens distortion panel. If the lens has little distortion, start
with the simpler analysis.
10. Answer a question about whether or not you want to calculate the
lens's optic center. You should answer YES most of the time,
answering NO only when there is very little distortion, or the lens
has primarily pin-cushion distortion (which the centering calculation
is not designed for).
11. If you want high-order analysis, it will be stored in a lens preset.
You will need to give SynthEyes a lens preset name and file name
in which to store it - the file must be in your script storage area,
typically within the LensGrid folder (not in the application's script
area, which users do not have access to in modern operating
systems).
12. Processing may take some substantial time for the full analysis,
especially if there are many frames and you do not have a desktop
machine with a substantial number of cores. (Note: if the solving
process encounters problems, it will take even longer. You can
cancel it, though it will also take some time to reach a cancellable
point.)
13. When complete, SynthEyes will display an RMS error number
indicating the error level obtained, and the computed optic center
position. The error will likely be over 1 pixel for wide angle lenses,
under 1 pixel for normal lenses.
For a small amount of distortion, and a one-pass workflow, the simple analysis
will often be fine. For more complex distortion (wider-angle lenses) or if you
need to do the two-pass workflow, you will probably want the full high-order
analysis.
If the lens is very wide angle, the two-pass undistorted image may have
very long tails in the corner, if the corners of the sensor reach or exceed the
vanishing circle of the lens (where the image is entirely black outside that circle).
This is a consequence of trying to image a sphere with a flat image plane.
You may have to switch to the one-pass approach instead.
After the lens distortion analysis is complete, you should save the SNI
file - it contains the results, which include not only the lens settings (on the image
pre-processor's lens tab) and the newly-created lens preset (if requested), but also
cropping values on the Cropping tab, a zoom value on the Adjust tab, and new
resolution values on the Output tab.
The easiest way to use your newly-created lens calibration SNI file is to do
a Shot/Change Shot Images to the actual shot you need to undistort, then use
Output/Save Sequence to write the undistorted sequence to disk. You can then
do a File/New, open the newly-created undistorted sequence, and track away.
If you need to, you can look inside the LNI file to see a set of the extracted
parameters again, including the Cropping and scale settings, as well as the RMS
error value.

Match-moving with Lens Distortion


Merely knowing the amount of lens distortion and having a successful 3-D
track is generally not sufficient, because most animation and compositing
packages are not distortion-aware, or have different distortion calculation
schemes. Similarly, if you have configured some correction for earlier image de-
centering or cropping (ie padding) using the Image Preprocessing system, your
post-tracking workflow must also reflect this.
When distortion and cropping are present, in order to maintain exactly
matching 3-D tracks, you will need to have the following things be the same for
SynthEyes and the compositing or animation package:
Undistorted shot footage, padded so the optic axis falls at the center,
An overall image aspect ratio reflecting the effects of padding and
the pixel aspect ratio,
A field of view that matches this undistorted, padded footage, or,
A focal length and back plate width that matches this footage,
3-D camera path and orientation trajectories, and
3-D tracker locations.
If a shot lines up in SynthEyes, but not in your compositing or animation
software, checking these items is your first step.
Since SynthEyes preprocesses the images, or mathematically distorts the
tracker locations, generally the down-stream software application will not receive
matching imagery unless care is taken as described below to generate matching
imagery. Some exporters will export commands to tell the downstream package
how to reproduce the distortion, for example Fusion and After Effects (with the
appropriate SynthEyes-supplied plugin installed, see the After Effects installation
section).
SynthEyes has a script, started by the "Lens Workflow" button on the
Summary or Lens panels, that simplifies workflow when dealing with distorted
imagery. It uses a simple approach to the setup. You can do whatever it does
manually or via your own (modified) script if you need to do something different.
Basic Lens Distortion Workflows
There are two fundamentally different approaches to dealing with distorted
imagery:
1) deliver undistorted imagery as the final shot (one pass)
2) deliver distorted imagery as the final shot (two pass)
Delivering undistorted imagery as a final result is almost certainly the way
to go if you are also stabilizing the shot, are working with higher-resolution film or
RED scans being down-converted for HD or SD television, or where the
distortion is a small inadvertent result of a suboptimal lens.
Delivering distorted imagery is the way to go if the distortion is the
director's original desired look, or if the pixel resolution is already marginal, and
the images must be re-sampled again as little as possible to maximize quality. It
is called two-pass because the CG footage (generated in your 3-D application,
such as Blender, Cinema 4D, Lightwave, or Maya) must be run back through
SynthEyes (or a different application) to apply the distortion to the CG imagery
before it can be composited with the original footage.
Usually you will want to have the same 8/16/float bit depth for the output
imagery as for the input. In the following discussion, you'll see that the default
settings will cause this to occur. (In some cases, you may not want it to.)

Important: if you have configured the disk cache preference to always
convert all footage to 8 bit for compact storage, that setting will make it
impossible to preserve the full depth and you must turn it off. See the
section on 8-bit vs native disk caching for information.

Delivering Undistorted Imagery


After determining the lens calibration, you will use the image
preprocessing system to produce an undistorted version of the shot.
Determine lens distortion via calibration, checklines, or solving with
Calculate Distortion.
Save then Save As to create a new version of the file.
(recommended).
Click Lens Workflow on the Summary panel (or start the
Lens/Lens Workflow script).
In most cases, ensure that the "Use input's bit depth" checkbox is
turned on: if the input is 16 bit or floating point, you will likely want
the same bit depth on output. If you have specifically set the bit
depths on the shot setup panel, turn this off. (If the bit depth
changes, the RAM cache will flush and reload automatically.)
Select the Final output option Undistorted(1) and hit OK. The
script will zoom in slightly so that there are no uncovered black
areas in the output image.
Click Save Sequence on the Summary panel or Output tab of the
image preprocessor (which lets you change resolution if you need
to). Write out a new version of the undistorted imagery.
Save then Save As to create a new version of the file.
(recommended).
On the edit menu, select Shot/Change Shot Images.
Select the Switch to saved footage option on the panel that pops
up, hit OK.
You will now be set up to work with the undistorted (fixed) footage.
If you tracked and solved initially to determine the distortion (or
without realizing it was there), the trackers and solve have been
updated to compensate for the modified footage.


You can track, solve, add effects, etc, all using the final undistorted
imagery.
If you need to do something different, or want to do more of the steps
manually, here is what is happening behind the scenes.
The Lens Workflow script performs the following actions for you: transfers
any calculated distortion from the lens panel to the image preprocessor, turns off
the distortion calculation for future solves, changes the scale adjustment on the
image prep Adjust tab to remove black pixels, selects Lanczos interpolation,
updates the tracker locations (ie Apply to Trkers on the Output tab), adjusts the
field of view, and adjusts the back plate width (so focal length will be unchanged).
When you do the Change Shot Images with the Switch to saved footage
option, SynthEyes resets the image preprocessor to do nothing: if the lens
distortion and other corrections have already been applied to the modified
images, you do not want to perform them a second time once you switch to the
already-corrected images.
The "Clear related controls" setting on the Lens Workflow panel clears the
lens distortion, scale, and lens preset on the Lens tab, and the Crop settings on
the Cropping tab (both tabs on the image preprocessor).
Delivering Distorted Imagery
In this workflow option, you create and track undistorted imagery,
generate CG effects, re-distort the effects, then composite the distorted version
back to the original imagery.
Determine lens distortion via calibration, checklines, or solving with
Calculate Distortion turned on.
Save then Save As to create a new version of the file.
(recommended).
Click Lens Workflow on the Summary panel (or start the
Lens/Lens Workflow script).
In most cases, ensure that the "Use input's bit depth" checkbox is
turned on: if the input is 16 bit or floating point, you will likely want
the same bit depth on output. If you have specifically set the bit
depths on the shot setup panel, turn this off. (If the bit depth
changes, the RAM cache will flush and reload automatically.)
Some codecs, such as Cineform, require that the image width be a
multiple of 16 pixels, otherwise they will crash SynthEyes. To
accommodate them, turn on the "Width=16x" checkbox to force the
width to be a multiple of 16.
Select the Final output option Redistorted(2) and hit OK. The
script will pad the image so that the output contains every input
pixel. The margin value will include a few extra for good measure;
adjust as desired.
Click Save Sequence on the Summary panel or Output tab of the
image preprocessor (which lets you change resolution if you need
to). Write out a new version of the undistorted imagery.
Important: Save then Save As to create a new version of the file,
call it Undistort for this discussion.
On the edit menu, select Shot/Change Shot Images.
Select the Switch to saved footage option, hit OK.
You will now be set up to work with the undistorted (fixed) footage.
If you tracked and solved initially to determine the distortion (or
without realizing it was there), the trackers and solve have been
updated to compensate for the modified footage.
Track, solve, export, work in your 3-D app, etc, using the
undistorted imagery.
Render 3-D effects from your 3-D app (which match the undistorted
imagery, not the original distorted images). You should render
against black with an alpha channel, not against the undistorted
images.
Re-open the Undistort file you saved earlier in SynthEyes.
Do a Shot/Change Shot Images, select Re-distort CGI mode on
the panel that pops up; select the rendered shot as the new footage
to change to. (This updates a number of resolution and aspect
settings to facilitate applying rather than removing distortion.)
Use Save Sequence on the Summary panel or Image
preprocessor's output tab to render a re-distorted version of the CG
effect.
Composite the re-distorted imagery with the original imagery.
Obviously this is a more complex workflow than delivering undistorted
images, but it is a consequence of the end product desired.
Undoing the Lens Workflow Script
Generally we recommend that you save a copy of the SynthEyes file
before running the Lens Workflow script, in case you need to make later changes
to the original scene and solve.
You can use the regular Undo while SynthEyes is still open. However, if
you need to revisit a file where the lens workflow has been run, and no pre-
workflow version is available, you'll need to use the "Undo earlier run" option on
the Lens Workflow script itself.


Unlike a regular undo, which literally restores early versions of various
pieces of data, the undo on the lens workflow is computational in nature: it
computes what the scene must have been before the lens workflow was run.
Accordingly, it's important that you not manually change the lens settings
before running the lens workflow script's undo: it must have access to the original
settings in order to successfully undo the original undistortion operation.
The Undo algorithm is intended for use with normal setups, with padding-
corrected centering and solver-calculated distortion. Unusual setups may require
operator assistance. The most important data - the 2D tracker data - should
always be correctly restored, as long as there were no untimely image
preprocessor changes.

Lens Distortion Interchange via Image Distortion Maps


It can be convenient to have lens un-distortion and re-distortion applied in
your favorite compositing package, rather than having to bring it through
SynthEyes (especially to distort additional non-image channels). Doing so
requires that the compositing package be able to exactly match what SynthEyes
is doing (and a SynthEyes exporter smart enough to tell it how to do so).

Tip: The same approach works for 360VR stabilization maps!

While many software packages have "lens distortion" modules,
unfortunately the technical details behind those modules are far from
standardized - probably most packages are unique! This poses a problem when
the lens distortion calculated by one program (here usually SynthEyes) must be
transmitted and used by one or more other programs.
Fortunately there is now a way to overcome this. Even better, the details
are generally handled automatically by compatible SynthEyes exporters. We
provide more details here so that you can understand what is being done, and be
able to use this technique in combination with other software as well.

Note: The After Effects exporter uses a custom SynthEyes-supplied
lens un-distortion and re-distortion plugin to handle distortion and does
not use image distortion maps; see the section on distortion in After
Effects.

While most lens distortion (and un-distortion) is described by mathematical
formulas that get increasingly complex as they get more exact (see for example
https://www.ssontech.com/content/lensalg.html), it is possible to "bake" the lens
distortion into an image distortion map (ID map), regardless of how complex the
formulas are.
If you need to, you can create these image distortion maps not only in
SynthEyes but in other software by distorting a reference image; the distorted
image has enough information to allow the distortion to be reproduced
elsewhere.
This scheme is an excellent candidate for wide use throughout the visual
effects industry, and would save everyone quite a bit of time and aggravation.
The main drawback is that these images can require significant time to
compute, and require quite a bit of storage when animated distortion is present.
The Reference Image
The reference image has two independent gradients: a horizontal gradient
in the red channel from left to right, and a vertical gradient in the green channel
from bottom to top.
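Should you want to build such a reference yourself (for example, to bake a
distortion from another tool), the gradients are easy to generate. Here is a
minimal numpy sketch, assuming the usual top-row-first image array layout and
the pixel-center convention from the technical note a little further on; it is
only an illustration, not how SynthEyes writes its maps:

    import numpy as np

    def make_reference(width, height):
        # Red ramps from 0 to 1 left to right, green from 0 to 1 bottom to
        # top, with pixel centers half a pixel in from the edges.
        x = (np.arange(width) + 0.5) / width       # red at each column center
        y = (np.arange(height) + 0.5) / height     # green at each row center
        red = np.tile(x, (height, 1))
        green = np.tile(y[::-1, None], (1, width)) # array row 0 is the top row,
                                                   # so flip to put green=0 at the bottom
        blue = np.zeros((height, width))
        return np.dstack([red, green, blue]).astype(np.float32)

    ref = make_reference(1920, 1080)
    print(ref[-1, 0, :2])   # bottom-left pixel: red=0.5/1920, green=0.5/1080
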

When the image is distorted, you get something like this (with a box
around it to make the boundary apparent):


Software can use this image to reproduce the same distortion on other
images. Each pixel in the output image is selected from the input image by using
an X coordinate determined from the red channel, and a Y coordinate determined
from the green channel. The ability to do this can often be found in existing
compositing nodes.

TECHNICAL DETAILS (for consistency with other developers). The
map's colors are defined so that red=0.00 is the left edge of column 0
of the source, red=1.00 is the right edge of column w-1 and similarly
green=0.00 is the bottom edge of row 0, and green=1.00 is top of row
h-1. To pull from the lower-left pixel of an HD image, red=0.5/1920,
green=0.5/1080. This definition is required for consistency and to
permit the use of a map at one resolution with other images. In some
apps the red and green channels must be slightly adjusted; Fusion can
be used for a quick undistort/redistort test of the difference this makes.
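As a concrete illustration of the pull operation and the convention above,
here is a rough numpy sketch of applying a map to an image. The names are
hypothetical; a real compositing node would interpolate between source pixels
and handle alpha, whereas nearest-neighbor sampling keeps this example short:

    import numpy as np

    def apply_distortion_map(src, dist_map):
        # src: source image array (H, W, C); dist_map: (H2, W2, 3) image
        # distortion map whose red/green channels follow the convention above.
        # Each output pixel is pulled from the source at the location named
        # by the map's red (x) and green (y) values.
        h, w = src.shape[:2]
        red, green = dist_map[..., 0], dist_map[..., 1]
        xs = np.clip((red * w - 0.5).round().astype(int), 0, w - 1)
        # green=0 is the bottom edge of row 0, but array row 0 is the top
        # row, so flip the vertical coordinate.
        ys = np.clip((h - 1) - (green * h - 0.5).round().astype(int), 0, h - 1)
        return src[ys, xs]

    # out = apply_distortion_map(undistorted_cg, redistort_map)
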

You can make your own reference image if you like using Photoshop... or
simply by opening a shot in SynthEyes, not setting up any lens distortion, and
writing the distortion map.
Bit Depth, Color Space, and Resolution
Some care must be taken when creating image distortion maps (or
references). Eight-bit and half-float image formats must not be used, as they
are insufficiently precise to specify an exact pixel location. Sixteen-bit images are
OK, but full floating-point images are best. (SynthEyes will only permit you to
generate 16-bit or floating images, eliminating incompatible formats.)
Likewise, it is crucial to avoid any color-space processing when image
maps are saved, and when the image map is read. (This may be impossible in


Photoshop.) Any color-space processing will create hidden distortions that may
be very difficult to detect! Be sure to select the "linear" option.
In the distorted image above, note that the output image is no longer full
frame (for illustration). The corrected image may also have tails in the corner.
Generally the output image (and map) should be larger than the original
undistorted image, so that there is no loss of effective resolution in the distortion
and undistortion process. The SynthEyes Lens Workflow script manages this.
Unmapped Pixels and Blue vs Alpha
In the example above, some pixels in the output image do not correspond
to pixels in the original undistorted image - there is nowhere to pull those pixels
from. There are several possible values to place into such pixels.
White, no alpha (R=G=B=1, no A). This is a good choice, because
no valid pixel can be white - the blue channel is zero for all valid
pixels. The image is smaller, since no alpha channel must be
stored. The alpha channel can be recreated for compositing, if
needed, as 1-b (see the sketch after this list).
Black, with alpha. (R=G=B=A=0) Alpha is 1.0 for all used pixels.
Alpha is available for compositing, but the image is larger due to
the alpha. Software must pay attention to alpha, since black pixels
could be valid or invalid depending on alpha. This option is pre-
multiplied alpha: PMA.
White, with alpha. (R=G=B=1, A=0) Combination of the above,
where used and unused pixels are easy to distinguish. Requires
alpha storage. These images are non-pre-multiplied alpha
(nonPMA), so compositing packages may need to know that.
Unclipped. OpenEXR only! Any of the above, but with no clipping
of out-of-range values. See below.
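For the first ("white, no alpha") option, rebuilding the alpha channel really
is that simple; a tiny sketch, assuming the map has been loaded as a float
numpy array:

    def alpha_from_white_noalpha(dist_map):
        # Valid pixels have blue == 0 and unmapped pixels are pure white
        # (blue == 1), so an alpha channel is simply 1 - blue.
        return 1.0 - dist_map[..., 2]
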
SynthEyes has preferences for each allowable image-map format (DPX,
EXR, PNG, SGI, TIF) to configure which of the options above will be used for that
specific image format.
The unclipped option is available only for OpenEXR images; it allows the
red and green channels to go under zero or above one. (Blue is always used as
a out-of-range marker.) With this option, compositing software that has pixels
available in overscan/margin areas can bring those pixels into the primary area.
Inverse Maps
In a two-pass lens workflow, we need two image maps: the map for un-
distorting the original footage, and a map for re-distorting CG effects to match the
original footage.
While it is possible to utilize the un-distortion map for re-distorting images,
it is much more complex and time-consuming - and less accurate and reliable -
than the very simple process of using each map in its intended direction.
Accordingly, SynthEyes always produces both un-distortion and
redistortion maps. It does that directly from the underlying mathematics to
maximize speed and accuracy. (It is possible to access an inversion routine, for
use with maps from third parties, from within a Sizzle script, see the Sizzle
manual.)
Writing Image Distortion Maps
Once you've determined the lens distortion inside SynthEyes, you can
create the forward and inverse image distortion maps using the Shot/Write
Distortion Maps menu item.
You'll be prompted for the location and name of the (un-distortion) image
map to be written. The re-distortion image is always written also; the file name is
the same, but with "Redistort" tacked onto the name. (This suffix has a
preference setting.)
You'll have to select the file type to be written, either DPX, OpenEXR,
PNG, SGI, or TIFF, all of which support either the 16-bit or floating-point format
required for image maps.
There is a preference for each file type in the Save/Export section. The
preference selects 16-bit or floating images (if both are supported), and whether
to use RGB only, RGB + non-pre-multiplied alpha, or RGB + pre-multiplied alpha.
There is also a preference that allows the outer boundary of the map to be
extended slightly to minimize potential artifacts around the edge of the resulting
image.
Exporters and Future Work
The Fusion 7 composition export uses the image distortion map system.
You should run the Lens Workflow script (accessed from the Summary panel) to
set up for a full lens distortion pipe to be built in Fusion. The full un-distortion/re-
distortion pipe is always built, just delete the re-distortion portion if you are using
the single-pass workflow.
At present, an experimental Nuke export is available, but it has issues
dealing with the different-size images produced after un-distortion. We'll need
some Nuke users to help straighten that out.
The image distortion map describes the distortion on a particular frame. It
is possible to set up animated (un)distortions using the SynthEyes image
preprocessor (potentially useful for shots with changing focus or zoom), but
animated distortions are not supported in current export scripts.
The underlying capability to support animated zooms is there, however, by
writing an image sequence instead of a single shot. This might be considered for
Fusion and/or Nuke if there is enough interest (the performance of writing these
images can be improved if sequences are regularly being written).
Note that SynthEyes can read image distortion maps produced by other
software (even manually) and use them to drive its own image preprocessing


system, in lieu of the usual distortion coefficients etc. This is configured from the
Lens tab of the image preprocessor. However, there's no support for animated
distortion maps, largely because of the complexity of efficiently managing the
reading and discarding of the maps in the heavily multi-threaded image
preprocessor.

Lens De-Centering
What we would call a lens these days - whether it is a zoom lens, prime
lens, or fisheye lens - typically consists of 7-11 individual optical lens elements.
Each of those elements has been precisely manufactured and aligned, and by
the nature of this, they are all very round in order to work properly. Then they are
stacked up in a tube, which is again very round (along with gears and other
mechanisms) to form the kind of lens we buy in the store.
The important part of this explanation is that a lens is very round and
symmetric and has a single well-defined center right down the middle. You can
picture a laser beam right down the exact center of each individual lens of the
overall lens, shooting in the front and out the back towards the sensor chip or
film.
With an ideal camera, the center beam of the lens falls exactly in the
middle of the sensor chip or film. When that happens, parallel lines converge at
infinity at the exact center of the image, and as objects get farther away, they
gravitate towards the center of the image.
While that seems obvious, in fact it is rarely true. If you center something
at the exact center of the image, then zoom in, you'll find that the center goes
sliding off to a different location!
This is a result of lens de-centering. In a video camera, de-centering
results when the sensor chip is slightly off-center. That can be a result of the
manufacturers design, but also because the sensor chip can be assembled in
slightly different positions within its mounting holes and sockets. In a film camera,
the centering (and image plate size) are determined solely by the film scanner!
So the details of the scanning process are important (and should be kept
repeatable).
De-centering creates systematic errors in the match-move when left
uncorrected. The errors will result in geometric distortion, or sliding. Most
rendering packages can not render images with a matching de-centering,
guaranteeing problems. And as in the zooming example earlier, the de-centered
lens can result in footage that doesn't look right.
It is fairly easy to determine the position of the lens center using a zoom
lens. See the de-centering tutorial on the web site. Even if you will use a prime
lens to shoot, you can use a zoom lens to locate the lens center, since the lenses
are repeatable, and the error is determined by the sensor/film scan.


NOTE: The camera calibration system can determine the lens center
from lens grids when there is sufficient distortion. See the manual for
that and for another method based on intentionally vignetting the
camera.

Once the center location is determined, the image preprocessor can
restore proper centering. It does that by padding the sides of the image to
produce a new, larger but centered image. For starters, that larger image is
subject to lens distortion correction, possible stabilization, then saved to disk.
The CG renders will match this centered footage. At the end, the padding will be
removed.
This means that your renders will be a little larger, but there does not have
to be anything in the padded portions, so they should not add much time. Higher
quality input that minimizes de-centering will reduce costs.
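The padding arithmetic itself is simple. Here is a rough sketch of the
horizontal case (the vertical case is identical), assuming you have measured
the optic center in pixels from the left edge; the exact padding SynthEyes
chooses may differ, for example to satisfy even-size constraints, so this is
only to show the idea:

    def recenter_padding(width, center_x):
        # width: image width in pixels; center_x: measured optic center,
        # in pixels from the left edge. Returns (pad_left, pad_right) so
        # that the optic center sits in the middle of the padded image.
        offset = round(2 * center_x - width)
        if offset >= 0:
            return 0, offset     # center is right of the middle: pad the right
        return -offset, 0        # center is left of the middle: pad the left

    print(recenter_padding(1920, 975))   # -> (0, 30)
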
As a more advanced tactic, the image preprocessor can be used to
resample the image and eliminate the padding, but this can only be done after
initial tracking, when a field of view has been determined, and it is a slightly
skewed resample that will degrade image quality slightly (the image
preprocessor's Lanczos resampling can help minimize that).

Working with Zooms and Distortion


Most zoom lenses have the most distortion at their widest setting. As you
zoom in, the distortion disappears and the lens becomes more linear. This poses
some interesting issues. It is not possible to reliably compute the distortion if it is
changing on every frame. Because of that, the lens distortion value computed
from the main SynthEyes lens panel is a single fixed value. If you apply the
distortion of the worst frames to the best frames, the best frames will be messed
up instead.
The image prep subsystem does allow you to create and remove
animated distortions. You will need to hand-animate a distortion profile by using a
value determined with the alignment line facility from the main Lens panel, and
taking into account the overall zoom profile of the shot. If the shot starts at
around a 60 deg field of view, then zooms in to a 20 degree field of view, you
could start with your initial distortion value, and animate it by hand down to zero
by the time the lens reaches around 40 deg. If there are some straight lines
available for the alignment line approach throughout, you can do something fairly
exact. Otherwise, you are going to need to cook something up, but you will have
some margin for error.
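One way to rough out the kind of profile you would key in by hand is a simple
ramp of the distortion value against the field of view, matching the example
above. All the numbers here are placeholders for whatever you measure on your
own shot; this is a planning aid, not a SynthEyes function:

    def distortion_for_fov(fov, k_wide, fov_wide=60.0, fov_clean=40.0):
        # Full measured distortion k_wide at the widest field of view,
        # ramping linearly down to zero by the time the lens has zoomed
        # in to fov_clean (all in degrees).
        if fov >= fov_wide:
            return k_wide
        if fov <= fov_clean:
            return 0.0
        return k_wide * (fov - fov_clean) / (fov_wide - fov_clean)

    for fov in (60, 50, 40, 20):
        print(fov, distortion_for_fov(fov, k_wide=-0.08))
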

Warning: at present, none of the SynthEyes exporters can export an
animated distortion setup to other software. It is a possible future
option for Fusion and/or Nuke exports.

Note: Future SynthEyes versions may automate a more detailed
method of handling zooms with distortions. Contact support if you want
to experiment with a script that approximates that approach a bit more
painfully for now.

You can save the corrected sequence away and use it for subsequent
tracking and effects generation.
This capability will let you and your client look good, even if they never
realize the amount of trouble their shot plan and marginal lens caused.

Summary of Lens Issues


Lens distortion is a complex topic with no easy "make it go away" button
possible. Each individual camera's distortion is different, and distortion varies
with zoom, focal distance, iris setting, and details of the camera's sensor
technology. We can not supply you with lens calibration data to repair your
shoot, or the awful-looking shots your client has just given you. You should
carefully think through your workflow and understand the impact of lens issues
and what they mean for how you track and how you render.
While SynthEyes provides the tools to enable you to handle a variety of
complex distortion-related tasks, calibrating for lens distortion and centering
should be kept on the to-do list during the shoot. Without that, analysis will be
less accurate and more difficult or even impossible. As always, an ounce of
prevention is worth a pound of cure.

Running the 3-D Solver
With trackers tracked, and coordinates and lens setting configured, you
are ready to obtain the 3-D solution.
Note: see the section on Advanced Solving Using Phases (Pro version,
not Intro) for information on how to configure more complex solves.

Solving Modes
Switch to the Solve control panel. Select the solver mode as follows:
Auto: the normal automatic 3-D mode for a moving camera, or a moving
object.
Refine: after a successful Auto solution, use this to rapidly update the
solution after making minor changes to the trackers or coordinate system
settings.
Tripod: camera was on a tripod; track pan/tilt/roll(/zoom) only.
Refine Tripod: same as Refine, but for Tripod-mode tracking.
From Seed Points: use six or more known 3-D tracker positions per
frame to begin solving (typically, when most trackers have existing
coordinates from a 3-D scan or architectural plan). You can use Place
mode in the perspective view to put seed points on the surface of an
imported mesh (or vertex of a lidar mesh). Turn on the Seed button on the
coordinate system panel for such trackers. You will often make them locks
as well.
From Path: when the camera path has previously been tracked,
estimated, or imported from a motion-controlled camera. The seed
position, and orientation, and field of view of the camera must be
approximately correct.
Indirect: to estimate based on trackers linked to another shot, for
example, a narrow-angle DV shot linked to wide-angle digital camera stills.
See Multi-shot tracking.
Individual: when the trackers are all individual objects buzzing around,
used for motion and facial capture with multiple cameras.
Disabled: when the camera is stationary, and an object viewed through it
will be tracked.
The solving mode mostly controls how the solving process is started: what
data is considered to be valid, and what is not. The solving process then
proceeds pretty much the same way after that, subject to whatever constraints
have been set up.

Automatic-Mode Directional Hint.


When the solver is in Automatic mode, a secondary drop-down list
activates: a hint to tell SynthEyes in which direction the camera moved
specifically between the Begin and End frames on the Solver Panel. This
secondary dropdown is normally in Undirected (automatically-determined) mode
also. However, on difficult solves you can use the directional hint (Left, Right,
Upwards, Downwards, Push In, Pull Back) to tell SynthEyes where to
concentrate its efforts in determining a suitable solution. Here it has been
changed:

World Size
Adjust the World Size on the solver panel to a value comparable to the
overall size of the 3-D set being tracked, including the position of the camera.
The exact value isn't important. If you are shooting in a room 20 feet across, with
trackers widely dispersed in it, use 20. But if you are only shooting items on a
desktop from a few feet away, you might drop down to 10.

Important: the world size does not control the size of the scene; that
is the job of the coordinate system setup.

There is a checkbox between the text "World Size" and the spinner itself.
When the checkbox is checked, adjusting the spinner sets the world size for all
objects; when it is off, the spinner affects only the current tracker host object. The
checkbox is checked by default as that is most common and useful. When the
world sizes are not all the same, the spinner value is underlined in red, ie marked
as a key, for information. (The world size cannot be animated.)
Get Your Inner Geek On: SynthEyes divides all coordinate values by the
world size internally as it works. With the default world size of 100, a coordinate
value of 64 will be internally processed as 0.64. Why bother? If SynthEyes needs
to square the value, 64 becomes 4096, while 0.64 becomes 0.4096. If we need
to add one unit, we get either 4097 or 0.4196. The world size can be too big, as
well as too small. A world size of 10000 would turn 64 into 0.0064, and squaring
that would be a tiny 0.00004096. By normalizing all the values by a reasonable
world size, SynthEyes ensures that all the values it is working on stay near 1.0,
maintaining the accuracy of its calculations (computers use only approximate
arithmetic). And yes, SynthEyes does a LOT of calculations.
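The same arithmetic, written out as a couple of lines you can try yourself
(the numbers are just the ones from the paragraph above):

    world_size = 100.0
    coordinate = 64.0

    normalized = coordinate / world_size     # 0.64
    print(normalized ** 2)                   # 0.4096 instead of 4096.0

    # With a world size of 10000 the same coordinate becomes 0.0064 and
    # its square a tiny 0.00004096, far from 1.0, so precision suffers.
    print((coordinate / 10000.0) ** 2)
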
Choose your coordinate system to keep the entire scene near the origin,
as measured in multiples of the world size. If all your trackers will be 1000 world-
sizes from the origin (for example, near [1000000,0,0] with a world size of 1000),
accuracy might be affected. The Shift Constraints tool can help move them all if
needed.


As you see, the world size does not affect the calculation directly at all.
Yet a poorly chosen world size can sabotage a solution. If you have a marginal
solve, sometimes changing the world size a little can produce a different solution,
maybe even the right one.
The world size also is used to control the size of some things in the 3-D
views and during export: we might set the size of an object representing a tracker
to be 2% of the world size, for example.

Go!
Youre ready, set, so hit Go! on the Solver panel. SynthEyes will pop up a
monitor window and begin calculating. Note that if you have multiple cameras
and objects tracked, they will all be solved simultaneously, taking inter-object
links into account. If you want to solve only one at a time, disable the others.
The calculation time will depend on the number of trackers and frames,
the amount of error in the trackers, the amount of perspective in the shot, the
number of confoundingly wrong trackers, the phase of the moon, etc. For a 100-
frame shot with 120 trackers, a 2-second time might be typical. With hundreds or
thousands of trackers and frames, some minutes may be required, depending on
processor speed. Shots with several thousand frames can be solved, though it
may take some hours.
It is not possible to predict a specific number of iterations or time required
for solving a scene ahead of time, so the progress bar on the solving monitor
window reflects the fraction of the frames and trackers that are currently included
in the tentative solution it is working on. SynthEyes can be very busy even
though the progress bar is not changing, and the progress bar can be at 100%
and the job still not done yet, though it will be once the current round of
iterations completes.

During Solving
If you are solving a lengthier shot where trackers come and go, and where
there may be some tracking issues, you can monitor the quality of the solving
from the messages displayed.
As it solves, SynthEyes is continually adjusting its tentative solution to
become better and better (iterating). As it iterates, SynthEyes displays the field
of view and total error on the main (longest) shot. You can monitor this
information to determine if success is likely, or if you should stop the iterations
and look for problems.
SynthEyes will also display the range of frames it is adding to the solution
as it goes along. This is invaluable when you are working on longer shots: if you
see the error suddenly increase when a range of frames is added, you can stop
the solve and check the tracking in that range of frames, then resume.


You can monitor the field of view to see if it is comparable to what you
think it should be - either an eyeballed guess, or data you have from an
on-set supervisor. If it does not seem good to start, you can turn on Slow but
sure and try again.
Also, you can watch for a common situation where the field of view starts
to decrease more and more until it gets down to one or two degrees. This can
happen if there are some very distant trackers which should be labeled Far or if
there are trackers on moving features, such as a highlight, actor, or automobile.
If the error suddenly increases, this usually indicates that the solver has
just begun solving a new range of frames that is problematic.
Your processor utilization is another source of information. When the
tracking data is ambiguous, usually only on long shots, you will see the message
Warning: not a crisp solution, using safer algorithm appear in the solving
window. When this happens, the processor utilization on multi-core machines will
drop, because the secondary algorithm is necessarily single-threaded. If you
havent already, you should check for trackers that should be far or for moving
trackers.

After Solving
Though having a solution might seem to be the end of the process, in fact,
it's only the middle. Here's a quick preview of things to do after solving, which
will be discussed in more detail in further sections.
Check the overall errors
Look for spikes in tracker errors and the camera or object path
Examine the 3-D tracker positioning to ensure it corresponds to the
cinematic reality.
Add, modify, and delete trackers to improve the solution.
Add or modify the coordinate system alignment
Add and track additional moving objects in the shot
Insert 3-D primitives into the scene for checking or later use
Determine position or direction of lights
Convert computed tracker positions into meshes
Export to your animation or compositing package.
Once you have an initial camera solution, you can approximately solve
additional trackers as you track them, using Zero-Weighted Trackers (ZWTs).


RMS Errors
The solver control panel displays the root-mean-square (RMS) error for
the selected camera or object, which is how many pixels, on average, each
tracker is from where it should be in the image. [In more detail, the RMS average
is computed by taking a bunch of error values, squaring them, dividing by the
number of error values to get the average of their squares, then taking the
square root of that average. It's the usual way of measuring how big errors are,
when the error can be both positive and negative. A regular average might come
out to zero even if there was a lot of error!]
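In code, the calculation just described amounts to this (a tiny
self-contained sketch with made-up error values):

    import math

    def rms(errors):
        # Square each error, average the squares, then take the square
        # root of that average.
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    # Positive and negative errors do not cancel out the way they would
    # in a plain average:
    print(rms([0.5, -0.5, 0.3, -0.3]))       # about 0.41
    print(sum([0.5, -0.5, 0.3, -0.3]) / 4)   # 0.0
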
The RMS error should be under 1 pixel, preferably under 0.5 for well-
tracked features. Note that during solving, the popup will show an RMS error that
can be larger, because it contains contributions from any constraints that have
errors. Also, the error during solving is for ALL of the cameras and objects
combined; it is converted from internal format to human-readable pixel error
using the width of the longest shot being solved for. The field of view of that shot
is also displayed during solving.
There is an RMS error number for each tracker displayed on the
coordinate system and tracker panels. The tracker panel also displays the per-
frame error, which is the number being averaged.

Effects of Rolling Shutter


As described earlier, rolling shutter causes pervasive distortions on
images and the resulting solves. SynthEyes allows you to correct the solve for
those effects, producing results corresponding to an idealized camera. It does
that by approximately removing the effect of the rolling shutter from the tracker
data before solving.
The downside is that after the solve, the tracker data is no longer being
corrected, and there will be apparent errors in the 3D position of the tracker as
visible in the camera viewport. Any test mesh insertions are rendered using the
idealized non-rolling camera also, and accordingly will not match the original
imagery to the extent of whatever rolling shutter effects are present.
Although final renders produced with rolling shutter simulation will match
nicely, these effects definitely make it harder to assess the quality of the solve,
and the source of any problems in it. We may be able to mitigate them in the
future.

Checking the Lens


You should immediately check the lens panel's field of view, to make sure
that there is a plausible value. A very small value generally indicates that there
are bad trackers, severe distortion, or that the shot has very little perspective (an
object-mode track of a distant object, say).


Solving Issues
If you encounter the message "Can't find suitable initial frames", it means
that there is limited perspective in the shot, or that the Constrain button is on, but
the constrained trackers are not simultaneously valid. Turn on the checkboxes
next to Begin and End frames on the Solver panel, and select two frames with
many trackers in common, where the camera or object rotates around 30
degrees between the two frames. You will see the number of trackers in common
between the two frames; you want this to be as high as possible. Make sure the
two frames have a large perspective change as well: a large number of trackers
will do no good if they do not also exhibit a perspective change. Also, it will be a
good idea to turn on the "Slow but sure" checkbox.
You may encounter "size constraint hasn't been set up" under various
circumstances. If the solving process stops immediately, probably you have no
trackers set up for the camera or object cited. Note that if you are doing a moving
object shot, you need to set the camera's solving mode to Disabled if you are not
tracking it also, or you will get this message.
When you are tracking both a moving camera and a moving object, you
need to have a size constraint for the camera (one way or another), and a size
constraint for the object (one way or another). So you need TWO size
constraints. It isn't immediately obvious to many people why TWO size
constraints are needed. This is related to a well-known optical illusion, relied
on in shooting movies such as "Honey, I Shrunk the Kids". Basically, you can't
tell the difference between a little thing moving around a little, up close, and a big
thing moving around a lot, farther away. You need the two size constraints to set
the relative proportions of the foreground (object) and background (camera).
The related message Had to add a size constraint, none provided is
informational, and does not indicate a problem.
If you have SynthEyes scenes with multiple cameras linked to one
another, you should keep the solver panels Constrain button turned on to
maintain proper common alignment.
See also the Troubleshooting section.

3-D Review
After SynthEyes has solved your scene, you'll want to check out the paths
in 3-D, and see what an inserted object looks like. SynthEyes offers several
ways to do this: traditional fixed 3-D views, including a Quad orthogonal isometric
view, camera-view overlays, user-controlled 3-D perspective window, error curve
mini-view, preview movies, and velocity vs time curves.

Quad View
If you are not already in Quad view, switch to it now on the toolbar. You
will see the camera/object path and 3-D tracker locations in each view. You can
zoom and pan around using the middle mouse button and scroll wheel. You can
scrub or play the shot back in real-time (in sections, if there is insufficient RAM).
See the View menu for playback rate settings.

Error Curve Mini-View


The error curve mini-view is at the bottom right of the user interface. It
shows key information about the solve at a glance, and helps you quickly identify areas of
the shot requiring further work.
With no trackers selected (control/command-D), it shows the average
error on each frame, and the overall scene error (from the summary panel). In
this case the numeric value is labeled "HPIX".
With one or more trackers selected, the maximum error of any of the
selected trackers is shown for each frame, as well as the overall maximum error.
Here, the maximum error is labeled "hpix" (to make the type of display more
apparent).
When a non-hybrid GeoH-tracked object is selected, the error is a
percentage value from 0% (good) to 100% (bad).
Both the curve and numeric value are color-coded green, yellow, or red,
according to preferences set in the User Interface section.

Important: the maximum curves identify short-term hotspots that


cause glitches, but they may have little effect on overall average solve
quality. Fixing all hotspots may leave large overall errors, eg from lens
distortion. Shots that are good on average may have isolated bad
spots that create small visible glitches in final renders. So you should
examine both average and maximum errors!

The current frame is identified with a small dotted vertical bar. If you see a
hotspot, you can click or drag in the error curve mini-view to go (approximately)
to that frame. Use the graph editor to locate particular issues (keep reading).
The error curve mini-view shows the playback range (ie the portion
between the two small green and red triangles in the main timebar). You can
adjust the playback range to zoom the error curve display into a specific portion.
Right-double-clicking the error curve mini-view will reset the playback range to
the entire shot.
In large shots (thousands of frames and trackers), it may take some time
to generate the error curve mini-view. You can turn on the "Error View only for
sole tracker" preference (again in the User Interface section), and then the
display will be shown only when a single tracker is selected - with the exception
that if you select all of the trackers, then the average information will be shown
(ie equivalent to having no trackers selected when this checkbox is off).

Camera View Overlay


To see how an inserted object will look, switch to the 3-D control panel.
Turn on the Create tool (magic wand). Select one of the built-in mesh types,
such as Box or Pyramid. Click and drag in a viewport to drag out an object.
Often, two drags will be required, to set first the position and breadth, then a
second drag to set the height or overall scale. A good coordinate-system setup
will make it easy to place objects. To adjust object size after creating it, switch to
the scaling tool. Dragging in the viewport, or using the bottommost spinner,
will adjust overall object size. Or, adjust one of the three spinners for each
coordinate axis size.
When you are tracking an object and wish to attach a test object onto it
(horns onto a head, say), switch the coordinate system button on the 3-D Panel
from World to Object.
Note: the camera-view overlay is quick and dirty, not anti-aliased like the
final render in your animation package will be (it has jaggies), so the overlay
appears to have more jitter than it will then. You can sometimes get a better idea
by zooming in on the shot and overlay as it plays back (use Pan-To-Follow).

Warning: temporary inserts like this may exhibit large errors if rolling-
shutter-compensation is turned on, and there is substantial movement.
That is because the insert is not "rolled" to match the source image.

Shortly, we'll show how to use the Perspective window to navigate around
in 3-D, and even render an antialiased preview movie.

Checking Tracker Coordinates


If SynthEyes finds any trackers that are further than 1000 times the world
size from the origin, it will not save them as solved. You can use the Script
menu's Select By Type script to locate and select Unsolved trackers. You can
change them to Zero-weighted to see where they might fall in 3-D, and prevent
them from affecting future solves.


Frequently these trackers are distant horizon points that should
be changed to Far, or should be corrected or deleted if they are on a moving object or the
result of some image artifact. Such points can also arise when a tracker is visible
for only a short time when the camera is not moving. The Clean up trackers
dialog can do this automatically.
Note: the too-far-away test can cause trouble if you have a small world
size setting but are using measured GPS coordinates. You should offset the
scene towards the origin using the Shift Constraints script.
You should also look for trackers that are behind the camera, which can
occur on points that should be labeled Far, or when the tracking data is incorrect
or insufficient for a meaningful answer.
After repairing, deleting, or changing too-far-away or behind-camera
trackers, you should use the Refine mode on the Solver panel to update the
solution, or solve it from scratch. Eliminating such trackers will frequently provide
major improvements in scene geometry.

Checking Tracker Error Curves


After solving, the tracker 3-D error channel will be available in the Graph
Editor. It is important to understand the 3-D error: it is the distance, in pixels, on
each frame, between the trackers 2-D position, and the position in the image of
the solved 3-D tracker position. Lets work this through. The solver looks at the
whole 2-D history of a tracker to arrive at a location such as X=3.2, Y=4.5, Z=0.1
for that tracker. On each frame, knowing the cameras position and field of view,
we can predict where the tracker should be, if it really is at the calculated XYZ.
Thats the position at which the yellow X is displayed in the camera view after
solving. The 3-D error is the distance between where the tracker actually is, and
the yellow X where it should be. If the tracking is good, the distance is small, but
if the tracking has a problem, the tracker is away from where it should be, and
the 3-D error is larger. Obviously, given this definition, theres no 3-D error
display until after the scene has been solved.
You should check these error curves using the fundamentals described
earlier in Pre-Solve Tracker Checking, but looking at the Error channel. Here
we've used isolate mode to locate a rather large spike in the blue error
curve of one of the trackers of a shot.


This glitch was easy to pick out - so large that the U and V velocities had to be
moved out of the way to keep them clearly visible. The deglitch tool easily
fixes it.
You can look at the overall error for a tracker from the Coordinate System
panel. This is easiest after setting the main menu's View/Sort by Error,
unselecting all the trackers (control/command-D), then clicking the down arrow
on your keyboard to sequence through the trackers from worst towards best. In
addition to the curves in the graph editor, you can see the numeric error at the
bottom of the tracker panel: both the total error, and the error on the current
frame (also visible in the graph editor's Number Zone). You can watch the current
error update as you move the tracker, or set it to zero with the Exact button.
For comparison, following is a tracker graph that has a fairly large error; it
tracks a very low contrast feature with a faint moving highlight and changing
geometry during its lifespan. It never has a very large peak error or velocity, but
maintains a high error level during much of its lifespan, with some clearly visible
trends indicating the systematic errors it represents.


And finally, a decent tracker with a typical error level:

The vertical scale is the same in these last three graphs. (Note that in the
3rd one, the current time is to the left, before frame 160 or so, hence the blue
arrow.)


You can sort the trackers within the graph editor's Active Trackers node by
changing Sort Alphabetic to Sort By Error.


The SimulTrack view can also be helpful in quickly looking at many or all
of the trackers, especially in Sort By Error mode. The error curve for the tracker
is shown in each tile (as long as right-click/Show Error in the SimulTrack view is
on).
Do not blindly correct apparent tracking errors. A spike suggesting a
tracking error might actually be due to a larger error on a different tracker that
has grossly thrown off the camera position, so look around.

Check the Radar Display


Tracker 'radar' is a visualization tool to show where and when there are
errors on every tracker: a visual display of the tracker error curves. On the main
view menu, or the camera view right-click menu, turn on Show Tracker Radar.
The display will rapidly focus your eye on problem trackers and frames.
The radar display self-normalizes to the overall scene error level, so that it
always makes apparent the most significant problem areas. You can never
make all the radar circles small! As you reduce the errors, the criteria become
stricter.

Check for a Smooth Camera Path


You should also check that the camera or object path is satisfactorily
smooth, using the camera nodes in the graph editor. We've closed the Active
Trackers node, and exposed the Camera & Objects node and the Camera01
node within it. We're looking at a subset of the velocities of the camera: the X, Y,
and Z translation velocities.


There's a spike around frames 215-220. To find it, expose the Active
Trackers, select them all (control/command-A), and use Isolate mode
around that range of frames. The result:


We've found the tracker that causes the spike, and can use the deglitch
tool, or switch back to the tracker control panel and camera viewport,
unlock the tracker, correct it, then re-lock it.

Tip: In the capture above, the selected tracker is not visible in the
hierarchy view. You can see where it is in the scroll bar, though: it is
located at the white spot inside the hierarchy view's scroll bar. Clicking
at that spot on the scroll bar will pan the hierarchy view to show that
selected tracker.

If that is the last glitch to be fixed, switch to the Solve control panel, and
re-solve the scene using Refine mode.


You can also use the Finalize tool on the tracker control panel to
smooth one or more trackers, though significant smoothing can cause sliding. If
your trackers are very noisy, check whether film grain or compression artifacts
are at fault (these can be addressed with image-preprocessor blur), verify that the
interlace setting is correct, or see whether you should fine-tune the trackers.
Alternatively, you can fix glitches in the object path by using the deglitch
tool directly on the camera or moving object's curves, because it works
on any changeable channel. You can also move the object using the 3-D
viewports and the tools on the 3-D panel, by repositioning the object on the
offending frame.

Warning: If you fix the camera path, instead of the tracker data, then
later re-solve the scene, corrections made to the camera path will be
lost, and have to be repeated. It is always better to fix the cause of a
problem, not the result.

Path Filtering
If you have worked on the trackers to reduce jitter, but still need a
smoother path (after checking in your animation package), you can filter the
computed camera or object path using the Path Filtering dialog, launched from
the Window menu or the Solver panel.

Warning: filtering the path increases the real error, and causes
sliding. Remember that your objective is to produce a clean insert in
the image, not produce an artificially smooth camera trajectory that
works poorly.

Path filtering is legitimately useful in several difficult situations: first, in


object tracking, when the object is small and may have only a few trackers;
second, when there is little perspective or few or noisy trackers, such that there is
ambiguity between camera position and orientation; and third, when objects
must be inserted that come very close to the camera, where jitter in the solution
is very visible.
The path filtering controls permit you to animate the frequency and
strength of the filtering applied, and apply it selectively to only some axes. If the
camera was on a dolly, you can apply it selectively to the camera height (Z or Y).
For object tracking, distance filtering is often the way to go, as it is the
object/camera distance that winds up with the most jitter, and that is what is
filtered (or locked on the locking control panel).
When path filtering is set up, it is (and must be) re-applied after each
solve, whether the solve is "from scratch" or a refine operation.
For a more subtle workflow, do the following:


Solve the shot with no filtering in place,
Open the Path Filtering dialog,
Configure filtering for the translation, distance, or height axis only,
Select the "To Seed Path" option,
Click the Apply Filter Now button,
Turn off filtering on all axes,
Open the Solver Locking panel,
Turn on locks for the axes that you just filtered above,
Run a Refine solver cycle.
The advantage of this workflow is that it applies filtering to one or more
axes, then gives you the best solution for the remaining, unfiltered, axes taking
into account the results on the filtered axes.
The filter frequency and strength are animated parameters. When you
configure filtering, you should animate the parameters to reduce filtering
(increase the frequency) at times when the path is changing rapidly, or hits
actual bumps. You should increase filtering (reduce the frequency) when the
camera or object is nearly stationary, or is moving smoothly. You can animate
the strength to blend in and out of filtering, or to make the filtering less effective.

Cleaning Up Trackers Quickly
SynthEyes offers the Clean Up Trackers dialog (on the Track menu) to
quickly identify bad trackers of several types. The dialog helps improve tracking,
identifying elements that cause glitches or may be creating systematic errors; it is
not a way to rescue bad tracking data. You can also have tracker cleanup run
automatically during an AUTO track & solve operation, but see the warning
below.

You can only use the dialog after successfully solving the scene. It will not
run before the scene is solved, because it operates by analyzing both 2-D and 3-
D information. You can open it before tracking and solving, in order to check and
set the cleanup parameters if you have automatic cleanup selected for AUTO.
If you run Clean Up Trackers on a grossly incorrect solution, error data will
be grossly wrong and tracker cleanup may delete trackers that are good, and
keep trackers that are wrong!

Warning: Although tracker cleanup can be run automatically during


AUTO operation using the checkbox on the Summary panel, do not do
that unless you are sufficiently expert to know in advance that the track
and solve is likely to be immediately good. As just described, if the
initial solve isn't good, tracker cleanup will do much damage, not help.
In that event, just Undo the tracker cleanup portion of the AUTO cycle.


Warning: the tracker cleanup dialog is not aware of the effects


introduced by rolling-shutter-compensation. It will report trackers at the
top and bottom of the image as having large 3D errors, even if they
would be a good match if rolling shutter compensation was taken into
account. We will seek to address that in the future.

This dialog has a generally systematic organization, with a few exceptions. Each
category of trackers has a horizontal row of controls, and the number of trackers
in that category is in parentheses after the category name. A tracker can be a
member of several categories.
Down the left edge, a column of checkboxes controls whether or not the
category of trackers will be fixed. Mostly, trackers are fixed by deleting them, but
after you have identified them, you can also adjust them manually if that is
appropriate.
When clicked on, the Select buttons in the middle select that category of
trackers in the viewport. They flash as they are selected, making them easier to
find. At the top of the panel, notice that the Clean-up dialog can work on all the
trackers, or only the selected ones. It records the selected trackers as you open
the panel, and they are not affected by selecting trackers with these buttons.
At right is a column of spinners that determine the thresholds for whether
a tracker is considered to be far-ish, short-lived, etc. The initial values of these
thresholds are good starting points, but not the last word.
Part of the fun of the clean-up trackers dialog is to select a category of trackers,
start changing the threshold up and down, and see how many trackers are
affected and where they are. It's a quick way to learn more about your shot.
The following sections provide some more information about how to
interpret and use the panel. For full details, see the tracker clean-up reference.
Bad Frames
The bad-frames category locates individual frames on each tracker where
the 3-D error is over the threshold, if the hpix radio button is selected, or it finds
the frames with the largest errors (the top 2%, or whatever percentage you set), if
the % radio button is selected.
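As an illustration of the two selection rules (a sketch with our own function names, not SynthEyes code), the hpix choice is an absolute per-frame threshold, while the % choice keeps only the worst fraction of a tracker's frames:

    def bad_frames_by_hpix(errors_by_frame, threshold_hpix):
        # Frames whose 3-D error exceeds a fixed threshold in horizontal pixels.
        return [f for f, e in enumerate(errors_by_frame) if e > threshold_hpix]

    def bad_frames_by_percent(errors_by_frame, percent):
        # The worst 'percent' of frames, ranked by 3-D error.
        count = max(1, int(round(len(errors_by_frame) * percent / 100.0)))
        ranked = sorted(range(len(errors_by_frame)),
                        key=lambda f: errors_by_frame[f], reverse=True)
        return sorted(ranked[:count])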
If you click the Show button, SynthEyes clears out the tracking results for
each bad frame. The intent is that you can see the overall pattern of bad frames,
by having the graph editor open in tracks mode, with squish-no keys
active. Each bad frame will be marked in red.


If you click the bad-frames Select button, the trackers with bad frames
are selected in the viewport. This makes the tracks thicker in the squish view,
which is also helpful.
If you turn on the Delete checkbox for Bad Frames, there are two choices
for how to handle that: Disable and Clear. The clear option does what happens
during Show: it clears out the tracking results so it looks like the frame was
tracked, but the feature was not found, resulting in the red section in the squish
view. The disable option re-keys the tracker's enable track, so that there is no
attempt to track on those previously-bad frames. There will be no track at all on
those frames in the squish view.
As a result, the Clear choice is better when you want to see where there
were problems, and potentially go back and fix the problems. The Disable option
is better when you want to permanently shut down the trackers on those spots.
Be aware that though a frame of a tracker may be bad, and you are better
off without it and the glitch it causes, having a missing frame can also create
glitches: the higher the overall error (and the poorer the fit), the larger the glitch
caused by a missing frame. A missing frame on a fairly unique tracker close to
the camera will cause a bigger glitch than a missing frame on a tracker far from
the camera that is one of many similar trackers. Manually repairing the bad
frames will always produce the best results.
Far-ish Trackers
The tracker clean-up dialog detects trackers with too little parallax for an
adequate distance measurement. Consider a tracker 1 meter from the camera,
and the camera moving 10 cm to its right. The position of the tracker in the two
images from the camera will be different, tens or hundreds of pixels apart. What if
the tracker is 1 kilometer from the camera, and the camera moved the same 10
cm? The tracker may be located in exactly the same pixel in both images, and no
distance can be determined to the tracker 1 km away.
Accordingly, far-ish-ness (you will not find this in the dictionary) can be
measured in terms of the number of pixels of perspective, and the threshold is
the number of pixels of perspective change produced by the camera motion over
the lifespan of the tracker.
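A rough worked example (a pinhole-model sketch with assumed numbers, not SynthEyes code) shows why distance matters so much:

    import math

    def parallax_pixels(distance, baseline, hfov_degrees, image_width_pixels):
        # Focal length expressed in pixels for the given horizontal field of view.
        focal_pixels = (image_width_pixels / 2.0) / math.tan(math.radians(hfov_degrees) / 2.0)
        # Small-angle approximation of the image shift caused by the sideways move.
        return focal_pixels * baseline / distance

    # 10 cm sideways move, 50-degree FOV, 1920-pixel-wide image:
    print(parallax_pixels(1.0, 0.1, 50, 1920))     # point 1 m away: about 200 pixels
    print(parallax_pixels(1000.0, 0.1, 50, 1920))  # point 1 km away: about 0.2 pixel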
As you slowly increase the far-ish-ness threshold, you'll see trackers
further and further from the camera become labeled as far.
But you may also find a few trackers close to the camera that are also
labeled Far-ish, even at a low threshold. What has happened to make these nearby
trackers far-ish? Simple: either they are short-lived, or the camera was not
translating much during their lifetime. For example, there may be many far-ish
trackers where there is a tripod-type hold region during a shot.
Far-ish trackers can not really be fixed. If the same feature is tracked
earlier or later in the shot, a short-lived tracker might be combined with its longer-
lived siblings. But otherwise, they may only be made Far (in which case their
solve will only produce a direction), or they may be deleted.
High-Error Trackers
To be a high-error tracker, a tracker must meet either of two criteria:


the percentage of a tracker's lifespan that consists of bad frames must be
over a threshold, or
the average hpix RMS error over the lifespan of the tracker, as appears for
the tracker on the Coordinate panel, must be over a threshold.
The percentage threshold appears to the right of the high-error trackers
line as usual. The hpix error threshold appears underneath it, to the right of the
Unsolved/Behind category, an otherwise empty space because that category
requires no thresholds.
As an example of the first criterion, consider a tracker that is visible for 20
frames. However, 8 of those frames are bad frames as defined for that
category. The percentage of bad frames is 8 out of 20, or 40%, and at the
standard threshold of 30% the tracker would be considered high-error and
eligible for deletion. Typically such trackers have switched to an adjacent nearby
feature for a substantial portion of their lifespan.
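The two criteria can be written down directly (a sketch using the example numbers above; the hpix threshold value here is only an assumed placeholder, set yours from the dialog):

    def is_high_error(bad_frame_count, lifespan_frames, rms_error_hpix,
                      percent_threshold=30.0, hpix_threshold=1.0):
        # Criterion 1: too large a fraction of the lifespan consists of bad frames.
        bad_percent = 100.0 * bad_frame_count / lifespan_frames
        # Criterion 2: the average hpix RMS error over the lifespan is too large.
        return bad_percent > percent_threshold or rms_error_hpix > hpix_threshold

    print(is_high_error(8, 20, 0.5))   # True: 40% bad frames exceeds the 30% threshold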
Unlocking the User Interface
The clean-up trackers dialog is modal, meaning you cannot adjust any
other controls while the dialog is displayed. However, it is often helpful to adjust
the user interface with the dialog open, for example to configure the graph editor
or to locate a tracker in the viewports.
The clean-up dialog does offer a frame spinner along the bottom row,
which allows you to rapidly scrub through the shot looking for particular trackers.
The dialog also offers the Unlock UI button, which temporarily makes the
dialog modeless, permitting you to adjust other user-interface controls, bring up
new panels, etc.
The keyboard accelerators do not work when Unlock UI is turned on. You
need to use the main menu controls instead.
The selected-trackers list processed by Clean Up Trackers is reloaded
each time you turn off Unlock UI. If you are using the Selected Trackers option,
but need to change which trackers those are, you can unlock the user interface
and change them. But you must have turned off all the Select buttons first, or
they will affect what happens.

Setting Up a Coordinate System
You should tell SynthEyes how to orient, position, and size the trackers
and camera path in 3-D. Historically, people learning tracking have had a hard
time with this because they do not understand what the problem is, or even that
there is a problem at all. If you do not understand what the problem is, what you
are trying to do, it is pretty unlikely you will understand the tools that let you solve
it. The next section is an attempt to give you a tangible explanation. Its silly, but
please read carefully! Please also be sure to check out the tutorials on the web
site about coordinate systems.

Observation: many new users who say they are having trouble setting
up a coordinate system instead do not have a correct solve for the
following reason. Frequently, first "test shots" are handheld, with the
camera pointing in different directions but no real camera translation.
They are nodal tripod shots, with no 3D information present, so any
attempted 3D solve will be incorrect and impossible to set up a
coordinate system on (use Tripod Mode to address them). Other
frequent new-user issues include severe lens distortion, and trackers
on moving objects such as actors or cars. Be sure to look at the 3D
views to establish that the solve is qualitatively correct before
attempting to set up a coordinate system!

SynthEyes and the Coordinate Measuring Machine


Pretend SynthEyes is a 2D-to-3D-converting black box on your desk that
manufactures a little foam-core architectural model of the scene filmed by your
shot. This little model even has a little camera on a track showing exactly where
the original camera went, and for each tracker, a little golf pole and flag with the
name of the tracker on it.
Obviously SynthEyes is a pretty nifty black box. One problem, though: the
foam-core model is not in your computer yet. It fell out of the output hopper, and
is currently sitting upside-down on your desk.
Fortunately, you have a nifty automatic coordinate measuring machine,
with a little robot arm that can zip around measuring coordinates and putting
them into your computer.
You open the front door of the coordinate measuring machine and see the
inside looks like the inside of a math teacher's microwave oven, with nice graph-
paper coordinate grids on the inside of the door, bottom, top, and sides, and you
can see through the glass sides if you look carefully. Those are the coordinates
measured by the machine, and where things will show up inside your
animation package. The origin is smack in the middle of the machine.
So you think "Great," casually throw your model, still upside-down, into the
measuring machine, and push the big red button labeled "Good enough!" The
machine whirs to life and seconds later, your animation package is showing a
great rendition of your scene, sitting cock-eyed upside down in the bottom of
your workspace. That is not what you wanted at all, but hey! That's what you got
just throwing your model into the measuring machine all upside down.
You open up the door, pull out your model, flip it over, put it back in, and
close the door. Looking at the machine a little more carefully, you see a green
button labeled "Listen up" and push it. Inside, a hundred little feet march out a
small door, crawl under the model, and lift it up from the bottom of the machine.
Since it is still pretty low, you shout "A little higher, please." The feet cringe
a little (maybe the shouting wasn't necessary), but the little feet lift your model a
bit higher. That's a good start, but now "More to the right. Even some more."
You're making progress; it looks like the model might wind up in a better place
now. You try "Spin around X" and sure enough the feet are pretty clever. After
about ten minutes of this, though the model is starting to have its ground plane
parallel to the bottom of the coordinate measuring machine, you've decided that
the machine is really a much better listener than you are a talker, and you have
learned why the red button is labeled "Good enough!" Giving up, you push it, and
you quickly have the model in your computer, just like you had positioned it in the
machine.
Hurrah! You've accomplished something, albeit tediously. This was an
example of Manual Alignment: it is usually too slow and not too accurate, though
it is perfectly feasible.
Perhaps you haven't given the little feet enough credit.
Vowing to do better, you try something trickier: "Feet, move Tracker37 to
the origin." Sure enough, they are smarter than you thought.
As you savor this success, you notice the feet starting to twiddle their toes.
Apparently they are getting bored. This definitely seems to be the case, as they
slowly start to push and spin your model around in all kinds of different directions.
All is not lost, though. It seems they have not forgotten what you told
them, because Tracker37 is still at the origin the entire time, even as the rest of
the model is moving and spinning enough to make a fish sea-sick. Because they
are all pushing and pulling in different directions, the model is even pulsing
bigger and smaller a bit like a jellyfish.
Hoping to put a stop to this madness, you bark "Put Tracker19 on the X
axis." This catches the feet off guard, but once they calm down, they sort it out
and push and pull Tracker19 onto the X axis.
The feet have done a good job, because they have managed to get
Tracker19 into place without messing up Tracker37, which is still camped at the
origin.
The feet still are not all on the same page yet, because the model is still
getting pushed and pulled. Tracker37 is still on the origin, and Tracker19 is on
the X axis, but the whole thing is pulsing bigger and smaller, with Tracker19
sliding along the axis.


This seems easy enough to fix: "Keep Tracker19 at X=20 on the X axis."
Sure enough, the pulsing stops, though the feet look a bit unhappy about it. [You
could say "Make Tracker23 and Tracker24 15 units apart" with the same effect,
but different overall size.]
Before you can blink twice, the feet have found some other trouble to get
into: now your model is spinning around the X axis like a shish-kebab on a
barbecue rotisserie. You've got to tell these guys everything!
As Tracker5 spins around near horizontal, you nail it shut: "Keep Tracker5
on the XY ground plane." The feet let it spin around one more time, and
grudgingly bring your model into place. They have done everything you told
them.
You push "Good enough" and this time it is really even better than good
enough. The coordinate-measuring arm zips around, and now the SynthEyes-
generated scene is sitting very accurately in your animation package, and it will
be easy to work with.
Because the feet seemed to be a bit itchy, why not have some fun with
them? Tracker7 is also near the ground plane, near Tracker5, so why not "Put
Tracker7 on the XY ground plane." Now you've already told them to put Tracker5
on the ground plane, so what will they do? The little feet shuffle the model back
and forth a few times, but when they are done, the ground plane falls in between
Tracker5 and Tracker7, which seems to make sense.
That was too easy, so now you add "Put Tracker9 at the origin." Tracker37
is already supposed to be at the origin, and now Tracker9 is supposed to be
there too? The two trackers are on opposite sides of the model! Now the feet
seem to be getting very agitated. The feet run rapidly back and forth, bumping
into each other. Eventually they get tired, and slow down somewhere in the
middle, though they still shuffle around a bit.
As you watch, you see small tendrils of smoke starting to come out of the
back of your coordinate measuring machine, and quickly you hit the Power
button.

Back to Reality
Though our story is far-fetched, it is quite a bit more accurate than you
might think. Though we'll skip the hundred marching feet, you will be telling
SynthEyes exactly how to position the model within the coordinate system.
And importantly, if you don't give SynthEyes enough information about
how to position the model, SynthEyes will take advantage of the lack of
information: it will do whatever it finds convenient for it, which rarely will be
convenient for you. If you give SynthEyes conflicting information, you will get an
averaged answer, but if the information is sufficiently conflicting, it might take a
long time to provide a result, or even throw up its hands and generate a result
that does not satisfy any of the constraints very well.


There are a variety of methods for setting up the coordinates, which we
will discuss in following sections:
Using the 3-point method
Using the automatic Place tool
Manual Alignment
Configuring trackers individually
Alignment Lines
Constrained camera path
Aligning to an existing mesh
Using Phases (not in Intro version)
The three-point method is the recommended approach, as it quickly
produces the most controlled and accurate results. The Place tool quickly
guesses at something usable automatically, but the setup it produces has no
specific relationship to what you want, nor is it accurate: without care, anything
you add will slide, and it is your fault, not ours! (That's true in general, actually.)
Manual alignment is slow and generally not very accurate, though it allows you to
get what you ask for, for sure.
The alignment line approach is used for tripod-mode and even single-
frame lock-off shots. The constrained camera path methods (for experts!) are
used when you have prior knowledge of how the shot was obtained from on-set
measurements.
You must decide what you want! If the shot has a floor and you have
trackers on the floor, you probably want those trackers to be on the floor in your
chosen coordinate system. Your choice will depend on what you are planning to
do later in your animation or compositing package. It is very important to realize:
the coordinate system is what YOU want to make your job easier. There is no
correct answer; there is no coordinate system that SynthEyes should be picking if
only it was somehow smarter. They are all the same. The coordinate measuring
machine is happy to measure your scene for you, no matter where you put it!
You don't need to set a coordinate system up, if you don't want to, and
SynthEyes will plough ahead happily. But picking one will usually make inserting
effects later on easier. You can do it either after tracking and before solving, or
after solving.

Hint: if you will be exporting to a compositing package, they often


measure everything, including 3-D coordinates, in terms of pixels, not
inches, meters, etc. Be sure to pick sizes for the scene that will work
well in pixels. While you might scale a scene for an actor 2m tall, if you
export to a compositor and the actor is two pixels tall that will rarely
make sense.

After you set up a coordinate system and re-solve the scene, it is a good
idea to check that everything went OK using the Constrained Points View. Each
constraint you added will be listed, along with the error: the difference between
what you are asking for, and what SynthEyes was able to give you. Normally the error
values in the right-hand column should be zero, or very small compared to the
size of the scene. If there are large errors, it indicates that the constraints are
self-conflicting, ie that you are (often indirectly) telling something to be in two
different locations simultaneously.

Three-Point Method
Here's the simplest and most widely applicable way to set up a coordinate
system. It is strongly recommended unless there is a compelling reason for an
alternative. SynthEyes has a special button to help make it easy. Well describe
how to use it, and what it is doing, so that you might understand it, and be able to
modify its settings as needed.
Switch to the Coordinate System control panel. Click the *3 button; it will
now read Or. Pick one tracker to be the coordinate system origin (ie at X=0, Y=0,
Z=0). Select it in the camera view, 3-D viewport, or perspective window. On the
coordinate system panel, it will automatically be changed from Unconstrained to
Origin. Again, any tracker can be made the origin, but some will make more
sense and be more convenient than others.

Important: Zero-weighted trackers must not be used to set up a


coordinate system, or as part of distance constraints. Constraints on
ZWTs are permitted, but affect only that ZWT.

The *3 button will now read LR (for left/right). Pick a second tracker to fall
along the X axis, and select it. It will automatically be changed from
Unconstrained to Lock Point; after the solution it will have the X/Y/Z coordinates
listed in the three spinners. Decide how far you want it to be from the origin
tracker, depending on how big you want the final scene to be. Again, this size is
arbitrary as far as SynthEyes is concerned. If you have a measurement from the
set, and want a physically-accurate scene, this might be the place to use the
measurement. One way or another, decide on the X axis position. You can guess
if you want, or you can use the default value, 20% of the world size from the
Solver panel. Enter the chosen X-axis coordinate into the X coordinate field on
the control panel.
The *3 button now reads Pl. Pick a third point that should be on the
ground plane. Again, it could be any other tracker, except one on the line
between the origin and the X-axis tracker. Select the tracker, and it will be
changed from Unconstrained to On XY Plane (if you are using a Z-Up coordinate
system, or On XZ Plane for Y-up coordinates). This completes the coordinate
system setup, so the *3 button will turn off.
The sequence above places the second point along the X axis, running
from left to right in the scene. If you wish to use two trackers aligned stage front
to stage back, you can click the button from LR (left/right) to FB (front/back)
before clicking the second tracker. In this case, you will adjust the Y or Z
coordinate value, depending on the coordinate system setting.


You might wonder which trackers get selected to be constrained:
Tracker37 or Tracker39, etc. You should pick the trackers that create the
coordinate system that you want to see in the animation/compositing package,
the coordinate system that makes your later work easier.
To provide the most accurate alignment, you should select trackers
spread out across the scene, not lumped in a particular corner. You should also
use trackers with low error (from the Tracker or Coordinate System panels) that
are comparatively long-lived through the shot.
Depending on your desired coordinate system, you might select other axis
and plane settings. You can align to a back wall, for example. For the more
complex setups, you will adjust the settings manually, instead of using *3.
You can lock multiple trackers to the floor or a wall, say if there are
tracking marks on a green-screen wall. This is especially helpful in long traveling
shots. If you are tracking objects on the floor, track the point where the object
meets the floor; otherwise you'll be tracking objects at different heights from the
floor (more on this in a little). If you add additional constraints, you should be sure
to verify that they are OK (configured right, and match the real world) using the
Constrained Points View.

Place Tool: Automatic coordinate system setup


SynthEyes offers a tool that can set up a coordinate system automatically:
one click, or even none. It's a nifty algorithm that looks at all your trackers to
make a decision. But, before you start celebrating and decide to ignore the rest
of the "Setting Up a Coordinate System" chapter totally, here's the bad news:
if the automatically-chosen coordinate system matches what you want to
do, it is mainly because you are lucky,
most trackers that are "on the ground" are really on objects that are sitting
on the ground, and therefore somewhat above it,
many buildings and sets are rarely exactly flat, and the ground almost
never so,
the automatically-chosen coordinate system will tend to be a good rough
approximation to a plane of interest.... but it will never be exactly right for a
particular use.... therefore
the automatically-chosen coordinate system will effectively cause sliding,
unless you refer to nearby trackers to place your objects.
To say this another way: the Place Tool makes it easier to create a
coordinate system, but harder to insert objects accurately.
Nevertheless, we know that some people have trouble understanding
coordinate systems at all, and wanted to have a plan "B" for them. Even if it
doesn't give you exactly what you want, it can give you a starting point for
manual alignment.


The Place Tool examines the relative positioning of all the available
trackers, and is intended for automatic tracks, which produce many trackers, not
for supervised tracks and not for object tracks, which have too few trackers for
the Place tool to analyze. Outdoor scenes may have too much of a jumble and
too little structure for the Place tool, so any choice will be just as bad or good as
any other. Nor is the Place tool suitable for Tripod-mode nodal tracks, which
aren't 3D solves. It will work fine with stereo tracks, however. And you can use
the Place tool for scenes with camera and object tracks, to set up the camera
track: the object will be carried along, though you will still have to manually set up
a coordinate system for the object itself.
You can find the Place tool on the Summary Panel: it's the button named
Place next to Coord. When you click the button, SynthEyes will analyze your
trackers, determine a coordinate system, then reposition the entire camera path
and trackers accordingly. (It moves meshes too, unless you have turned off
Whole affects meshes on the right-click menu of the 3-D or perspective
viewports.)
The Place tool can also run automatically after you click the full AUTO
track and solve button, if the Run auto-place checkbox is checked.
(The first time you click AUTO, you will be prompted for whether you want auto-
place or not.) If the Place tool has run automatically, but you want to use the
three-tracker method, go ahead!
The Place algorithm proceeds through five main stages: plane, rotation,
origin, scale, and results.
In the plane stage, it looks for a large collection of trackers that form a flat
plane, which might then be made into a ground plane, a back or side wall. Keep
in mind that SynthEyes examines only the position of the trackers, it does not
understand what the trackers are on. SynthEyes decides whether to use the
plane as a ground, back, or side plane based on its relative orientation to the
camera.
In the rotation stage, it rotates the scene around in the plane, looking for a
rotation that causes many of the trackers to line up. For example, if SynthEyes
has found a ground plane, as it rotates the scene it may find that many trackers
line up when the axes are parallel to trackers on the back wall of the set.
In the origin stage (which itself has two steps), having decided on the
plane and rotation, SynthEyes decides where the origin should go, by looking
along each coordinate axis for a spot where many trackers are clustered
together. That spot becomes the zero coordinate for that axis.
In the scale stage, SynthEyes enforces any pre-existing distance
constraints, or if there are none, it adjusts the median scene size to a fixed
nominal value.


IMPORTANT: If you have a measured distance from the real set that
you wish to use to set the size of the scene, while still doing an
automatic placement, you must set up the distance constraint
before hitting Place!

In the results stage, each tracker that is part of the plane is set up with a
Lock to the coordinates calculated for it, and it is selected, so you can tell which
trackers were used for the plane.
So to summarize by example, consider an ideal case where the shot is of
the inside of a room towards the back right corner, with trackers on the floor, a
back wall, and some on the right side wall. SynthEyes picks the trackers on the
floor as the ground plane (Z=0 with Z-up coordinates), spins the scene around so
that the trackers on the back wall become parallel to the X axis, finds that clump
of trackers on the back wall and sets them to be at Y=0, and finds the clump of
trackers on the right wall and puts them at X=0. So the overall scene origin is in
the back right corner of the room, which is a nice choice.
If there were more trackers on the back wall, SynthEyes might use the
back wall as the plane, spin the scene around the back wall until the floor was
level, then place the origin the same way, so we'd still get the same result.
Of course, the real world is always more complex. Often there are multiple
solutions that are just as good as each other, and the Place tool may give you a
solution that isn't what you are looking for.
The Place Tool offers a simple solution: click the Place button again! By
design, the tool randomizes the solutions, so each time you click the button, you'll
get a different solution. If there's really only one good answer, you'll see it almost
all the time; if there isn't a good choice you may see radically different solutions
each time. Be sure to look carefully in the Quad view to assess the coordinate
system. If you like it, do NOT hit Place again! Each Place is different; you can get
back only by using Undo.
You can control which trackers will be considered as potential members of
the plane: if you hold down the SHIFT key while clicking the Place button, only
the currently-selected trackers will be considered. You can use this feature when
you want the plane to definitely come from one particular part of the scene where
you want to do an insert, for example. You can also use it after an initial Place, if
you think the chosen plane is a good choice, but contains a few inappropriate
trackers. Shift-click those trackers to de-select them, then shift-click the Place
button to recalculate using only the remaining trackers.
When the Place tool runs, it produces a whole set of coordinate system
locks, on each tracker used for the ground (or wall) plane. This system of
constraints allows the same coordinate system to be re-established, even if you
subsequently solve the scene, not only with a Refine-mode solve, but even
starting over from an Automatic-mode solve (note that that's different from the
AUTO button on the Summary panel). If the tracking data has changed around
significantly, you'll still get the best approximation to the previous coordinate
system.
If you want to adjust the placement created by the Place tool, you can use
the Manual Alignment method described below. You might want to adjust the
position of the plane if it is actually a little above the true ground level. As you do
that, the manual adjustment will be adjusting not only the tracker and camera
positions, but the lock coordinates as well so everything stays consistent.
Once you've used the Place tool and decided on the coordinate system,
stick with it! You want to keep your existing coordinate system, or at least
something very close to it, once you have positioned objects in the scene, either
in SynthEyes or downstream in your animation package. You don't want to have
to re-do any manual adjustments or the object positioning if you correct a bump,
for example (though you might have to tweak them slightly).
To avoid interfering with your existing Place-generated coordinate system:
do not hit the Place button again! (Click Undo if you do by mistake!)
do not turn on the Constrain checkbox on the Solve panel, as that will force
the trackers to have their old coordinates exactly, distorting the scene.
If you update the trackers and re-solve, you'll see small errors in the
Constrained Points view, but they are OK. If they bother you and you want to set
them back to zero, use the Select By Type script with the "Constrained" box
checked to re-select all the ground-plane trackers, then on the Coordinate
System panel, click the Set Seed button. It will update the trackers to use their
current solve coordinates as the Lock-to values, so there will be no error.

Important: to avoid sliding when inserting an object, you must position


the object relative to the trackers near to it. If you see the object sliding
with respect to a spot in the imagery, create a tracker on that spot, and
use that tracker as a basis to reposition the object. Sliding is a user
error!

If the Place tool has run automatically (or manually) and you decide you
want to set up a coordinate system using the three-tracker method instead (and
we'd rather you did), that's no problem. Click the Coords button on the Summary
panel or *3 on the Coordinate System panel, and it will first delete all the existing
constraints created by the Place tool. You can proceed to click on your three
trackers to create your coordinate system normally.

Manual Alignment
You can manually align the camera and solved tracker locations if you
like. This technique is the usual approach for tripod-mode shots. You can also
use manual alignment after using the Place tool on regular camera tracks,
though it is more accurate to set up a coordinate system using the three-point
method above.
To align manually, switch to the 3-D control panel and the Quad or Quad
Perspective view. Turn on the Whole button on the 3-D control panel, which will
select the currently-active camera or moving object (typically Camera01) in the
viewports and also turn on the selection-lock button. There's no requirement
for the selection-lock to be on; it is usually convenient, so Whole turns it on
automatically. Don't forget to turn it off (when you turn Whole off, Lock will turn off
as well).

Then, use the move, rotate, and scale tools to reposition the
camera/object using the viewports. As you do this, not only the camera/object will
move, but its entire trajectory and its trackers as well. With the selection lock
turned on, you can scale or rotate around any location in the 3D environment.
Retaining Manual Adjustments
Without user action, a manual alignment will not be retained if you re-solve
the shot, either from scratch (Automatic mode) or incrementally (via Refine). The
solver is affected only by constraints, and it has no way to tell what you were
doing with your manual alignment. If you manually align, add some 3D objects to
the scene, then adjust some trackers and re-solve, you will have to re-align
manually, which is frequently difficult to reproduce exactly.
Fortunately, you can avoid this problem, with a little foresight.
First, if you used the Place Tool, then manually adjust, your work will have
already been done for you: the Place Tool establishes a set of constraints that
will reposition the scene the same way after a re-solve.
If you did not use the Place Tool, then you must establish the constraints
manually. (This process works for normal, object, or tripod shots.)
Select several reliable trackers that are distributed throughout the scene,
perhaps six to ten,
open the coordinate system panel,
click Set Seed,
change the Lock type drop-down to Lock Point,
click the Seed button to turn it off if you like; it does not matter here.
If you re-solve the scene again, the scene will be re-aligned to match up
these trackers, and therefore the rest of the scene, as well as possible.
Use on Moving Objects
You can use the same technique for moving-object shots, discussed later.
If you click the World button to change to Object coordinates, you can re-align
the objects coordinate system relative to the objects trackers (much like you
move the pivot point in a 3-D model). As you do this, the object path will change
correspondingly to maintain the overall match.


It is also useful to stay in world coordinates and scale a moving object
about its camera, either in the viewports or using the Uniform scale spinner on
the 3D Panel. With this method, you can adjust an object's scale so that the
object's position, and even whole path, are located in the right position compared
to the remainder of the 3-D environment.
Impact on Meshes
By default, meshes will be carried along when you use Whole. However,
you can turn off Whole affects meshes, on the 3-D viewport or perspective-
view right-click menus, and meshes will not be moved. Then, you can import a 3-
D model (such as a new building), then reposition the camera and trackers
relative to the building's (fixed) position.

Warning: Whole affects meshes used to be off by default (before


1508), now it is on by default, since that is the more usually expected
behavior.

Size Constraints
As well as the position and orientation of your scene, you need to control
the size of the reconstructed scene. There are four general ways to do this:
With a distance (size) constraint between two points.
Have two points that are locked to (different) xyz coordinates, such as an
origin (0,0,0) and a point at (20,0,0), as in the recommended three-tracker
method described above,
From knowledge of the camera path (sometimes! carefully! if you know
what you are doing!), or,
With an inter-ocular constraint for stereo shots.
If you want to use one collection of trackers to position and align the
coordinate system, but use an on-set measurement between two other trackers,
you can use a distance constraint.

Reminder: SynthEyes uses unit-less numbers. When you enter 20


units, you could call it 20 meters, 20 feet, 20 miles, etc. SynthEyes
does not care; it is up to you.

Suppose you have two non-ZWT trackers, A and B, and for example want
them 20 units apart. You set up the distance constraint as follows.
1. Open the coordinate system control panel.
2. Select tracker A, ALT-click (Mac: Command-click) on tracker B to set it as
the target of A. In the coordinate system panel, you'll see tracker B's name
now in the Target Point button. Note: if you have set the preferences to
"no middle mouse button" then you must hold ALT/Command and right-
click to link, since ALT/Command-left would be interpreted as a pan.
3. Set the distance (Dist.) spinner to 20. (You can remove a distance
constraint by right-clicking the Target Point button.)


If you set up a distance constraint and have used, or will also use, the *3 tool,
use different trackers for the distance constraint and the *3 setup. Then, select
the second point, which is locked to 20,0,0, and change its mode from Lock Point
to On X Axis (On Y Axis for front/back setups). Otherwise, you will have set up
two size constraints simultaneously, and unless both are right, you will be
causing a conflict.
Note that your size constraint does not do anything immediately: it is an
instruction to the solver, and will have no effect until you solve or re-solve (ie in
Refine mode) the scene.
You can set up coordinate systems with *3 and use those points for
distance constraints, but you'll have to understand how to set them up directly, as
described in the next section.

Configuring Constraints Directly


Each tracker has individual constraints that can be used to control the
coordinate system, accessed through the Coordinate System Control panel. The
*3 button automatically configures these controls for easy use, but you can
manually configure them to achieve a much wider variety of effects, if you keep
in mind the mental picture of the busy little feet in the coordinate measuring
machine. Those feet do whatever you tell them, but are happy to wreak havoc in
any axis you do not give them instructions for.
As examples of other effects you can achieve, you can use the Target
Point capability to constrain two points to be parallel to a coordinate axis, in the
same plane (parallel to XY, YZ, or XZ), or to be the same. For example, you can
set up two points to be parallel to the X axis, two other points
to be parallel to the floor, and a fifth point to be the origin.
Suppose you have three trackers that you want to define the back wall (Z
up coordinate system).
1) Go to the coordinate system control panel
2) If the three trackers are A, B, and C, select B, then hold down
ALT (Mac: Command) and click A.
3) Change the constraint type from Unconstrained to Same XZ
plane.
4) Select C, and ALT-click (Command) on A, and set it to Same XZ
Plane also.
This has nailed down the translation, but rotation only partially; the feet
will be busy. You also need to specify another rotation, since B and C can spin
freely around A so far (or around the Y axis about any point in the plane).


You might have two other trackers, D and E, that should stack up
vertically. Select E and Alt/Command-Click tracker D and set it to Parallel to Z
Axis (or X axis if they should be horizontal).

Note: if you have set the preferences to "no middle mouse button" then
you must hold ALT/Command and right-click to link, since
ALT/Command-left would be interpreted as a pan.

Tracker/Tracker Constraints in Various Views


Constraints between trackers can be seen as a gray line in various
viewports as follows:
Camera view: Shows tracker/tracker constraints that go between two trackers
on the same object only,
Perspective view: When the view is locked to the camera, all constraints on
that shot are shown; the 2-D positions of trackers will be used if they are not
solved yet. When the view is not locked to any camera, all constraints are
shown, as long as both endpoints have been solved in 3D.
3D Viewports (Top, Left, etc): All constraint lines are shown, as long as both
endpoint trackers have been solved in 3D.
Constrained Points View: see the section below.
You can use the intentional difference in the camera view, 3D View, and
perspective view's locked vs unlocked behavior to locate links among shots that
go the wrong way, for example.
Details of Lock Modes
There are quite a few different constraint (lock) modes that can be
selected from the drop-down list. Despite the fair number of different cases, they
all can be broken down to answering two simple questions: (1) which coordinates
(X, Y, and/or Z) of the tracker should be locked, and (2) to what values.
The first question can have one of eight different answers: all the
combinations of whether or not each of the three coordinate axes is locked,
ranging from none (Unconstrained) to all (Lock Point). Rather than listing each of
the combinations of which axes are locked, the list really talks about which axis is
NOT locked. For example, an X Axis lock really locks Y and Z, leaving X
unconstrained. Locking to the XZ plane actually locks only Y. The naming
addresses WHAT you want to do, not HOW SynthEyes will achieve it.
The second question has three possible answers: (a) to zero, (b) to the
corresponding Seed and Lock spinner, or (c) the corresponding coordinate from
the tracker assigned as the Target Point. Answer (c) is automatically selected if a
target point is present, while (a) is selected for On lock types, and (b) for Any
lock types. Use the Any modes when you have some particular coordinates you
want to lock a tracker to, for example, if a tracker is to be placed 2 units above
the ground plane.


Watch Out! If you select several trackers, some with targets, some
without, the lock type list will be empty. Either select fewer trackers, or
right-click the Target button to clear the target tracker setting from all
selected trackers.

Here's the total list:

Lock Mode         Axes Locked    To What
Unconstrained     None           Nothing
Lock Point        X, Y, Z        Spinners
Origin            X, Y, Z        Zero
On X Axis         Y, Z           Zero
On Y Axis         X, Z           Zero
On Z Axis         X, Y           Zero
On XY Plane       Z              Zero
On XZ Plane       Y              Zero
On YZ Plane       X              Zero
Any X Axis        Y, Z           Spinners
Any Y Axis        X, Z           Spinners
Any Z Axis        X, Y           Spinners
Any XY Plane      Z              Spinners
Any XZ Plane      Y              Spinners
Any YZ Plane      X              Spinners
Identical Place   X, Y, Z        Target
|| X Axis         Y, Z           Target
|| Y Axis         X, Z           Target
|| Z Axis         X, Y           Target
Same XY Plane     Z              Target
Same XZ Plane     Y              Target
Same YZ Plane     X              Target

Configuring Constraints for Tripod-Mode Shots


When the camera is configured in tripod mode, a simpler coordinate-
system setup can be used. In tripod mode, no overall sizing is required, and no
origin is required or allowed. The calculated scene must only be aligned, though
even that is not always necessary.
The simplest tripod alignment scheme relies on finding two trackers on the
horizon, or at least two that you'd like to make the horizon. Of the two, you assign
one to be the X axis, say, by setting it up as a Lock to the coordinates X=100,
Y=0, Z=0, for the normal World Size of 100. If the world size was 250, the lock
point would be 250, 0, 0: a Far tracker should always be locked to coordinates
where X squared plus Y squared plus Z squared equals the world size squared.
This is not necessary for the constraint to work correctly, but it is necessary for
the constraint to be displayed correctly.
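Here is a small sketch (ours, not SynthEyes code) of how such lock coordinates can be computed: take the approximate direction to the far tracker and scale it so its length equals the world size:

    import math

    def far_lock_coordinates(direction_xyz, world_size):
        # Scale the direction vector so x^2 + y^2 + z^2 equals world_size squared.
        length = math.sqrt(sum(c * c for c in direction_xyz))
        return tuple(world_size * c / length for c in direction_xyz)

    # A horizon point roughly along +X, slightly to the left, with a world size of 250:
    print(far_lock_coordinates((1.0, 0.2, 0.0), 250))   # about (245.1, 49.0, 0.0)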
With one axis nailed down, the other tracker only needs to be labeled On
XY plane, say (or XZ in Y-Up coordinates).
If you have an estimate for the field of view, you can preposition the
camera, then right-click the Nudge Tool on the Coordinate System panel to
create seed/lock coordinates that line up exactly with the 2D tracker position on
the current frame. Two such trackers will constrain the initial camera orientation.
Tip: if you have a tripod shot that pans a large angle, 120 degrees or
more, small systematic errors in the camera, lens, and tracking can accumulate
to cause a banana-shaped path. To avoid this, set up a succession of trackers
along the horizon or another straight line, and peg them in place, or use a roll-
axis lock.
Constrained Points View
After you have set up your constraints, you should check your work using
the Constrained Points viewport layout, as shown here:

This is the view with the recommended (front/back-variant) constraint
setup in Z-Up coordinates, as applied to a typical shot, after solving. Only
trackers with constraints are listed, along with what they are locked to
(coordinates or another tracker).
The Axes column shows what axes are locked, among X, Y, Z, or
D (distance). If an asterisk is present, the lock is to a tracker on a different
camera/object. If the letter m is present, there is a link to a different object, and
one or the other is a moving object, which is unusual and bears investigation.
The distance column has a value only if a distance constraint is present.
The solved position is shown, along with the 3-D error of the constraint.
For example, if a tracker is located at (1,0,0) but is locked to (0,0,0), the 3-D error
will be 1. It will have a completely different 2-D error in hpix on the coordinate
system panel.

Hint: You can adjust the width of the columns by dragging the gutters,
located immediately to the left of the column headers (Tracker, Locked
To, Distance, etc), which extend the entire height of the viewport.

Hint: Change the constrained-points view sort order by clicking on
Tracker or Error on the header line.

The constrained points view lets you check your constraints after solving,
giving you the resulting 3-D errors, or check your setup before solving, without
any error available yet. You can select the trackers directly from this view and
tweak them with the coordinate system panel displayed.
Upside-down Cameras: Selecting the Desired Solution
Many coordinate system setups can be satisfied in two or more different
ways: completely different camera and tracker positions that are camera
matches, and satisfy the constraints.
To review, the most basic 3-point setup consists of a point locked to the
origin and a point locked to a specific coordinate on the X axis, plus a third point
locked to be somewhere on the ground plane (XY plane for Z-up). This setup
can be satisfied two different ways. If you start from one solution, you can get the
other by rotating the entire scene 180 degrees around the X axis. If the camera is
upright in the first solution, it will be upside-down in the second. The third point
will have a Y coordinate that is positive in one, and negative in the other (for Z-
Up coordinates).
If you take this basic 3-point setup, and change the setup of the second
point from a lock to (20,0,0) to On X Axis, and add a separate distance (scale)
constraint, there are now four different possible solutions: the different
combinations of the second point's X being positive or negative, and the third
point's Y coordinate being positive or negative.
SynthEyes offers two ways to control which solution is used. Without
specific instructions, SynthEyes uses the solution where the camera is upright,
not upside-down. That handles the most common case, but if you need the
camera upside-down, or have a setup with four solutions, you need to be more
specific.
SynthEyes lets you specify whether a coordinate should be positive or
negative (a polarity), for each coordinate of each constrained tracker. The
Coordinate System Control panel has buttons next to the X, Y, and Z spinners.
The X button, for example, sequences from X to X+ to X-, meaning that X can
have either polarity, that X must be positive, or that X must be negative.
If there are two solutions, you should set up a polarity for an axis of one
point; if there are four solutions, set the polarity for one axis of two points. For
example, set a polarity for Y of the 3rd tracker, and an X polarity for the 2nd (on-
axis) tracker.
Subtleties and Pitfalls
The locks between two trackers are inherently bidirectional. If you lock A
to B, do not lock B to A. Similarly, avoid loops, such as locking A to B, B to C,
and C to A.
If you want to lock A, B, C, and D all to be on the same ground plane with
the same height, say, it is enough to lock B, C, and D all to A.
When you choose coordinates, you should keep the scene near the origin.
If your scene is 2000 units across, but it is located 100000 units from the origin, it
will be inconvenient to work with, and runs the risk of numeric inaccuracy. This
can happen after importing scene coordinates based on GPS readings. You can
use the Track/Shift Constraints tool to offset the scene back towards the origin.
Similarly, you should avoid scenes that are too small, for example where
all the numbers are 0.00000123, 0.000001245, etc. These numbers will all be
truncated to zeros by many other programs, destroying your scene.
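As a hedged illustration (plain Python, not SynthEyes; the six- and four-decimal formats are just assumptions about what a receiving program might write), here is what happens to such tiny coordinates when they pass through a file format with a fixed number of decimal places:

    coords = [0.00000123, 0.000001245]

    # A format writing six decimal places collapses both values to the same number...
    print(["%.6f" % c for c in coords])   # ['0.000001', '0.000001']

    # ...and one writing four decimal places turns them into zeros outright.
    print(["%.4f" % c for c in coords])   # ['0.0000', '0.0000']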

Alignment Versus Constraints


With a small, well-chosen set of constraints, there will be no conflict
among them: they can all be satisfied, no matter the details of the point
coordinates. This is the case for the 3-tracker recommended method.
However, this is not necessarily the case: you could assign two different
points to both be the origin. Depending on their relative positions, this may be
fine, or a mistake.
SynthEyes has two main ways to approach such conflicts: treating the
coordinate system constraints as suggestions, or as requirements, as controlled
by the Constrain checkbox on the Solver control panel.
For a more useful example, consider a collection of trackers for features
on the set floor. You can apply constraints telling SynthEyes that the trackers
should be on the floor, but some may be paint spots, and some may be pieces of
trash a small distance above the floor.
With the Constrain box off, SynthEyes solves the scene, ignoring the
constraints, then applies them at the end, only by shifting, rotating, and scaling
the scene. In the example of trackers on a floor, the trackers are brought onto an
average floor plane, without affecting their relative positions. The model is
fundamentally not changed by the constraints.
On the other hand, with the Constrain checkbox on, the constraints are
applied to each individual tracker during the solving process. Applied to trackers
on a floor, the vertical coordinate will be driven towards zero for each and every
such tracker, possibly causing internal conflict within the solving process.
If you have tracked 3 shadows on the floor, and the center of one tennis
ball sitting on the floor, you have a problem. The shadows really are on the floor,
but the ball is above it. If all four height values are crunched towards zero, they
will be in conflict with the image-based tracking data, which will be attempting to
place the tennis ball above the shadows.
You can add poorly chosen locks, or so many locks, that solving becomes
slower, due to additional iterations required, and may even make solving
impossible, especially with lens distortion or poor tracking. By definition, there will
always be larger apparent errors as you add more locks, because you are telling
SynthEyes that a tracker is in the wrong place. Not only are the tracker positions
affected, but the camera path and field of view are affected, trying to satisfy the
constraints. So don't add locks unless they are really necessary.
Generally, it will be safer to leave the Constrain checkbox off, so that
solving is not compromised by incorrectly configured constraints. You will want to
turn the checkbox on when using multiple-shot setups with the Indirectly solving
method, or if you are working from extensive on-set measurements. It must be
on to match a single frame.
Pegged Constraints
With the constraints checkbox on, SynthEyes attempts to force the
coordinate values to the desired values. It can sometimes be helpful to force the
coordinates to be exactly the specified value, by turning on the Peg button on
the tracker's Coordinate System panel.
Pegs are useful if you have a pre-existing scene model that must be
matched exactly, for example, from an architectural blueprint, a laser-rangefinder
scan, or from global positioning system (GPS) coordinates. Pegging GPS
coordinates is especially useful in long highway construction shots, where overall
survey accuracy must be maintained over the duration of the shot.
Pegs are active only when the Constrain checkbox is on, and you can only
peg to numeric coordinates or to a tracker on a different camera/object, if the
tracker's camera/object is Indirectly solved. You can not peg to a tracker on the
same camera/object; such a peg will be silently ignored.
The 3-D error will be zero when you look at a pegged tracker in the
Constrained Points view. However, the error on the coordinate system or tracking
panel, as measured in horizontal pixels, will be larger! That is because the peg
has forced the point to be at a location different than what the image data would
suggest.
Constrain Mode Limitations and Workflow
The constrain mode has an important limitation while initially solving a
shot in Automatic solving mode: enough constrained points must be visible on
the solving panel's Begin and End frames to fully constrain the shot in position
and orientation. It can not start solving the scene and align it with something it
can not see yet; that's impossible!
SynthEyes tries to pick Begin and End frames where the constrained
points are simultaneously visible, but often that's just not possible when a long
shot moves through an environment, such as driving down a road. The error
message "Can't locate satisfactory initial frames" will be produced, and solving
will stop.
In such cases, the Constrain mode (checkbox) must be turned off on the
solving panel, and a solution will easily be produced, since the alignment will be
performed on the completed 3-D tracker positions.
You can now switch to the Refine solving mode, turn on the Constrain
checkbox, and have your constraints and pegs enforced rigorously. As long as
the constraints aren't seriously erroneous, this refine stage should be quick and
reliable.
Here's a workflow for complex shots with measured coordinates to be
matched:
1. Do the 2-D tracking (supervised or automatic)
2. Set up your constraints (if you have a lot of coordinates, you can
read them from a file).
3. Do an initial solve, with Constrain off.
4. Examine the tracker graphs, assess and refine the tracking
5. Examine the constrained points view to look for gross errors
between the calculated and measured 3-D locations, which are
usually typos, or associating the 3-D data with the wrong 2-D
tracker. Correct as necessary.
6. Change the solver to Refine mode
7. Turn on the Constrain checkbox
8. Solve again, verify that it was successful.
9. Turn on the Peg mode for tracker constraints that must be achieved
exactly.
10. Solve again
11. Final checks that pegs are pegged, etc.
With this approach, you can use Constrain mode even when constrained
trackers are few and far between, and you get a chance to examine the tracking
errors (in step 4) before your constraints have had a chance to affect the solution
(ie possibly messing it up, making it harder to separate bad tracking from bad
constraints.)
Note: if you have survey data that you are matching to a single frame, you
must use Seed Points mode, must make each tracker a seed, and you must turn
on Constrain.

Tripod and Lock-off Shot Alignment


Tripod-mode shots present special issues for alignment, since by their
nature, a full 3-D solution is not available. Tripod shot tracking provides the pan,
tilt, and roll of the camera versus time, and the direction to the trackers, but not
the distance to the trackers. So if you need to place objects in the shot in 3-D, it
can be difficult to know where to place them. The good news is that wherever
you put them, they will stick, so the primary concern is to locate items so that
they match the perspective of the shot.
There are a number of methods available:
the single-frame alignment subsystem on the lens panel, which will be
described in detail in this section;
the mesh Pinning Tool described in the Geometric Hierarchy Tracking
manual, for when you have a mesh model for some object (perhaps a
box or cylinder, but potentially a more complex imported model) that
appears in the scene;
using the known 3-D coordinates (ie from a survey) of many features
visible in the scene, which turns the tripod/lock-off shot into a regular 3-
D solve, and accordingly won't be described further here.
Single Frame Alignment
SynthEyes's Lens Control Panel contains a perspective-matching tool to
help, with the requirement that your shot contain several straight lines.
Depending on the situation, two or more lines must be parallel. Here's an
example (we'll tell how to set it up in a later section):

There are parallel lines under the eaves and window, configured to be
parallel to the X axis. Vertical (Z) lines delineate edges of the house and door
frame. The selected line by the door has been given a length to set the overall
scale.
The alignment tool gives you camera placements and FOV for completely
locked-off shots, even a single still photograph such as this.
What Lines Do I Need?
The line alignment solver can be used after a shot has been solved and a
lens field of view (FOV) determined; it might be used without a solve, with a
known FOV; or it might be used to determine the lens FOV. In each case it will
determine the camera placement as well.
If the FOV is known, either from a solve or an on-set measurement, you
will need to set up at least two lines, which must be parallel to two different
coordinate axes in 3-D (X, Y, or Z). This means they must not be parallel to each
other (because then they would be parallel to the same axis). You may have any
number of additional lines.
When the FOV is not known, you must define at least three lines. Two of
them must be parallel to each other and to a coordinate-system axis. The third
line must be parallel to a different coordinate system axis. You may have
additional lines parallel to any of the three coordinate system axes.
Note: SynthEyes permits unoriented lines to be used to help find the lens
distortion. Unoriented lines do not have to be aligned with any of the desired
coordinate system axes, but do not count at all towards the count of lines
required for alignment.
Whether the FOV is known to start or not, two of the lines on different
axes must be labeled as on-axis, meaning that the scene will be moved around
until those lines fall along the respective axis. For example, you might label one
line as On X Axis and another as On Y Axis. If you do not have enough on-axis
lines, SynthEyes will assign some automatically, though you should review those
choices.
The intersection of the on-axis lines will be the origin of the coordinate
system. In the example above, the origin will be at the bottom-right corner of the
left-most of the two horizontal windows above the door. As with tracker-based
coordinate system setup, there is no single correct assignment; the choice is up to
you to suit the task at hand.
To maximize accuracy, parallel lines should be spread out from one
another: two parallel lines that are right next to each other do not add much
independent information. If you bunch all the lines on a small object in a corner of
the image, you are unlikely to get any usable results. We can not save you from
a bad plan!
It is better if the lines are spread out, with parallel lines on opposing sides
of the image, and even better if they are not parallel to one another in the image.
For example, the classic image of railroad tracks converging at the horizon
provides plenty of information.
Also, be alert for situations where lines appear to be parallel or
perpendicular, but really are not. For example, wooden sets may not really be
geometrically accurate, as that is not normally a concern (they might even have
forced perspective by design!). Skyscrapers may have slight tapers in them for
structural reasons. The ground is usually not perfectly flat. Resist the temptation
to "eyeball" some lines into a shot whenever possible. Though plenty of things
are pretty parallel or perpendicular, keep in mind that SynthEyes is using exact
geometry to determine camera placement, so if the lines are not truly right, the
camera will come out in a different location because of it.
Operating the Panel
To use the single-frame alignment system, switch to the Lens Control
panel. Alignment lines are displayed only when this panel is open.
Go to a frame in your sequence that nicely shows the lines you plan to use
for alignment. All the lines must be present on this single frame, and this frame
number will be recorded in the At nnnf button at the lower-left of the lens panel.
You can later return to this frame just by clicking the button. If you later play with
some lines on a different frame, and need to change the recorded frame number,
right-click the button to set the frame number to the current frame.
Click on the Add Line button, then click, drag, and release in the camera
view to create a line in the image. When you release, a menu will appear,
allowing you to select the desired type of line: plain, parallel to one of the
coordinate axes, on one of the coordinate axes, or on an axis, with the length
specified. Specify the type desired, then continue adding lines as needed. Be
sure you check your current coordinate-axis setting in SynthEyes (Z-Up, Y-Up, or
Y-Up-Left), so that you can assign the line types correctly. You should make the
lines as long as possible to improve accuracy, as long as the image allows you to
place them accurately.
Lines that are on an axis must be drawn in the correct direction: from
the negative coordinate values to the positive coordinate values. For example,
with SynthEyes in Z-Up coordinate mode, a line specified as On Z Axis should
be drawn in the direction from below ground to above ground. There will be an
arrow at the above-ground end, and it should point upwards. But don't worry if
you get it wrong; you can click the swap-end button <-> to fix it instantly.
It does not matter in what direction you draw lines that are merely parallel
to an axis, not on it. The arrowhead is not drawn for lines parallel to the axis.
To control the overall sizing of the scene, you can designate a single on-
axis line to have a length. Again, this line must be on an axis, not merely parallel
to it. After creating the line, select one of the on-axis with length types. This will
activate the Length spinner, and you can dial in the desired length.
Before continuing to the solution, be sure to quickly zoom in on each of
the alignment lines' endpoints, to make sure they are placed as accurately as
possible. (Zooming in on the middle of a line will tell you if you need to engage the lens
distortion controls, which will complicate your workflow.) You can move either
endpoint or the whole line, and adjust the line type at any time.
After you have completed setting up the alignment lines, click the Align!
button. SynthEyes will calculate the camera position relative to the origin you
have specified, and if the scene is not already solved and parallel lines are
available, SynthEyes will also calculate the field of view.
A total alignment error will be listed on the status line at the bottom of the
SynthEyes window. The alignment error is measured in root-mean-square
horizontal pixels like the regular solver. A value of a pixel or two is typical. If you
do not have a good configuration of lines, an error of hundreds of pixels could
result, and you must re-think.
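If you are curious what "root-mean-square horizontal pixels" means in practice, here is a small sketch of the calculation (plain Python; the per-line error values are made up purely for illustration):

    import math

    # Hypothetical residual error of each alignment line, in horizontal pixels
    line_errors_hpix = [0.8, 1.3, 0.5, 2.1]

    rms = math.sqrt(sum(e * e for e in line_errors_hpix) / len(line_errors_hpix))
    print(round(rms, 2))   # about 1.32

Because the errors are squared before averaging, a single badly-placed line drives the total up quickly, which is why a mis-configured setup can jump to hundreds of pixels.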
SynthEyes will take the calculated alignment and apply it to an existing
solution, such that the camera and origin are at their computed locations on the
frame of reference (indicated in the At nnnf button).
Suppose you are working on, and have solved, a 100-frame tripod-mode shot.
You have built the alignment lines on frame 30. When you click Align!, SynthEyes
will alter the entire path, frames 0-99, so that the camera is in exactly the right
location on frame 30, without messing up the camera match before or after the
frame.
Meshes will be affected by the alignment. To keep them stationary, so that
they can be used as references, turn off Whole affects meshes on the 3-D
viewport or perspective-view's right-click menus.
You should switch to the Quad view and create an object or two to verify
that the solution is correct.
If the quality of the alignment lines you have specified is marginal, you
may find SynthEyes does not immediately find the right solution. To try
alternatives, control-click the Align! button. SynthEyes will give you the best
solution, then allow you to click through to try all the other (successively worse)
solutions. If your lines are only slightly off-kilter, you may find that the correct
solution is the second or maybe third one, with only a slightly higher RMS error.
Advanced Uses and Limitations
Since the single-frame alignment system is pretty simple to understand
and use, you might be tempted to use it all the time, even to align regular full
3-D camera tracking shots as well. And in fact, as its use on tripod-mode shots
suggests, we have made it usable on regular moving camera and even moving-
object shots, which are an even more tempting use.
But even though it works fine, it probably is not going to turn out the way
you expect, or be a usable routine alternative to tracker constraints for 3-D shots.
First, there's the accuracy issue. A regular 3-D moving-camera shot is
based on hundreds of trackers over hundreds of frames, yielding many hundreds
of thousands of data points. By contrast, a line alignment is based on maybe ten
lines, hand-placed into one frame. There is no way whatsoever for the line-based
alignment to be as accurate as the tracker solutions. This is not a bug, or an
issue to be corrected next week. Garbage in, garbage out.
Consequently, after your line-based alignment, the camera will be at one
location relative to the origin, but the trackers will be in a different (more correct)
position relative to the camera. So the trackers will not be located at the origin
as you might expect. Since the trackers are the things that are locked properly to
the image, if you place objects as you expect into the alignment-determined
coordinate system, they will not stick in the image, unless you tweak the
inserted objects' position to make them match better to the trackers, not the
aligned coordinate system.
Second, there is the size issue. When you set up the size of the alignment
coordinate system, it will position the camera properly. But it will have nothing to say
about the size of the cloud of trackers. You can have the scene aligned nicely for
a 6-foot-tall actor, but the cloud of trackers is unaffected, and still corresponds to
30-foot giants. To have any hope of success using alignment with 3-D solves,
you must still be sure to have at least a distance constraint on the trackers. This
is even more the case with moving-object shots, where the independent sizing of
the camera and object must be considered, as well as that of the alignment lines.
The whole reason that the alignment system works easily for tripod and
lock-off shots is that there is no size and no depth information, so the issue is
moot for those shots.
To summarize, the single-frame alignment subsystem is capable of
operating on moving-camera and moving-object shots, but this is useful only for
experts, and probably is not even a good idea for them. If you send us your
scene file at tech support looking for help on that, we are going to tell you not to
do it and to use tracker constraints instead, end of story.
But, you should find the alignment subsystem very useful indeed for your
tripod-mode and lock-off shots!

Using 3-D Survey Data


Sometimes you may be supplied with exact 3-D coordinates for a number
of features in the shot, as a result of hand measurements, laser (lidar) scans, or
GPS data for large outdoor scenes. You may also be supplied with a few ruler
measurements, which you can apply as size constraints; we won't discuss that
further here, but will focus on some aspects of handling 3-D coordinates. The full
details continue in following sections.

Important: If the shot contains a number of unordered frames, rather
than a video shot, be sure to use SynthEyes's Survey Shot capability.
SynthEyes normally assumes that each frame is only slightly changed
from the previous frame, as in movies. If that assumption is not true, ie
for a collection of random stills, Survey Shot mode is required. This is
different than the setup of the XYZ coordinate data discussed here.

First, given a lot of 3-D coordinates, it can be convenient to read them in
automatically from a text file; see the manual's section on importing points, which
will help you specify tracker locations.
You can also use File/Import/Mesh to read in tables of XYZ data as lidar
files (.XYZ) to produce mesh objects; they can include per-vertex RGB
information as well. You can snap trackers to those vertices, or add facets using
the perspective view mesh edit capabilities. See the Lidar Meshes section for
important information about reading lidar data.
You can use the Place mode of the Perspective view to lock trackers onto
an imported mesh or lidar file, see Placing Seed Points and Objects.
SynthEyes gives you several options for how seriously the coordinate data
is going to be believed. Any 3-D data taken by hand with a measuring tape for an
entire room should be taken as a suggestion at best. At the other end of the
spectrum, coordinates from a 3-D model used to mill the object being tracked, or
laser-surveyed highway coordinates, ought to be interpreted literally.
Trackers with 3-D coordinates, entered manually or electronically, will be
set up as Lock Points on the Coordinate System panel, so that X, Y, and Z will be
matched. Trackers with very exact data will also be configured as Pegs, as
described later.
If the 3-D coordinates are measured from a 2-D map (for a highway or
architectural project), elevation data may not be available. You should configure
such trackers as Any Z (Z-up coordinates) or Any Y (Y-up coordinates), so that
the XY or XZ coordinates will be matched, and the elevation allowed to float.
If most of your trackers have 3-D coordinates available to start (six or
more per frame), you can use Seed Points solving mode on the Solver control
panel. Turn on the coordinate system panel's Seed button for the trackers with 3-
D coordinates. This will give a quick and reliable start to solving. You must use
Seed Points and Constrain modes on the solver panel if you are matching a
single frame from survey data.
You can use the Nudge Tool (spinner) on the Coordinate System panel to
adjust seed locations along the depth axis, towards and away from the camera.
Right-clicking the Nudge tool will snap the seed point onto the line of sight of the
2D tracker position on the current frame. (This works for Far trackers also.) If you
have visions of doing this from multiple frames, check out ZWTs instead, which
will be faster and more accurate.
For more information on how to configure SynthEyes for your survey data,
be sure to check the section on Alignment vs Constraints.

Constraining Camera Position and Motion


Sometimes you may already know some or all of the path of the camera,
for example,
it may be available from a motion controlled camera,
the camera motion may be mechanically constrained by a camera
dolly,
you may have measured some on-set camera data to determine
overall scene sizing, or
you may have already solved the camera path, then hand-edited it for
cleanup, or
the shot may have little perspective, and you wish to edit the path to
create a smooth non-jittering depth curve.
SynthEyes lets you take advantage of this information to improve a
solution or help set up the coordinate system, using the trajectory lock controls at
the bottom of the Solver Control Panel, the Hard and Soft Lock Control dialog,
and the cameras seed path information.

Warning: using camera position, orientation, and field of view locks is
a very advanced topic. You need to thoroughly understand
SynthEyes and the coordinate system setup process, and have
excellent mental visualization skills, before you are ready to consider
camera locks. Under no circumstances should they be considered
a way to compensate for inadequate basic tracking skills!!!

Concept and Terminology


SynthEyes allows you to create locks on path (X, Y, and/or Z translation),
rotation (pan, tilt, and roll), overall distance from the camera to the origin or
object to the camera, and field of view. You can lock one or more channels, and
locks are animated, so they might apply to an entire shot, a range of frames, or
one frame.
Each path lock forces the camera to, or towards, the camera's seed path.
The seed path is what you see before solving, or after clearing the solution. You
can see the seed path at any time using the View/Show seed path menu
control, or the button on the Hard and Soft Lock Control dialog.
Similarly the field of view lock forces the field of view towards the seed
field of view track, visible before the first solve or by using Show Seed Path.
The overall distance constraint forces the camera/origin or object/camera
distance towards an (animatable) value set from the Solver Locking panel.
Locks may be hard or soft. Hard locks force the camera to the specified
values exactly (except if Constrain is off), similar to pegged trackers. Soft locks
force the camera towards the specified value, but with a strength determined by
a weight value.
Locks are affected by the Constrain checkbox on the solver panel, similar
to what happens with trackers. With Constrain off, locks are applied after solving,
and do not warp the solution. All soft locks are treated as hard locks. With
Constrain on, locks are applied before and during solving, soft locks are treated
as such, and locks do warp the solution. Field of view locks are not affected by
Constrain, and are always applied.

Note: there are no hard overall distance constraints, they are always
soft. The Constrain checkbox must be on for them to be effective; they
are ignored otherwise. They can not currently be used as the only
means to set scene size.

Camera position locks are more useful than orientation locks; we'll
consider position locks separately to start with.
You can also constrain objects, but this is even more complex. A separate
subsystem, the stereo geometry panel, handles camera/camera constraints in
stereo shots.
Basic Operation
Set up generally proceeds as follows:
1. Go to the Solver Control Panel. Click the more button to bring up the
Hard and Soft Lock Control panel. Or, select Solver Locking from the
Window menu.
2. Turn on Show Seed Path.
3. Position and animate the camera as desired, creating a key on each
frame where you want the position to be constrained. The Get buttons
can help with this, by pulling positions from any existing solved path.
4. Turn on the L/R, F/B, and/or U/D buttons as appropriate depending on
the axes to be constrained; these stand for left/right, front/back, and
up/down, respectively.
5. Adjust the Constrain checkbox as needed. The camera position
constraints behave similarly to constraints on the trackers: if the
Constrain checkbox is on, they are enforced exactly during the solve,
but if the Constrain checkbox is off, they are enforced only loosely after
the completion of the solve. Loosely means that they are satisfied as
best as can be, without modifying the trajectory or overall RMS error of
the solution.
The result of this process is to make the camera match the X, Y, and/or Z
coordinates of the seed path at each key. This basic setup can be used to
accomplish a variety of effects, as described above and covered in more detail
below. At the end of the section, we'll show some even more exotic possibilities.

Hint: you can use Synthia or the graph editor's clipboard as additional
ways to move keys from the solved camera path to the seed camera
path.

Using a Camera Height Measurement


Suppose the camera is sitting on a moving dolly in a studio, and you
measured the camera (lens's) height above the floor, and you have some
trackers that are (exactly) on that floor. You can use the height measurement to
set up the scene size as follows:
1. Show the seed path: View/Show Seed Path menu item
2. At frame 0, position the camera at the desired height above the
ground plane: 2 meters, 48 inches, whatever.
3. Turn on the U/D button on frame 0, turn it back off at frame 1.
4. Set up a main coordinate system using 3 or more trackers on the
floor. Make sure not to create a size constraint in the process: if
using the *3 button on the Coordinate system panel or the Coord
button on the Summary panel, select the 2nd (on-axis) tracker, and
in the Coordinate panel, change it from Lock Point (at 20,0,0) to On
X Axis or On Y Axis.
5. Solve with Go! on the Solver panel


Note that you can use whatever more complex setup you like in step 4, as
long as it completely constrains both the translation and rotation, but not the size.
WARNING: You might be tempted to think "Hmmm, the camera is on a
dolly, so the entire path must be exactly 43 inches off the floor, let me set that
up!" (by not turning U/D back off). But this is almost always a bad idea! The
obvious problem is that the dolly track is never really completely flat and free of
bumps. If the vertical field of view is 2 meters, and you are shooting 1080i/p
HDTV, then your track must be flat to roughly 1 millimeter or so to
have only a sub-pixel impact. If your track is that flat, congratulations.
The conceptually more subtle, but bigger impact problem is this: a normal
tripod head puts the camera lens very far from the center of rotation of the
head, roughly 1 foot or 0.25 meter. As you tilt the head, the position of the
camera increases and decreases up to that much in height! Unless your camera
does not tilt during the shot, or you have an extra-special nodal-pan head, the
camera height will change dramatically during the shot.
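Here is a back-of-the-envelope sketch of both effects (plain Python; the numbers are the assumed values from the discussion above, and the tilt angle is purely hypothetical):

    import math

    # How flat must the track be for a sub-pixel effect?
    vertical_coverage_m = 2.0      # vertical field of view at the subject, in meters
    image_height_px = 1080         # 1080i/p HDTV
    print(vertical_coverage_m / image_height_px * 1000)   # ~1.85 mm per pixel

    # How much does a non-nodal head raise or lower the lens as it tilts?
    lens_offset_m = 0.25           # lens to center of rotation, ~1 foot
    tilt_degrees = 20              # hypothetical tilt during the shot
    print(lens_offset_m * math.sin(math.radians(tilt_degrees)))   # ~0.085 m

A couple of millimeters of track sag is already visible at the pixel level, while even a modest tilt moves the lens by several centimeters, far more than any dolly-track bump.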
A Straight Dolly Track Setup
If your camera rides a straight dolly track, you can use the length of that
track to set the scale, and almost the entire coordinate system if desired. While
the camera height measurement setup discussed above is simpler, it is
appropriate mainly for a studio environment with a flat floor. The dolly track setup
here is useful when a dolly track is set up outdoors in an environment with no
clearly-defined ground plane (in front of a hillside, say).
For this setup, you should measure the distance traveled by the camera
head down the track, by a consistent point on the camera or tripod. For example,
if you have a 20-foot track, the camera might travel only 16 feet or so because there will
be a 2-foot dead zone at each end due to the width of the tripod and dolly. Measure
the starting/ending position of the right front wheel, say.
Next, clear any solved path (or click View/Show seed path), and animate
the camera motion, for example moving from 0,0,0 at the beginning of the shot to
16,0,0 at the end (or wherever it reaches the maximum, if it comes back).
You now have two main options: A) mostly tracker-based coordinate
setup, or B) mostly dolly-based coordinate setup, for side-of-hillside shots.
For setup A, turn on only the L/R camera axis constraint checkbox on the
first and last frames (only). The X values you have set up for the camera have
set up an X positioning for the scene, so when you set up constraints on the
trackers, they should constrain rotation fully, plus the front/back and up/down
directions, but not the L/R direction since that would duplicate and conflict with
the camera constraint (unless you are careful and lucky).
For setup B, turn on L/R, F/B, and U/D on the first and last frames (only).
You should take some more care in deciding exactly what coordinate values you
want to use for each axis of the animated camera path, because those will be
defining the coordinate system. [By setting keys only at the beginning and end of
the shot, you largely avoid problems with the camera tilting up and down; at
most it tilts the overall coordinate system from end to end, without causing
conflicting constraints.]
If the track is not level from end to end, you can adjust the beginning or
ending height coordinate of the tracker as appropriate. But usually we expect the
track to have been leveled from end to end.
With X, Y, and Z coordinates keyed at the beginning and end of the shot,
you have already completely constrained translation and scale, and have
constrained 2 of the 3 rotation axes. The only remaining unconstrained rotation
axis is a rotation around the dolly.
To constrain this remaining rotation requires only a single additional
tracker, and only its height measurement! On the set, you should measure the
relative height of a trackable feature compared to the track (usually this will be to
the base of the track, so you should also measure the height of the camera
versus the base). You can measure this height using a level line (a string and a
clip-on bubble level) and a ruler.
On the Coordinate System Control Panel, select the tracker and set it to
Any XY Plane and set the Z coordinate (for Z-up mode), or select Any XZ Plane
and set the Y coordinate (for Y-up mode).
Now you're ready to go! This setup is a valuable one for outdoor shots
where a true vertical reference is required, but the features being tracked are not
structured (rocks, bushes, etc).
Again, we recommend not trying to constrain the camera to be exactly
linear, though you can easily set this up by locking Y and Z to be fixed for the
duration of the shot, with single-frame locks on X at the beginning and end of the
shot. This setup forces the camera motion to be exactly straight, but moving in an
unknown fashion in X. Although the motion will be constrained, the setup will not
allow you to use fewer trackers for the solve.
Using a Supplied Camera Path
This section addresses the case where you have been supplied with an
existing camera translation path, either from a motion-controlled camera rig, or
as a result of hand-editing a previous camera solution, which can be useful in
marginal tracks where you have a good idea what the desired camera motion is.
After editing the path, you want to find the best orientation data for the given
path.
If you have an existing camera path in an external application (either from
a rig, or after editing in Maya or Max, for example), typically you will import it
using Filmbox or a standard or custom camera-path import script. Be sure that
the solved camera path is cleared first, so that the seed path is loaded.
If you have a solved camera path in SynthEyes, you can edit it directly.
First, select the camera, and hit the Blast button on the 3-D panel. This transfers
the path data from the solved path store into the seed path store. Clear the
solved path and edit the seed path.
Rewind and turn on all 3 camera axis locks: L/R, F/B, and U/D.
Next, configure the solver's seeding method. This requires some care.
You can use the Path Seeding method only if your existing path includes
correct orientation and field of view data. Otherwise, you can use the
Automatic method or maybe Seed Points. The Refine mode is not an option
since you have already cleared the solution to load the seed path, and don't have
orientation data anyway (or you'd use Path Seeding).
You can use Seed Points mode if you are editing the path in SynthEyes,
but be sure to hit the Set All button on the Coordinate System Setup Control
panel before clearing the prior solution, so that the points are set up properly as
seeds. You should probably not make them locks, unless you are confident of
the positions already.
With the camera path locked to a complex path (other than a straight line),
no further coordinate system setup is required, or it will be redundant.
You can solve the scene first with the Constrain checkbox off, then switch
to Refine mode, turn on Constrain, and solve again. This will make it apparent
during the second solve whether or not you have any problems in your constraint
setup, instead of having a solution fail unexpectedly due to conflicting constraints
the first time.
Camera-based Coordinate System Setup
The camera axis constraints can be used in small doses to set up the
coordinate system, as we've seen in the prior sections. Typically you will want to
use only 1 or 2 keys on the seed path; 3 or more keys will usually heavily
constrain the path and require exact knowledge of the camera move timing.
Roughly, each keyed frame is equivalent in effect to a constrained tracker
located at the same spot. You should keep that in mind as you plan your setup,
to avoid under- or over-constraining the coordinate system.
Soft Locks
So far we have described hard locks, which force the camera exactly to
the specified values. Soft locks pull more gently on the camera path, for example,
to add stability to a section of the track with marginal tracking. In either case, for
a lock to be active, the corresponding lock button (U/D, L/R, pan, etc) must be
on.
The weight values on the Hard and Soft Lock dialog control whether
locks are hard or soft. If the weight is zero (the default), it is a hard lock. A non-
zero weight value specifies a soft lock.
Weight values range from 1 to 120, with 60 a nominal neutral value.
However, we recommend that when creating soft locks, you start with a weight of
10, and work upwards through 20, 30, etc until the desired effect is obtained.
Weight values are in decibels, a logarithmic scale where an increase of 20
decibels is an increase by a factor of 10, and 6 decibels is a factor of two. So 40
is 10 times stronger than 20, 40 is about 32 times stronger than 10, and 26 is twice as
strong as a weight of 20. (Decibels are commonly used for sound level
measurements.)
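If you want to see how two weight values compare numerically, here is a quick sketch of the decibel rule just described (plain Python; the helper name and sample weights are ours, not a SynthEyes function):

    def strength_ratio(weight_a, weight_b):
        # Relative pull of weight_a versus weight_b, following the dB rule above
        return 10 ** ((weight_a - weight_b) / 20.0)

    print(strength_ratio(40, 20))   # 10.0   (20 dB apart = 10x)
    print(strength_ratio(26, 20))   # ~2.0   (6 dB apart = roughly 2x)
    print(strength_ratio(40, 10))   # ~31.6  (30 dB apart)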
A lock can switch from hard to soft on a frame-by-frame basis, ie frames
0-9 can be hard, and 10-14 soft. You may need to key the weight track carefully
to avoid slow transitions from 20 down to 0, for example.
Soft locks are treated as such only when the Constrain check box is on: it
is the solver that distinguishes between hard and soft locks. If Constrain is off,
the locks will be applied during the final alignment, when they do not affect the
path at all, just re-orient it, so soft locks are treated the same as hard locks.
Note that the soft lock weight is not a path blending control. You might
naively be tempted to set up a nominal locked path, and try to animate the soft
lock weights expecting a smooth blend between the solved path and your
animated locked path. But that is not what will happen. The weight changes how
seriously SynthEyes takes your request that the camera should be located at the
specified position, but it will affect the tracker positions and everything else as
well.
Overall Distance Locks
When a shot contains little perspective, virtually all the error will be in
depth, ie the distance from the camera to the rest of the scene (or from an object
to its camera). Even the tiniest amount of jitter in the tracking data can produce a
large amount of jitter in the depth: this is not a program problem, but a reflection
of the poor quality of the available information.
SynthEyes allows you to set up a soft lock (only!) on this depth value, the
distance from the camera to origin, or object to camera, using the Solver Locking
panel. The Constrain checkbox must be on for these constraints to be effective!
You can animate this desired distance based on the measured distances
from an original track, using the Get 1f button for the Overall Distance, typically
on a small number of frames.
In this fashion, you can force the depth value to have a very smooth
trajectory, while allowing a dynamic performance in the other axes. This often
corresponds relatively well to what happens on the set.
You can configure and check this constraint using the Distance channels
of the Camera: the Seed Path Distance value shows the value being constrained
to, the Solved Path value shows the actual value (whether or not a distance
constraint is active), and the Solved Velocity Distance value shows the velocity of
the actual value. The velocity value is helpful in determining whether or not the
constraint has been fully effective; if the velocity is not smooth enough to
match the commanded value, you should increase the weight of the distance
constraint on the solver locking panel. Do that on the first frame, to avoid placing
weight keys in the middle of the shot (unless that is what you want).
Note that you must set up a coordinate system on the camera or object in
order to productively use these constraints, so that the origin is not changing on
each successive solving run. And depending on where the camera or object
origin falls, the distance curve may be simple and slowly changing, or if the origin
is too close to the camera or object the motion may be too dynamic and difficult
to work with.
While in theory an overall distance lock can be used to set the overall size
of the scene (with the Constrain checkbox off), that is not currently supported.
You must have other constraints to set the scene size, and you should make
sure those constraints do not conflict with your overall distance constraints.
Orientation Locks
You can apply Pan, Tilt, and Roll rotation locks as well as translational
locks. They can be used for path editing and, to a lesser extent, for coordinate
system setup.
For example, a roll-angle constraint can be used to keep the camera
forced upright. That can be handy on tripod shots with large pans: small amounts
of lens distortion can bend the path into a banana shape; the roll constraint can
flatten that back out.
If the camera looks in two different directions with the roll locked, it
constrains two degrees of freedom: only a single pan angle is undetermined! For
example, if it looks along the X axis then along the Y axis, both with roll=0. You
might want to think about that for a minute.
The perspective window's local-coordinate-system and path-relative
handles can help make specific adjustments to the camera path.
Inherently, SynthEyes is not susceptible to gimbal-lock problems.
However, when you have orientation locks, you are using pan, tilt, and roll axes
that do define overall north and south poles, and you may encounter some
problems if you are trying to lock the camera almost straight up or down. If this is
the case, you may want to change your coordinate system so those views are
along the +Y and -Y axes, for example.
Object Tracking
You can also use locks on rigid-body moving objects, in addition to
cameras. However, there are several restrictions on this, because moving
objects are solved relative to their hosting camera path, but the locks are world
coordinate system values.
If a moving object has path locks, then
1. the host camera must have been previously solved, pre-loaded, or pre-keyed,
and the camera solving mode set to Disabled,
2. the translation axis locks must either all be on, or all off, and
3. the rotation axis locks must either all be on, or all off.
Normally, when SynthEyes handles shots with a moving camera and
moving object, it solves camera and object simultaneously, optimizing them both
for the best overall solution. However, when object locks are present, SynthEyes
must be able to access the camera solution first, in order to be able to apply the
object locks.
With the camera path, SynthEyes changes the translation and rotation
axis lock values into a form usable for the object, but the individual axes are no
longer available, and either all must be constrained, or none. SynthEyes will
automatically turn all the enables on or off together if a moving object is active.
Object locks have very hard-to-think-about impacts on the local coordinate
system of the trackers within the object. Most object locks will wind up over-
constraining the object coordinate system.
We recommend that object locking be used only to work on the object
path, not to try to set up the object coordinate system.

Hint: a simple and reliable Overall Distance constraint will usually be
very effective in improving object paths.

While an overall distance constraint on a camera controls the distance
from the camera to the origin, an overall distance constraint on a moving object
constrains the distance from the moving object's local coordinate system origin to
the camera. This makes it especially important to set up a well-chosen
coordinate system for the moving object: the origin should be near the center of
rotation of the object, so that object rotation does not excessively impact the
overall distance.

Aligning to an Existing Mesh


You may have an existing mesh that is a 3-D model of the entire set. If
you do and you would like to use that existing mesh, here are some additional
ways to do it (in addition to using Place mode on the Perspective viewport or a
tabular list of 3-D locations).
If you don't have trackers, or don't want to use trackers, you can use the
perspective view's Pinning toolbar to literally pin the mesh to desired locations in
the image. It's a rough method and cannot determine overall scaling, so it is most
suited for tripod shots. For more information, see the Pinning Tool section of the
Geometric Hierarchy Tracking manual (from the Help menu).
If you do have trackers that correspond identically to vertices in your
model, you can use the following method. (It may be helpful to create these
trackers using offset tracking). Then, use the Add Links to Selected menu item in
the Linking section of the perspective right-click menu to create links between the
trackers and their corresponding vertices.
Or, use the Align Via Links dialog item on that same menu and select the
Align World to Mesh Position option.
Alternatively, turn off Whole affects meshes on the Viewport right-click
menu, turn on Whole, and move the camera around so that things are in
roughly the right place (for when there isn't any actual data).

Field of View/Focal Length Constraints


You can create constraints on the camera's field of view and focal length
in a similar fashion to path and orientation constraints. Field of view constraints
are enabled (and make sense) only when the camera lens is set to Zoom.

Warning 1: This topic is for experts. Do not use field of view
constraints on a shot unless you have a specific need encountered on
that shot. Do not use them just because focal length values were
recorded during shooting. FOV/FL values calculated by SynthEyes are
more accurate by definition than recorded values.

Warning 2: Do not use focal length values unless you have measured
and entered a very good value for the plate width. Use field of view
values instead.
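The reason plate width matters so much is the standard pinhole-camera relationship between focal length, plate (filmback) width, and horizontal field of view. Here is a small sketch (plain Python; the 24.89 mm Super 35 and 36 mm full-frame widths are common reference values, used only as examples):

    import math

    def horizontal_fov_degrees(focal_length_mm, plate_width_mm):
        # FOV = 2 * atan(plate_width / (2 * focal_length))
        return 2.0 * math.degrees(math.atan(plate_width_mm / (2.0 * focal_length_mm)))

    print(horizontal_fov_degrees(35.0, 24.89))   # ~39.2 degrees (Super 35)
    print(horizontal_fov_degrees(35.0, 36.0))    # ~54.4 degrees (full frame)

The same 35 mm lens produces very different fields of view on different plate widths, so an incorrect plate width silently turns a good focal length into a bad field of view.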

The Known lens mode can also be viewed as a simple form of field of
view constraint: one that allows arbitrary animation of the field of view, but that
requires that the exact field of view be known and keyed in for the entire length of
the shot. We will not discuss this mode further, except to note that the same
effect, and many more, can also be achieved with field of view constraints.
As with path constraints, field of view constraints are created with a seed
field of view track, animated lock enable, and lock weight. See the Lens panel,
Solver panel, and lock control dialog.
Both hard and soft locks operate at full effect all the time, regardless of the
state of the Constrain checkbox on the solver panel.
As with path constraints, field of view constraints affect the solution as a
whole. If you have a spike in the field of view track on a particular frame, adding
a constraint on that single frame will not do what you probably expect. All the
trackers' locations will be affected, and you will have the same spike, but in a
slightly different location. This is not a bug. Instead, you need to also key
surrounding frames. In all cases, identifying and correcting the cause of the spike
will be a better approach if possible.
If the lens zooms intermittently, you can determine an average zoom value
for each stationary portion of the shot, and lock the field of view to that value.
You can repeat this for each stationary portion, producing a smoother field of
view track.
Sometimes you may have a marginal zoom shot where you are given the
starting and ending zoom values (field of view or focal length), but you do not
know the exact details of the zoom in between. SynthEyes might report a zoom
from 60 to 120mm, but you know the actual values were 50 to 100mm. You can
address this by entering a one frame field of view constraint at the beginning and
end of the shot with the correct values. As long as your values are reasonably
correct in reality, the overall zoom curve should alter to match your values.
If only the endpoints change, but the interior remains at other values, then
SynthEyes has significant evidence contradicting your values, which most
likely indicates the values are wrong, the plate width is wrong, or that there is
substantial uncorrected lens distortion.

Spinal Path Editing


Since it can be tedious to repeatedly change coordinate system setups,
SynthEyes can dynamically recompute portions of a solve as you change certain
values.

Warning: this is a really advanced topic. It can be used quickly and
easily, especially Align mode, but it can just as quickly reduce your
solve to rubble. We're not kidding, this thing is complicated!

First, what is spinal editing and why is it called that? Spinal editing is
designed to work on an already-solved track, where you have an existing camera
or object path to manipulate. The path is the spine that we edit. It is spinal
because you can think of the trackers as being attached to it like ribs. If you
manipulate the spine, the ribs move in response. Youll be working on the spine
to improve or reposition it. The perspective windows local-coordinate-system
and path-relative handles can help make specific adjustments to the camera
path.
After you have completed an initial solve producing a camera path, you
can initiate spinal editing by launching the control panel with the Window/Spinal
Editing menu item. This will open a small spinal control panel. You can also
enable spinal editing with the Edit/Spinal aligning and Edit/Spinal solving menu
items, though then you lose the feedback from the control panel.
There are two basic modes, controlled by the button at top left of the
spinal control panel: Align and Solve.
Note that the recalculations done by spinal editing are launched only in
response to a specific relatively small set of operations:
dragging the camera or object in a 3-D viewport or perspective
view,
dragging the seed point (lock coordinates) of a properly-
configured tracker in a 3-D viewport or perspective view,
changing the field of view spinner on the lens control panel or soft-
lock panel.
changing the weight control on the spinal editing dialog.
In order for a tracker's seed point to be dragged and used for spinal
alignment, it must be set to Lock Point mode.
Spinal Align Mode
In Align mode, your path is moved around as the coordinate system is
repeatedly updated, but the shape of the path and the relationship to the trackers
is not affected. The RMS error of the solve is unchanged. This can be a nice way
to help get that specific coordinate system alignment you want; it allows a
mixture of object and tracker constraints.
You can use a combination of locks on the camera and on trackers in the
scene. As you drag the camera or tracker, the alignment will be repeatedly
recalculated. Use the figure of merit value to keep track of whether you have an
overconstrained setup: the value is normally very small, such as 0.000002. If it
rises much above that, you dont have a minimal set of constraints (typically it
reaches 0.0200.050). That is not a problemunless you begin solving with the
Constrain checkbox on.
Note that all camera and object locks are treated as hard locks by the
alignment software.
Spinal Solve Mode
In Solve mode, you are changing the solve itself, generally by adding
constraints on the camera path, then re-solving. The RMS error will always get
worse! But it lets you interactively repair weak spots in your solve.
The spinal solve performs a Refine operation on your existing solution,
meaning that it makes small changes to that solution. If the constraints you add
after the initial solve, either directly or by dragging with the spinal solve mode,
change the solution too much, then you will get a solution that is the best
solution near the old solution rather than the best overall solution, which you
would obtain by starting the solve from scratch (ie Automatic solving mode).
To maintain interactive response rates, the spinal solve panel allows you
to terminate the refine operation early, and while dragging you're just going to
be changing things again anyway. When you stop dragging, SynthEyes will
perform a last refine cycle to allow the refine to complete, although you can also
keep it from taking too long. After you've been moving around for a bit, especially
if your solves are not completing all the way, you can click the Finish button to
launch a final normal Refine cycle (Finish is the same as the Go button on the
solver panel).
Spinal editing might be used in especially subtle ways on long shots.
Match-moving inherently produces local rate of change measurements, and
small random errors (often amplified by small systematic effects such as lens
distortion or off-center optic axis) accumulate to produce small twists in the
geometry by the end of a long traveling shot. If you have GPS or survey data you
can easily fix this using a few locks. But survey data is not always available.
These accumulating errors can be particularly problematic when a long
shot loops back onto itself. Suppose a shot starts with a building site for a house,
showing the ground where it will be. The shot flies past the house, loops around,
then approaches from the side. However, the side view does not include the
ground, but only some other details not visible from the front. The inserted house
is now seen at an incorrect location, perhaps slanted a bit. The path needs to be
bent into shape, and spinal path editing can help you achieve that.
Please keep in mind that the results of these manipulated solves are
generally not the same result you would obtain if you started the solve again from
scratch in Automatic solving mode. You might consider re-starting the solve
periodically to make sure you're not doing a whole lot of work on a marginal
solution.
Using soft locks and spinal editing mode is a black art made available to
those who wish to use it, for whatever results can be obtained with it. It is a tool
that affects the solver in a certain way. There is no guarantee that it will do the
specific thing that you want at this moment. If it does not do what you think it
should be doing, it is not a bug.
Avoid Constraint Overkill
To recap and quickly give a word of warning, keep your coordinate system
constraints as simple as possible, whether they are on trackers or camera path. It
is a common novice error to assign as many constraints as possible to things that
are remotely near the floor, a wall, the ceiling, etc, in the mistaken belief that the
constraints will rescue some bad tracking, or cure a distorted lens.
Consequently, the first thing we do with problematic scene files in
SynthEyes technical support is to remove all the customer's constraints, re-solve,
and look at the tracker graphs to locate bad tracks, which we usually delete.
Presto, very often the scene is now fine.
Stick with the recommended 3-point method until you have a decent
understanding of tracking, and a clear idea of why doing something else is
necessary to achieve the size, positioning, and orientation you need.
If you have a shot with no physical camera translation (a nodal tripod
shot), do not waste time trying to do a 3-D solve and coordinate system
alignment. Many of the shots we see with "I can't get a coordinate system
alignment" are tripod shots erroneously being solved as full 3-D shots. Set the
solver to tripod mode, get a tripod solution, and use the line alignment tool to set
up coordinates.
Advanced Solving Using Phases (Pro, not Intro)
SynthEyes offers an advanced method for controlling the operation of the
Solver using a node-based approach. We call these solver nodes Phases. With
Phases, you can set up a recipe for how the solver should solve the shot, using
over 40 different kinds of Phases.
You should think about using phases whenever you need to follow a
series of steps to obtain a solve on a difficult shot.
You can also take advantage of the unique capabilities of the phases to
set up your coordinate system, in ways that are not possible otherwise.
Once you've set up a collection of phases, you can copy and paste them
from one SynthEyes file to another, or save them to a file on disk. SynthEyes
sets up a folder for you to use as a phase library.
In this User Manual, we'll provide an overview of how to use phases, and
the reference material provides details of the Phase View and the Phase Panel.
For details of all the different kinds of phases, see the separate Phase
Reference. You can find it on the SynthEyes Help menu. The Phase Reference
also contains details of the information flowing through the solver pipeline.
Overview of Phases
Phases provide instructions to the solver. They are set up using the
Phases room, which opens the Phase View and the Phase Panel. Phases are
created by right-clicking in the phase view, and selecting the desired kind of
phase from the bottom section of the menu, where the phases are organized into
categories: Coordinates, Edit, Solver, Stereo, and Tracker. Each phase tells the
solver to run a different algorithm, affecting principally the camera or object
paths, the fields of view, or the tracker locations.
If you select Solver/Solve, you'll get a Solve phase named Phase1. The red
color indicates that it is selected,
the wide border indicates that it is the root phase, the small green triangle on the
left is its input pin, the small green triangle on the right is its output pin, neither of
which is connected to anything.
The phase control panel will look something like this:
Your scene will solve pretty much the same as before. As you'll note from the
checkboxes on the solve panel, you can turn on or off various parts of the normal
solve process.
Now, with the Solve phase still selected, right click and select
Coordinates/Set Horizon.
The Set Horizon phase is centered on the location that we initially right-clicked.
Since we're not super-exact, we just dragged it into place next to the solve phase
to make it look better. You could also use the arrow keys. An automatically-
added wire connects the new phase to the previously selected one. (We can wire
phases directly if needed.) The Set Horizon phase is selected, and it is also now
the root.
This configuration instructs SynthEyes to solve the scene, then give the
results, consisting of the camera path and tracker locations, to the Set Horizon
phase. Set Horizon will adjust the path and tracker locations, then, because it is
the root phase, its results will be stored back into the main SynthEyes scene file.
So when phases are present, SynthEyes looks for a root phase, and asks
it for its solution. The root phase will compute its solve based upon its inputs,
which in turn compute based on their inputs, and so on. Eventually, some phases
aren't connected to anything, so they pull their input from the initial scene, ie
from the main control panels.
The root solve thus triggers work by many different phases. If there is no
root set up, then the solve ignores ALL the phases, operating based only on the
main user interface controls.

Quick Example: Set Horizon Phase
We have a wharf-front shot that shows a sea horizon off in the distance. If
we want the sea ground plane to be accurately placed, for example to have a
ship sail out towards the horizon, we want the horizon correct. Here's a way to do
it.
Note the two selected trackers out on the horizon. We set up the Solve phase
feeding into the Set Horizon phase as above, then, with these two trackers
selected, click Store Trackers on the phase panel. This tells the Set Horizon
phase that these are the two trackers to be on the horizon. (There are a number
of other adjustments available if needed). If we solve the scene, we get a
perspective view like this, where the light blue horizon line passes through the
two trackers (you can see the effect of lens distortion also; the horizon is not
straight):
Meanwhile, the Quad view looks like this:
The camera is up in the air; there is nothing to tell it where to go. Back to the
Phase view, connect a Slide Into Position phase onto the end, to produce:
Now, we select a tracker in the foreground parking lot, and click Store
trackers on the control panel for the Slide Into Position phase. We could have
selected several trackers if we wanted: this phase translates the entire scene so
that the average position of the stored trackers is located at the Wanted XYZ
coordinates. Leaving them at zero makes the selected tracker the origin.
We can obtain the updated solve quickly by double-clicking the red tab at
the upper right of Phase3. It is red to indicate that it has not been solved, unlike
the two solved green tabs of Phase1 and Phase2.
We've now specified the position and orientation of the scene. As with
setting up a coordinate system directly using tracker constraints, we should
always specify the position, orientation, and scale (so that if we solve again,
even after changing trackers, we will get basically the same results).
To set the scale, we add yet another phase, here a Tracker/Tracker
Distance Constraint (we could use Camera Height, Camera Travel, Distance to
Tracker, etc instead, if those made more sense). With a little neatening, dragging
the phases around, we get:
and again we can double-click the red tab to update the solve, producing a final
solve.
This is a much different way to set up a coordinate system! Which
approach you use, tracker constraints or phases, depends on your scene and
what you want from it.
Coordinate System Setup with Phases
As the previous example shows, you can use the coordinate system
phases to set up a coordinate system. The available coordinate phases have
some very powerful capabilities. We will not describe them all here in detail
(please see the Phase Reference on the SynthEyes Help menu), but will provide a
brief overview.
As with a tracker-constraint-based coordinate system setup, each scene
should control the position, orientation, and scale of each camera and moving
object in it.
Note: Do not attempt to partially constrain orientation with tracker
constraints, then try to complete the constraints with phases.
Orientation constraints during solving must either be absent or
complete, ie constrain all 3 rotations. If you need to do this, add an
additional temporary constraint to complete the solve, then use the
phase(s) to adjust the solve as needed.
Here are some key phases you can use to set up a coordinate system.
Note that in a number of cases, some parts of the coordinate system must
already be set up for the phase to act as expected. If you think about what is
happening, the need for those cases should be self-evident.
Camera Height. Sets the camera scale to be a specific height on a specific
frame. The scene must already be properly oriented! Typically used when
the camera is on a dolly, so the height above floor level is known.
Distance to Tracker. Sets the camera scale. Use when you know a tape-
measure distance from the camera to a specific tracker on a specific
frame of the imagery, ie similar to a focus distance.
Set Heading. Spins the scene about a vertical axis so that two given trackers are
pointing in a given heading direction. The ground plane must already be
oriented (2 of 3 rotational axes). For example, two trackers along the
centerline of a road; from satellite imagery we see the road runs northeast.
Set Horizon. Two horizon trackers are made level on a given frame, and also the
heading is set to aim the average of them to the back of the scene.
Set Path 1f. Sets the position and orientation (or a subset thereof) of the camera
or object path on any specific frame to specific coordinates that you enter
(possibly by manually positioning the camera/object). Be sure to turn on
"Move whole scene." You must have a previous scene sizing constraint!
Travel of Cam/Obj. Scale the scene so that the camera or object has moved the
given distance between the two frames, for example, the length of a dolly
track.
Phases for Moving-Camera and -Object Shots
(This section jumps way ahead to discuss joint camera and object shots.
It's discussed there too, and is well worth reading twice, or more.)
When both the camera and object are moving, it is important to control the
scale of both the camera and object. The shot images themselves may not
provide sufficient information to do this, which is the basis of the optical illusion
behind movies such as "Honey, I Shrunk the
Kids." When on-set measurements are used, both the set and the moving object
must be measured. That doesn't always happen.
Some particular situations in these shots can be used to set the relative
sizing correctly. Essentially, these situations are ones that are "tells" that give
away the optical illusion. When these situations arise, special phases can be
used to adjust the relative scaling correctly.
Moving-Object Path. The special situation is that the moving object comes to a
stop for some portion of the shot, while the camera continues moving in a
sufficiently complex manner. Or, the moving object may follow a precisely
straight line for some portion of time. In either case, the phase
experimentally modifies the relative scaling until the desired objective (a
complete stop or straight line) is obtained.
Moving-Object Position. Here, the special situation is that some trackers on the
moving object come close to, or into alignment with, some trackers on the
camera, or at least with the coordinate system. For example: a thrown
object is tracked as it flies across the room, coming to rest on the ground
plane. The scale of the object is adjusted so that it lies on the ground
plane at the end of the shot.
These phases provide a powerful tool for accurately and automatically
setting the relative coordinate scaling when the relevant situations occur.
Introduction to Multi-step Solving
Phases offer a way to direct the solver to create complex multi-step
solving processes that can help produce higher-quality solves, or solves on more
difficult shots.
We'll provide an introduction to this with a zoom shot. In this case, the
shot begins at one field of view, holds a while, zooms out to a wider field of
view, then bumps back in to a slightly tighter field of view. Here is the graph
editor's view of the zoom channel and zoom velocity.
As you can see in the velocity curve over frames 0-80, there is some jitter in the
zoom while the zoom is actually stationary. This is to be expected, as
matchmoving is a measurement process, but suppose we would like to eliminate
it. If we were to filter the zoom, we would unduly affect the frames where the
zoom is active. We could animate the filter frequency and strength for a while,
but there is a simpler approach.
The Flatten FOV phase will compute the average field of view over a
range of frames and replace the original FOV with that average, with several
blend frames at the in and out point. (There is a similar script.)
We create a Solve phase, connect it to a Flatten FOV phase, and
configure the Flatten FOV phase.
It will average frames 0 to 74 with a 3-frame transition at frame 74. Note: there
should be no Blend at frame 0 since that is the start of the shot. We have run it,
and the Flatten FOV phase has automatically computed the average FOV and also
stored it in the Manual FOV parameter for reference. The graph editor now looks
like this:
As you can see, the velocity curve is perfectly flat during the first part of the
shot. There is no jitter there at all. That was pretty good, let's do it again!
We've added two additional Flatten FOV phases, and configured one for the
middle section of the shot, frames 98-111, and the last for frames 123-176.
Hint: it is easiest to position the current time at the desired start or end
frame, then click the respective Set 'in' frame or Set 'out' frame button.
As a result, the FOV channel is very flat, except when it is changing:
This is good, but it's not the end of the story. We have a very clean FOV
track now, but we want a consistent solve: the camera path and tracker positions
that correspond to the smooth zoom trajectory, not the jiggly one. We need to
refine the full solution to reflect the changed field of view channel.
We don't want the FOV channel to change further, since we've cleaned it
up, so we add a Set Lens Mode phase that changes the Lens mode from Zoom
to Known @ Solved. The "@ Solved" part means that we want to use the
incoming solved FOV channel as the "known" value that will be used for the later
solve.
We don't need to completely redo the solve, a Refine will do, so we add a
Set Solver Mode phase and set its solver mode to Refine. As you can see, the
controls on the main user interface have corresponding phases that allow you to
change the settings at any point in a whole pipeline of solver phases.
To actually run the second solve, we add a Solve phase with a final
Autoplace for good measure. The result looks like this:
But wait, there's more! If we solve this scene a second time, it will behave
a little unexpectedly. You can see that when you consider what happens at
Phase1.
Since Phase1 has no actual input, it takes its inputs from the scene,
which is already configured for a Known Lens and Refine mode. The later solve
won't reproduce what happened the first time.
Because of this circularity, we need to pay some attention to what we
have changed, and effectively "put it back" at the beginning of the solve.
The phase "Clean Start" helps with this. It clears any existing solve data
(from the phase pipeline) and resets any Refine solver modes back to Automatic
(or Refine Tripod back to Tripod).
We also need to set the Lens mode back to the Zoom lens mode that we
want used for the initial solve.
To add these two phases, click in the empty portion of the phase view to
unselect all the phases, then create the Clean Start and then Set Lens Mode.
This will ensure that these two are not wired to the final solve. Instead, drag from
the output pin of the Set Lens Mode to the Input pin of the Phase1 solve, creating
a wire. Here's the final collection:
Although we have a fair number of steps, most of them don't do very
much:
Clean up before we get started
Set the lens mode to Zooming
Do an initial solve, producing a somewhat jittery zoom track
Flatten out the initial non-zooming section
Flatten out the middle non-zooming section
Flatten out the final non-zooming section
Set the lens mode to Known (@Solved)
Set the solver mode to Refine
Do the final solve, corresponding to the flattened zoom track.
If we change some of the tracking data, say, we can easily rerun this
process.
Furthermore, we can right-click and select Library/Save Phase File and
write this preconfigured set of phases into our phase library (or somewhere else)
so that in the future we can easily bring it in to solve a similar shot (after adjusting
the frame ranges of the Flatten FOV phases).
Solved vs Unsolved Phases
When you select Go! on the solver or Shift-G in the phase view, all the
phases are cleared (unsolved) and then re-solved starting at the root phase.
(Again, if there is no root, then the phases will not be used at all in the solve. This
can be helpful for quick experimentation.)
When you double-click on the solve tab at the upper-right corner of a
phase, it causes that phase to be run. The phases that it uses will only be run if
they have not previously been run, or have been changed since that time. If
those other phases have a valid solve (a green solved tab), then they will not be
solved again, which saves time.
The phases keep track of whether they have changed and need to be re-
run based on the parameters of the phase. If you make changes in the main
interface that turn out to affect a phase's operation, you need to make sure the
phase is cleared so that it will be re-run. (For example, a solve phase that does
not have a Set Solver Mode in front of it: if you change the solver mode in the
main Solver control panel, the solve phase's earlier solve data is no longer valid,
but that is not apparent to the phase.)
You can force all the phases to be cleared with right-click/Unsolve, or
clear a single phase by right-clicking it and selecting Unsolve this.
It is always safest to do a full solve via Shift-G (Run All) once you have set
up the appropriate Clean Start elements for what you are doing.
You can review the output of a series of phases, after you have closed the
popup solver dialog, using the Solver Output View.
If you have a complex series of phases and are not sure what the solve
pipeline contains at some intermediate phase, you can right-click it and select
Retrieve from this, which reads out the phase's solve data and loads the data into
the scene. Note that the readout data consists primarily of path and position data, not
the details of the various mode controls. See the Phase Reference for details of
what is read out.
If you want to "hide" some trackers from a given Solve phase, so that they
do not influence it, make them Zero-Weighted-Trackers (ZWTs), using the Tracker
Modes phase.
Post-Solve Tracking
This section describes several techniques for adding or manipulating
trackers once a 3-D solve has been obtained. With the solve already in place, the
additional 3-D information presents additional opportunities.
One basic feature is the Search from solved method for predicting where
a tracker will be in the image, once the tracker's 3-D location has been solved.
With this method enabled (by default) on the Track menu, the tracker's search
region can be kept very small, making for fast and easy tracking if a tracker must
be extended to additional (solved!) frames. It can not be used to extend tracks
into frames where the camera or object has not yet been solved, because without
the camera/object solution, the tracker's location in the image can not be
predicted. Note that this mode will not be used for offset trackers.
Zero-Weighted Trackers
Suppose you had a visual feature you were so unsure of, you didn't want it
to affect the camera (or object) path and field of view at all. But you wanted to
track it anyway, and see what you got. You might have a whole bunch of leaves
on a tree, say, and hope to get a rough cloud for it.
You could take your tracker, and try bringing its Weight in the solution
down to zero. But that would fail, because the weight has a lower limit of 0.05. As
the weight drops and the tracker has less and less effect, there are some
undesirable side effects, so SynthEyes prevents it.
Instead, you can click the zero-weighted-tracker (ZWT) button on the
tracker panel, which will (internally) set the weight to zero. The undesirable side
effects will be side-stepped, and new capabilities emerge.
With a weight of zero, ZWTs do not affect the solution (camera or object
path and field of view, and normal tracker locations), and can not be solved until
after an initial camera and/or object solution has been obtained. ZWTs are solved
to produce their 3-D position at the completion of normal solving.
Warning: Do not expect constraints on ZWTs to affect the overall
coordinate system. As we just said, ZWTs do not affect the solution!
The main solver changes some trackers to Far when they have very little
perspective (and consequently can even show up behind the camera). You can
change those trackers to ZWTs instead, to get rough (distant) 3-D information for
them, without affecting the main solve.

Tip: There is a separate preference color for ZWTs. Though it is
want ZWTs to stand out automatically.

Importantly, ZWTs are automatically re-solved whenever you change their
2-D tracking, the camera (or object) path, or the field of view. This is possible
because the ZWT solution will not affect the overall solution.
It makes possible a new post-solving workflow.
Solve As You Track
After solving, if you want to add an additional tracker, create it and
change it to a ZWT (use the W keyboard accelerator if you like). Keep the Quad
view open. Begin tracking. Watch as the 3-D point leaps into existence, wanders
around as you track, and hopefully converges to a stable location. As you track,
you can watch the per-frame and overall error numbers at the bottom of the
tracker panel.
ZWTs benefit from the Track/Search from solved tracker search prediction
method. Once the ZWT is solved, that 3-D location is used to predict the tracker's
2-D search location, so the search region size can be kept quite small (as long as
the overall solve and tracker location are good), reducing the chances of hopping
to an unrelated similar-looking feature.
Bring up the graph editor, and take a quick look at the error curve for any
spikes; since the position is already calculated, the error is valid.
Once you've completed tracking, change the tracker back to normal
mode. Repeat for additional new trackers as needed. You can use the same
approach modifying existing trackers, temporarily shifting them to ZWTs and
back.
When you do your next Refine cycle using the Solver panel, the trackers
will be solved normally, and influence the solution in the usual way. But, you
were able to use the ZWT capability to help do the tracking better and quicker.
Juicy Details
ZWTs don't have to be only on a camera; they can be attached to a
moving object as well. You can also configure Far ZWTs. Offset ZWT trackers
can not take advantage of search from solved mode.
The ZWT calculation respects the coordinate system constraints: you can
constrain Z=0 (with On XY Plane) to force a ZWT onto the floor in Z-up mode. A
ZWT can be partially linked to another tracker on the same camera or object. It
doesn't make sense to link to a tracker on a different object, since such links are
always in all 3 axes, overriding the ZWT calculation. Distance constraints are
ignored by ZWT processing. (Again, ZWTs do not affect the solution and any
constraints on them will not affect the coordinate system.)
If you have a long shot and a lot of ZWTs and must recalculate them often
(say by interactively editing the camera path), it is conceivable that the ZWT
recalculation might bog down the interactive update rate. You can temporarily
disable ZWT recalculation by turning off the Track/ZWT auto-calculation menu
item. They will all be recalculated when you turn it back on.
Stereo ZWTs
A stereo pair of trackers can be made into a stereo ZWT by changing
either tracker to a ZWT; the other member will be changed automatically. A
stereo ZWT pair can produce a position from as little as a single frame in each
camera. After a solve has been produced, for true moving-camera shots, you can
track in one camera, make the pair into a ZWT, and then have an excellent idea
where the tracker will be in the other camera, potentially simplifying tracking.
Adding Many More Trackers
After you have auto-tracked and solved a shot, you may want to add
additional trackers, either to improve accuracy in a particular area of the shot, or
to flesh out additional detail, perhaps before building a mesh from tracker
locations.
SynthEyes provides a way to do this efficiently in a controlled manner,
with the Add Many Trackers dialog. This dialog takes advantage of the already-
computed blips and the existing camera path to identify suitable trackers: it is the
same situation as Zero-Weighted-Trackers (ZWTs), and by default, the newly-
created trackers will be ZWTs; they do not have to be solved any further to
produce a 3-D position, since the 3-D position is already known.
Important: you must not have already hit Clear All Blips on the Feature
panel or Clean Up Trackers dialog, since it is the blips that are analyzed to
produce additional trackers.
The Add Many trackers dialog, below, provides a wide range of controls to
allow the best and most useful trackers to be created. You can run the dialog
repeatedly to address different issues.
You can also use the Coalesce Nearby Trackers dialog to join multiple
disjointed tracks together: the sum is greater than the parts!
When the dialog is launched from the Track menu, it may spend several
seconds busily calculating all the trackers that could be added, and it saves that
list in a temporary store. The number of prospective trackers is listed as the
Available number (1880 in the example). By adjusting the controls on the dialog, you
control which of these prospective trackers are added to the scene when you
push the Add button. At most, the Desired number of trackers will be added.
Basic Tracker Requirements
The prospective trackers must meet several basic requirements, as
described in the requirements section of the panel. These include a minimum
length (measured in frames), a minimum amplitude, and average and peak error limits.
The amplitude is a value between zero and one, describing the change in
brightness between the tracker center and background. Larger values will require
more pronounced trackers.
The error numbers measure the distance between the 2-D tracker
position and the computed 3-D position of the tracker, mapped back into the
image. The average error limits the noisiness and jitter in the trackers, while the
peak error limits the largest glitch error. Notice that these controls do not
change any trackers, but instead determine which of the prospective trackers are
actually added to the scene.
You can ask for only spot trackers, only corner trackers, or allow trackers
of either type to be created.
To a Range of Frames
To add trackers in a specific range of frames in the shot, set up that region
in the Frame-Range Controls: from a starting frame to an ending frame. Then,
set a minimum overlap: how many frames each prospective tracker must be
valid, within this range of frames. For example, if you have only a limited number
of trackers between frames 130 and 155, you would set up those two as the
limits, and set the minimum overlap to 25 at most, perhaps 20.
To a Specific Area
To add trackers in a particular area of the scene, open the camera view,
and go to a frame that makes the region needing trackers clearly visible. Lasso
the region of interest; it does not matter if there are any trackers there already or
not. The lassoed region will be saved. (Fine point: the frame number is also
saved, so it does not matter if you change frames afterwards.)
Open the Add Many trackers dialog, and turn on the "Only within last
Lasso" checkbox. The only trackers selected will be those where the 3-D point
falls within the (2-D) lassoed area, on the frame at which the lasso occurred.
With this option, SynthEyes specifically ensures that the new trackers are
evenly distributed within the lassoed area (in 2-D). This can make it worthwhile to
lasso a large area rather than giving no lasso area at all, if the new trackers
would otherwise be clustered too close together.
Zero-Weighted vs Regular Trackers
Once all the criteria have been evaluated, and a suitable set of trackers
determined, hitting Add will add them into the scene. There are several options to
control this (which should be configured before hitting Add).
The most important decision to make is whether you want a ZWT or a
regular tracker. Intrinsically, the Add many trackers dialog produces ZWTs, since
it has already computed the XYZ coordinates as part of its sanity-checking
process. By using ZWTs, you can add many more trackers without appreciably
affecting the re-solve time if you later need to change the shot. So using ZWTs is
computationally very efficient, and is an easy way to go if you need more trackers
to build a mesh from.
On the other hand, if you need additional trackers to improve the quality of
the track, by adding more trackers in an under-populated region of 3-space or
range of frames, then adding ZWTs will not help, since they do not affect the
overall camera solution. Instead, check the "Regular" checkbox, and ordinary
trackers will be created, still pre-solved with their XYZ coordinates. You can solve
again using Refine mode, and the camera path will be updated taking into
account the new trackers.
If you add hundreds or thousands of regular trackers, the solve time will
increase substantially. Designed for the best camera tracking, SynthEyes is most
efficient for long shots, not for thousands of trackers. To see why this choice was
made, note that even if all the added trackers are of equal quality, the solution
accuracy increases much more slowly than the rate at which trackers are added. You can use
some of the trackers for the solve, and keep the rest as ZWTs.
Other New Tracker Properties
Normally, you will want the trackers to be selected after they are added,
as that makes it easy to change them, see which were added, etc. If you do not
want this, you can turn off the "Selected" checkbox.
Finally, you can specify a display color for the trackers being added by
selecting it with the color swatch, and turning on the "Set color" checkbox. That
will help you identify the newly-added trackers, and you can re-select them all
again later using the "Select same color" item on the Edit menu.
It may take several seconds to add the trackers, depending on the number
and length of trackers. Afterwards, you are free to add additional trackers to
address other issues if you like; the ones already added will not be duplicated.
Coalescing Nearby Trackers
Now that you know how to create many more trackers, you need a way to
combine them together intelligently. Whether you use the Add Many More
Trackers panel or not, after an autotrack (or even heavy supervised tracking) you
will often find that you have several trackers on the same feature, but covering
different ranges of frames. Tracker A may track the Red Rock for frames 0-50,
and Tracker B may also track Red Rock from frames 55-82. In frames 51-54,
perhaps an actor walked by, or maybe the rock got blurred out by camera motion
or image compression.
It is more than a convenience to combine trackers A and B. The combined
tracker gives SynthEyes more information than the two separately, and will result
in a more stable track, less geometric distortion in the scene, and a more
accurate field of view. (Exception: if there is much uncorrected lens distortion,
you are better off with consistently short-lived trackers.)
The Coalesce Nearby Trackers dialog, available on the Tracker menu, will
automatically identify all sets of trackers that should be coalesced, according to
criteria you control.
When you open the dialog, you can adjust the controls (described shortly)
and then click the Examine button.
SynthEyes will evaluate the trackers and select those to be coalesced, so
that you can see them in the viewports. The text field, reading "(click Examine)"
in the screen capture above, will display the number of trackers to be eliminated
and coalesced into other trackers.
At this point, you have several main possibilities:
1. click Coalesce to perform the operation and close the panel;
2. adjust the controls further, and Examine again;
3. close the dialog box with the close box (X) at top right (circle at top left on
Mac), then examine the to-be-coalesced trackers in more detail in the
viewports; or
4. Cancel the dialog, restoring the previous tracker selection set.
If you are unsure of the best control settings to use, option 3 will let you
examine the trackers to be coalesced carefully, zooming into the viewports. You
can then open the Coalesce Nearby Trackers dialog again, and either adjust the
parameters further, or simply click Coalesce if the settings are satisfactory.
What Does Nearby Mean?
The Distance, Sharpness, and Consistency controls all factor into the
decision whether two trackers are close enough to coalesce. It is a fairly complex
decision, taking into account both 2-D and 3-D locations, and is not particularly
amenable to human second-guessing. The controls are pretty straightforward,
though.
As an aside, it might seem that all that is needed is to measure the 3-D
distance between the computed tracker points, and coalesce them if the points
are within a certain distance measured in 3-D (not in pixels). However, this
simplistic approach would perform remarkably poorly, because the depth
uncertainty of a tracker is often much larger than the uncertainty in its horizontal
image-plane position. If the distance was large enough to coalesce the desired
trackers, it would be large enough to incorrectly coalesce other trackers.
Instead, SynthEyes uses a more sophisticated and compute-intensive
approach which is evaluated over all the active frames of the trackers.
The first and most important parameter is the Distance, measured in
horizontal pixels. It is the maximum distance between two trackers that can be
considered for coalescing. If they are further apart than this in all frames, they
will definitely not be coalesced. If they are closer some of the time, they may be
coalesced, and the closer they are, the more likely that becomes.
The second most important parameter, the Consistency, controls how
much of the time the trackers must be sufficiently close, compared to their overall
lifetime. So very roughly, at 0.7 the trackers must be within the given distance on
70% of the frames. If a track is already geometrically accurate, the consistency
can be made higher, but if the solution is marginal, the consistency can be
reduced to permit matches even if the two trackers slide past one another.
The third parameter, Sharpness, controls the extent to which the exact
distance between trackers affects the result, versus the fact that they are within
the required Distance at all. If Sharpness is zero, the exact distance will not
matter at all, while at a sharpness of one (the maximum), if the trackers are at
almost the maximum distance, they might as well be past it.
Sharpness can be used to trade off some computer time versus quality of
result: a small distance and low sharpness will give a faster but less precise
result. Settings with a larger distance and larger sharpness will take longer to run
but produce a more carefully-thought-out result, though the two sets of results
may be very similar most of the time, because the larger sharpness will make the
larger distance nearly equivalent to the smaller distance and low sharpness.
If you are handling a shot with a lot of jitter in the trackers, due to large film
grain or severe compression artifacts, you should decrease the sharpness,
because those small differences in distance are in fact meaningless.
What Trackers should be Coalesced?
Three checkboxes on the coalesce panel control what types of trackers
are eligible to be coalesced.
First, you can request that Only selected trackers be coalesced. This
allows you to lasso-select a region where coalescing is required. (Note: if you
only need 2 particular trackers coalesced, for sure, use Track/Combine Trackers
instead.)
Second, frequently you will only want to coalesce auto-trackers, or
trackers created by the Add Many Trackers dialog. By default, supervised non-
zero-weighted trackers are not eligible to be coalesced. This prevents your
carefully-constructed supervised trackers from inadvertently being changed.
However, you can turn on the "Include supervised non-ZWT trackers" checkbox
to make them eligible.
SynthEyes will also generally coalesce only trackers that are not
simultaneously active: for example, it might coalesce two trackers that are valid
on frames 0-10 and 15-25, respectively, but not two trackers that are valid on
frames 0-10 and 5-15. If both are autotrackers, if they are simultaneously active,
they are not tracking the same thing. The exception to this is if they are a large
autotracker and a small one, or an autotracker and a supervised tracker. To
combine overlapping trackers, turn off the "Only with non-overlapping frame
ranges" checkbox.
A satisfactory approach might be to coalesce once with the checkbox on,
as is the default, then open the dialog again, turn the checkbox off, and Examine
the results to see if something worth coalescing turns up.
An Overall Strategy
Although we have talked as if SynthEyes only combines two trackers, in
fact SynthEyes considers all the trackers simultaneously, and can merge three or
more trackers together into a single result in one pass.
It is possible that coalescing immediately a second time may produce
additional results, but this is probably sufficiently rare to make it unnecessary in
routine use.
However, after you coalesce trackers, it will often be helpful to do a Refine
solving cycle, then coalesce again. After the first coalesce, the refine cycle will
have an improved geometric accuracy due to the longer tracker lifetimes. With
the improved geometry, additional trackers may now be stable enough to be
determined to be tracking the same feature, permitting a coalesce operation to
combine them together, and the cycle to repeat.
Viewing this pattern in reverse, observe that a broader distance
specification will be required initially, when trackers on the same feature may be
calculated at different 3-D positions.
This is particularly relevant to green-screen shots, where the
comparatively small number of trackable features and their frequently short
lifetime, due to occlusion by the actors, can result in higher-than-usual initial
geometric inaccuracy.
Because the green-screen tracking marks are generally widely separated,
there is little harm in increasing the allowable coalesce Distance. The features
can then be coalesced properly, and the Refine cycle will then rectify the
geometry. The process can be repeated as necessary.
If you are using Add Many Trackers and then Coalescing and refining, you
should turn on the "Regular, not ZWT" checkbox on the Add Many dialog, so that
the added trackers will affect the Refine solution.
Perspective Window
The perspective window is an essential tool for checking a solved scene,
building meshes, and extracting textures. The perspective window allows you to
go into the scene to view it from any direction. Or, you can lock the perspective
view to the tracked camera view. You can build a collection of test or stand-in
objects to evaluate the tracking. Later, we'll see that it enables you to assemble
tracker locations into object models as well.
The perspective window is controlled by a large right-click menu, where
many different mouse modes can be selected as well as various one-shot tools.
The middle mouse button can always be used for general navigation. The left
mouse button may be used instead, with the same variations, when the
Navigation mode is selected.
Locking to a Camera
The perspective window can be used to overlay inserted objects over the
live imagery, much like the camera view. Select Lock to Current Camera to lock
or release, or use the L key. Note that when the view is locked to the camera,
you can not move or rotate the camera, or adjust the field of view.
You can have the perspective view continue to be locked to the current
camera (Active Tracker Host) even if that changes, by turning on Stay Locked to
Host. Or you can select a specific camera or object to lock to, and it will stay
locked to it, even if you unlock and then relock the view.
There are two related preferences in the Perspective area of preferences.
The Stay Locked to Host preference is used when new perspective windows are
created; this preference is ON by default. The Always Lock to Active preference
causes the Lock button to always lock to the active tracker host, instead of a
previously-stored tracker host.
Tip: To emulate the locking behavior of SynthEyes before 1608, turn
the Stay Locked to Host preference OFF, and turn the Always Lock to
Active preference ON.
Image Overlay and Projection Screens
The perspective window is designed to work with undistorted imagery:
the perspective view always shows the view with an ideal, undistorted camera,
even if SynthEyes has calculated a lens distortion value. A conventional camera
view will be pulled from 360VR footage as well.
This reflects the essential difference between the camera and perspective
views: the camera view shows the source footage as is, and distorts all 3-D
geometry to match. The perspective view shows the 3-D world as is, and
dynamically un-distorts the footage in order to make the shot line up.
Note: the distortion performed by the camera or perspective views is
that specified by the distortion value on the Lens panel, typically
calculated by a solve; this is different than the distortion values in the
image preprocessor, which affect the imagery before it arrives to either
the camera or perspective views. There's no need to compensate for
image-preprocessor values, since the image already reflects them.
As an additional feature for green/blue-screen shots, the perspective view
will matte out the keyed background, showing only the un-keyed actors and set.
You can set up your scene to place them within your 3-D set by controlling the
camera/screen distance. If you don't want this kind of keying, turn off green
screen processing from the Green Screen dialog on the Summary panel.
The perspective view will also key based on any existing combined or
separate alpha channel in the imagery (for example, from a third-party keyer);
be sure to turn on Keep Alpha when opening the shot or in the image
preprocessor.
While you can set the distance to the screen directly, or proportionately to
the world size (the default is a multiple of the camera's world size), it is most
useful to have the screen dynamically scale so that it always passes through a
single point, either a tracker or extra point. For example, in a green-screen shot
of several people talking, you can use a tracker at their feet as a target point, so
that the screen stays fixed in distance in the 3D environment, even if the camera
moves closer and further away. You can position 3D objects around it and see
the matted shot in their midst.
The perspective view's built-in dynamic projection screen generator works
behind the scenes, so the resulting 3D meshes cannot be selected, examined, or
exported.
Important: the built-in projection-screen generator is very behind-the-
scenes. To adjust its settings, such as the grid resolution and distance
from camera to screen, use the Perspective Projection Screen
Adjust script. It affects the currently-selected camera. Settings for new
cameras are loaded from a corresponding set of preferences in the
Perspective section.
The projection screen mode listed in the preferences and adjust script has
four settings: Never, Automatic, Locked, and Always. It controls when the built-
in projection screen mesh is engaged, to handle distortion, green-screening, and
alpha channels. If it is disabled, the incoming image will be displayed as-is, with
no distortion correction or keying. The Automatic setting turns it on when the
perspective view is locked to this camera and distortion, green-screening, or an
alpha channel is present, and leaves it off the rest of the time, using simpler and
faster image draw code to improve performance. The Locked setting generates
the screen whenever the perspective view is locked to this camera.
Finally, the Always setting generates the screen all the time, even when
the perspective view isn't locked to the camera. This lets you see the screen
when you have unlocked from the camera, or from other perspective view
windows, which can be handy for lineup with matted-out green screens, for
example.
The projection screen (and the physical script-based screen described
below) operate by texture-mapping a distorted grid object. The accuracy of this
process depends on the number of segments (and thus vertices and faces) in the
grid. For more extreme distortion, you should increase the resolution of the grid.
You can also use the Projection Screen Creator script to create actual
mesh geometry within the scene that is texture-mapped with the shot imagery.
As a physical mesh, it can be exported directly to other applications. Because
the mesh is created once, when you run the script, it cannot be used with zoom
shots, and you must manually re-generate it when the lens FOV or distortion
changes.
Freeze Frame
Each perspective view can be independently disconnected from the main
user interface time slider, frozen on a particular frame. This can be useful to
view a shot from two different frames simultaneously (to link trackers from
different parts of the same shot), or to view two shots with different lengths
simultaneously and with some independent control. That is especially helpful for
multi-shot tracking, where the reference shot is only a few frames long.
See View/Freeze on this frame on the right-click menu. Using the normal
A, s, d, F, period, and comma accelerator keys within a frozen perspective
window will change the frozen frame, not the main user interface time. To
update the main user interface time from within the perspective window, use the
left and right arrows (or move outside the perspective window!). To re-set the
frozen time to the current time, hit View/Freeze on this frame again. To unfreeze,
use View/Unfreeze.
The Scrub mouse mode will scrub the frozen frame number, or the normal
frame number, depending on whether or not the frame is frozen.
Stereo Display
SynthEyes can display anaglyph stereo images, on the right-click menu
select View/Stereo Display. If it is a stereo shot, both images will be displayed if
the image is enabled. If it is not a stereo shot, SynthEyes will artificially create
two views for the stereo display. See the settings on View/Perspective View
Settings. They include the inter-ocular distance and vergence distance, plus the
type of glasses you have. (You should look for glasses that strongly reject the
unwanted colors; some paper glasses are best!) The normal anaglyph views still
show the colors in the original images, though if you select the Luma versions,
you will get a gray-scale version of the scene for each eye; some people prefer
this for assessing depth.
Navigation
SynthEyes offers four basic ways to reposition the camera within the 3-D
environment: pan (truck), look, orbit, and dolly in/out. You can select which
motion you want in several different ways: by using the control and ALT keys, by
clicking one of the navigation mode buttons on the Mouse toolbar overlay first, or
by dragging within one of the mouse buttons. You can navigate at any time by
using the middle mouse button, or with the left mouse button when the navigation
mode is selected.
Navigating with the Mouse Toolbar
The simplest form of navigation uses the Pan, Look, Orbit, and Dolly
buttons on the Navigate toolbar, which you can re-open if needed by right-
clicking and selecting Toolbars/Mouse.
You can click on one of the four buttons (Pan, Look, Orbit, and Dolly) to
select that mode. It will turn light blue, as the active mouse mode. Left-clicking
and dragging in the viewport will then create the corresponding camera motion.
Alternatively, you can drag within the button itself to create the
corresponding motion without permanently changing the mode. This is good
for quickly doing something other than the current mode: for example, a quick
Look while you have been panning.
You can also navigate using the middle mouse button. When the mouse
mode is set to something OTHER than Navigate, one of the Pan, Look, Orbit,
and Dolly buttons will still remain lit in a tan color. That color indicates the action
the middle mouse button will create, when depressed in the rest of the window.
Similarly, you can middle-click one of the buttons to change the action of
the middle mouse button, or middle-drag within the button to temporarily create a
motion without changing the mode.
Navigating Maya-Style
You can navigate similarly to Maya by turning on the Maya-style
navigation in the Perspective section of the preferences. There's not really a
drawback!
When enabled, ALT-left-drag will Orbit, ALT-middle-drag will Pan, and
ALT-right-drag will Dolly. These operations are available continuously within the
perspective view, independent of the mouse mode.
On Mac OS X: Maya uses the Mac's Opt key for navigation. Normally,
SynthEyes would use the command key as the equivalent, but to
maximize compatibility, you can use either Opt or Command to
initiate Maya-style navigation.
On Linux: On Linux, you may have to adjust your modifier key so that
ALT is not taken by your window manager, as you do for Maya itself.
Once you start a drag with one button, it continues in that mode. If you
click either of the other two mouse buttons, the move will be canceled, which is
often helpful; keep reading!
Navigating with Control and ALT
The idea behind using Control and ALT (command on the Mac) is that you
can change the motion at any time, as you are doing it, by pressing or releasing
a key, while keeping the mouse moving. That makes it a lot faster than switching
back and forth between various tools and then moving the camera.
When neither control nor ALT are pressed, dragging will pan the display
(truck the camera sideways or up and down). Control-dragging will cause the
camera to look around in different directions, without translating. Control-ALT-
dragging will dolly the camera forwards or backwards. ALT-dragging will cause
the camera to orbit.
Navigating When Locked
Often, the perspective view will be locked to the camera using the Lock
button, so that the view tracks with the solved or seed camera. If you use a 3D
navigation mode when the camera is locked, the lock will be removed (generally
causing the background shot imagery to disappear), and the move will proceed
from the initial camera position. If you cancel the navigation movement by right-
clicking during it, the view will return to the initial position, and the lock and
background shot imagery will be restored.
This behavior makes it easy to take a "quick peek" in the vicinity of the
solved camera, to better view the geometry of nearby features.
If you use the Perspective Projection Screen Adjustment script to change
the projection screen mode to Always, you will be able to still see the shot
imagery, at the screen distance you have selected.
More Navigation Details
The center of the orbit will be selected from the first (lowest-numbered)
applicable item in the following list:
1. the net center of all selected trackers, or in place mode, about their seed
point,
2. the net center of all selected meshes,
3. the center of any selected object or light,
4. the net center of all selected vertices in the current edit mesh (more on that
later), or
5. around a point in space directly ahead of the camera.
HINT: if you are trying to orbit around selected vertices and it is not
working as expected, do a control-D/command-D to clear the current
(tracker and mesh) selection.
You can see which motion will happen by looking at the top-left of the
perspective window at the text display.
The shift key will create a finer motion of any of these types.
The mouse's scroll wheel will dolly in and out if the perspective window is
not locked to the camera, or if it is, it will change the current time. If locked, shift-
scrolling will zoom the time bar.
If you hold down the Z or the apostrophe/double-quote key when left-
clicking, the mouse mode will temporarily change to be navigation mode; the
mode will switch back when the mouse button is released. You can also switch to
navigate mode using the N key. So it is always easy to navigate and quick to
navigate without having to move the mouse to select a different mode.
Zooming and Panning in 2-D
The navigation modes described above correspond to physically moving
or rotating the camera. Sometimes we wish to zoom into the image itself, as we
can do with the camera view, which does not correspond to any camera motion
at all.
You can zoom into the image by selecting the Zoom mouse mode, then
dragging in the viewport (or in the button itself). Or, use right-drag to zoom as in
the camera view. The Pan 2D mode then allows you to pan within the zoomed-in
image.
A 2-D pan and 3-D pan can be very difficult to distinguish, and confusing.
Whenever the image has been zoomed into, SynthEyes displays a wide pinkish
border around the entire image, and the normal Pan, Look, Orbit, and Dolly
buttons are disabled. Clicking any of those buttons will reset the zoom so that the
entire image is displayed. You can also reset the zoom by right-clicking the zoom
button.
Other Mouse Toolbar Buttons
The mouse toolbar also contains a Scrub mouse mode button and a Lock
button.
The Scrub mode allows you to move rapidly through the entire shot,
independent of the main toolbar, even if Freeze Frame is engaged in this
window.
The lock button provides a quick way to turn the lock (background image)
on and off. Note that you may see a slight rotation as you turn it off, as an
unlocked perspective view always keeps the camera rotation angle at zero.
More About Toolbars
The perspective window has its own type of toolbars, which bring items
from its extensive right-click menus into an easier-to-access format: currently
these are Mouse, Mode, Mesh, Linking, Paint, Create, View, Texturing, and Grid.
These toolbars can only overlay the perspective window; they cannot float free,
and they cannot contain scripts. They are opened using the Toolbars submenu
on the perspective right-click menu. Toolbar buttons that correspond to mouse
modes will be displayed with a blue background when the mouse mode is active.
The title slot of each menu may be dragged to reposition the menu bar, or
double-clicked to change it from horizontal to vertical format. The reddish square
closes the toolbar until it is reopened.
When you resize the perspective window, toolbars get mapped to new
locations, though this can work out poorly when many items are displayed or the
new window is small; you may need to rearrange them manually to taste.
Most items are described as part of the perspective right-click menus. The
mouse mode items have additional functionality, as described above. The Paint
toolbar contains a few extra slots that are the parameters of the paint operations,
such as the width and opacity function of the brush.
Using the Save as Defaults item on the Toolbars submenu will store the
current toolbar positions and status for use as defaults for following SynthEyes
runs in the per-user pertool14.xml file.
Knowledgeable users can hand-edit the pertool14.xml file to alter or add
new toolbars if they follow the format carefully.

Creating Mesh Objects


Create objects on the perspective window grid with the Create mesh
object mode. Use the 3-D panel or right-click menu to control what kind of object
is created. Selecting an object type from the right-click menu launches the
creation mode immediately.
With the mesh type selected, click and drag in a 3D viewport or
perspective view to create the mesh. Shift-drag to create a "square" object such
as square, disk, pyramid, etc.
If the SynthEyes user interface is set so that a moving object is active on
the Shot menu, the created object will be attached to that object. You can also
use the Hierarchy View to reparent objects.
Use the # button on the 3-D panel to adjust the number of segments of the
selected standard geometric object (cubes, cylinders, etc). See Edit/Edit Pivots to
move the pivot point.
The Duplicate Mesh script on the Script menu can clone and offset a
mesh, so you can create a row of fence posts quickly, for example. The Duplicate


Mesh on Trackers script can copy a tree mesh onto each tracker on a hillside, for
example.

Importing Mesh Objects


You can import OBJ, DXF, Filmbox FBX, SBM, or XYZ (lidar) meshes into
SynthEyes using File/Import Mesh. You can later discover the original source of
the meshes using File/Scene Information (meshes subsection).

Tip: Use Mesh De-duplication features (not in Intro) to cut disk usage
with large meshes and multiple SNI file versions.

If you later change the source mesh file, you can reload it inside the
SynthEyes scene, without having to reestablish its other settings, by selecting it
and clicking the Reload button on the 3-D Panel.
Similarly, if you need to replace a mesh with another version, for example
a lower- or higher-resolution version, use File/Import/Replace Mesh. This will
change the vertex, face, normal, vertex color, and texture coordinate information,
but not other information.

Note: Use File/Import Mesh of an FBX file for files containing a single
mesh. Files with multiple meshes will result in an error.

Use File/Import/Filmbox Scene Import Files to read files with multiple
meshes. It allows multiple meshes with positioning, cameras, lights, animation
data, etc to be read. A mesh imported by Filmbox Scene Import cannot be
reloaded, however.

Mesh Vertex Numbering


When you are sending meshes back and forth between SynthEyes and
other applications, especially for animated vertex deformations, you may want to
have the exact same vertex numbering in the SynthEyes export as was present
when the mesh was imported, so the vertex caches exported from SynthEyes
can be applied directly to your original mesh.

What the Heck?: This is an advanced topic. If the discussion so far
doesn't make sense to you, or if this isn't a problem you have, you
can just skim this section. (It will help guide you on whether or not to
delete vertex normals when importing.)

SynthEyes stores meshes using a scheme where each vertex position has
at most a single normal and/or a single texture coordinate, which means that
vertices may be duplicated/renumbered when meshes are imported. For
example, along the edges of a cube, vertices may have two different normals,
one for each side. Such vertices are duplicated into two copies, with the same
position, each with a single normal. This results in rapid processing and display
for typical use in SynthEyes.
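
For illustration only, here is a small hypothetical Python sketch of that idea (it
is not SynthEyes code): a position shared by faces with different normals is
split into one output vertex per distinct position/normal pair.

    # Hypothetical sketch of position/normal splitting; not SynthEyes internals.
    def split_by_normal(positions, face_corners):
        # face_corners: list of (position_index, normal) entries, one per face corner.
        out_positions, out_normals, remap, indices = [], [], {}, []
        for pos_idx, normal in face_corners:
            key = (pos_idx, normal)          # same position and same normal: reuse it
            if key not in remap:
                remap[key] = len(out_positions)
                out_positions.append(positions[pos_idx])
                out_normals.append(normal)
            indices.append(remap[key])
        return out_positions, out_normals, indices

    # A cube-edge vertex referenced by two faces with different normals
    # becomes two output vertices at the same position, one per normal.
    verts = [(0.0, 0.0, 0.0)]
    corners = [(0, (0.0, 0.0, 1.0)), (0, (1.0, 0.0, 0.0))]
    print(split_by_normal(verts, corners))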


If your object is facet-shaded (instead of smooth-shaded) upon export
from your application, it can result in inefficient processing in SynthEyes, as
SynthEyes honors the not-so-significant face normals. It can result in much larger
meshes (3-4x more vertices) and prevent smooth vertex normals from being
generated. If at all possible, supply per-vertex normals, or none at all (SynthEyes
builds face normals automatically if needed).
To address this, when an OBJ is imported, SynthEyes may ask you if you
want to strip the normals from the object. If there will be a large change in the
number of vertices resulting from the import, this typically indicates that per-face
normals were supplied, so deleting them is a good idea. You can then generate
smooth per-vertex normals, or allow SynthEyes to efficiently generate its own
per-face normals. (The Filmbox importer simply ignores per-facet normals
automatically.)
If your workflow calls for passing meshes into SynthEyes then back out
and into your application, and you then expect to have the same exact vertex
numbering, you will have to be selective in your work, as follows.
Some SynthEyes importers and exporters use additional data from the
time of import to be able to recreate the original vertex numbering upon export.

Important: At present, the .obj reader and "Mesh as A/W .obj" and
Filmbox importers and exporters are the ones that use original-vertex
information. In other cases, such as Alembic and Blender, it is not
necessary.

The original-numbering information is preserved automatically by the .obj
and FBX importers. (Meshes imported into pre-1600 SynthEyes versions will not
have this information; the mesh must be reloaded.) You can verify that a mesh
has original-numbering information by using the Mesh Information script.
When you rely on any same-numbering workflow, you must be careful not
to add or remove vertices in SynthEyes, as this will create an inconsistency with
the mesh in your other application!
When you export meshes, if the exporter supports the original-numbering
information you will have an option to use it or not, via the "Use original
numbering" checkbox.
If a mesh was built inside SynthEyes so that it does not have original-
numbering information, you also have an option ("No repeating vertices") to
eliminate repeating vertices/normals/texture coordinates, for the benefit of
downstream applications that may create seams if redundant vertices are
present.
If a mesh doesn't have it already, we highly recommend creating Smooth
Vertex Normals (per-vertex normals) for the benefit of downstream applications.
While exporters will create per-face normals for the benefit of some downstream
applications that crash without normals, per-face normals cause inefficient
processing.
As you see, SynthEyes supports a variety of options to maximize
compatibility with upstream and downstream applications. There's a little
complexity to that, but if you keep an eye on the Mesh Information output and
think about what is happening, you should be able to understand the results
and get SynthEyes to do what you want.

Lidar Meshes
SynthEyes imports Lidar data from set surveys as described above, from
XYZ files. The XYZ files can contain not only XYZ data for millions of vertices,
but RGB color information for each vertex as well. The vertices will be displayed
in the viewports as a point cloud.

Tip: Use File/Import/Mesh to read XYZ lidar files. The Intro version can
store meshes containing at most 2 million points.

Typical GIS-type lidar files use Z-up coordinate data, so SynthEyes assumes
that by default. To process files with other axis settings, select the appropriate
setting on the lidar import settings panel (see the preference in the Meshes
section). Note that the setting for the lidar file is independent of the setting of your
scene! Data will be converted if necessary.
Lidar files will automatically be re-centered if they are too lopsided, ie the
average offset is much larger than the bounding box of the data. Lopsided files
cause numeric inaccuracy, and can be hard to find in the viewports and hard to
work with in general. If you are reading lidar in sections, or otherwise need to
disable this, turn off the Re-center lopsided Lidar checkbox (there is a preference
for this in the Meshes section).

Tip: If you read a lidar file with re-centering turned off and don't see it,
click control-shift-HOME and look in the viewports for a tiny dot that is
your scene. You can then zoom in on it. In the perspective view, click
control-F.

Lidar files will be decimated automatically to maintain a maximum vertex
count set from the Decimate to Mpnts spinner (see the Lidar vertex limit target
preference in the Meshes section). You can choose a random decimation style or
a patterned every-N style (in the preferences only).
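
As a rough illustration of the difference between the two styles, here is a
hypothetical Python sketch (not the actual SynthEyes decimation code):

    import random

    def decimate_every_n(points, target):
        # Patterned every-N style: keep every N-th point.
        step = max(1, len(points) // target)
        return points[::step]

    def decimate_random(points, target):
        # Random style: keep a random subset of roughly the target size.
        if len(points) <= target:
            return list(points)
        return random.sample(points, target)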

Note: The Intro version limits lidar data to at most 2 million points.

Use decimation to control the number of points input and displayed. Large
numbers of points will severely impact performance. The default 10 million points
will generally maintain reasonable performance, while still giving a very dense
point cloud.


Although Lidar data does not contain facets, you can use the Edit Mesh
tools in the perspective view to create facets from the vertices, if your task
requires it. Use well-positioned clipping planes to control the Lasso Vertices tool
while triangulating lidar data (any SynthEyes-generated plane will do; Lasso
Vertices won't look behind it/them).

Opacity
It can be helpful to make one or more meshes partially transparent; this
can be achieved using the opacity spinner on the 3-D Panel, which ranges from
an opacity of zero (fully transparent and invisible) to an opacity of one (fully
opaque; the default).
The opacity setting affects the mesh in the perspective view and in the
camera view only if the OpenGL camera view is used. See View/OpenGL
Camera View and the preference Start with OpenGL camera view. The OpenGL
camera view is the default on Mac OS X and Linux.
Note that while in an ideal world the opacity setting would simulate turning
a solid mesh into an attenuating solid, in reality opacity is simulated using some
fast but surface-based alpha compositing features in OpenGL. Depending on the
situation, including other objects in the scene, the transparent view may differ
substantially from what a true attenuating solid would produce, but generally the
effect generated should be quite satisfactory for helping understand the scene.
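
For reference, the conventional surface-based "over" blend that such OpenGL
compositing computes per pixel looks like the following illustrative sketch
(SynthEyes's exact shading path is not shown here):

    # Illustrative sketch of a standard alpha 'over' blend; not SynthEyes code.
    def over(src_rgb, dst_rgb, alpha):
        return tuple(alpha * s + (1.0 - alpha) * d
                     for s, d in zip(src_rgb, dst_rgb))

    # A red mesh at opacity 0.3 over a mid-gray background:
    print(over((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), 0.3))   # (0.65, 0.35, 0.35)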

Moving and Rotating Objects


When an object is selected, handles appear. You can either drag the
handle to translate the object along the corresponding axis, or control-drag to
rotate around that axis.
The handles appear along the main coordinate system axes by default, so
for example, you can always drag an object vertically no matter what its
orientation.

Tip: You can lock a mesh's position, rotation, scale, and parenting
against inadvertent changes by clicking its lock icon in the Hierarchy
View, or entering lock selected object or similar in Synthia.

However, if you select View/Local-coordinate handles on the right-click
menu, the handles will align with the object's coordinate system, so that you can
translate along a cylinder's axis, despite its orientation.
Additionally, for cameras or moving objects, you can select Path-relative
handles, so you can adjust along or perpendicular to the path.
To rotate or scale about arbitrary points, use the 3-D views and click the
Lock Selection button on the 3-D panel. You can then start dragging anywhere
in the viewports, and the rotate or scale will be about that point.


Alternatively, you can move a mesh's pivot point by activating Edit/Edit
Pivots; you can then move it in the 3-D or perspective views, with snapping to its
bounding box sides and center and the mesh vertices (see the Edit menu
description). Be aware, however, that this does not adversely affect your prior work.

Placing Seed Points and Objects


In the Place mode, you can slide the selected object around on the
surface of any existing mesh objects. For example, place a pyramid onto the top
of a cube to build a small house. Or, place a mesh onto a tracker or extra point or
vertex of a lidar mesh.
You can also use the place mode to put a tracker's seed/lock point onto
the surface of an imported reference head model, for example, to help set up
tracking for marginal shots.
For this latter workflow, set up trackers on the image, then import the reference
model. Go to the Camera and Perspective viewport configuration. Set the
perspective view to Place mode. Select each tracker in the camera view, then
place its seed point on the reference mesh in the perspective view. You can
reposition the reference mesh however you like in the perspective view to make
this easy; it does not have to be locked to the source imagery to do this. This
work should go quite quickly.

Tip: if you are placing one mesh on another, or placing already-solved
trackers, you can select the next object to move by holding down shift
while clicking. Then click/drag again to place the object. Or,
control/command-D to unselect everything, then click to select.

If you need to place trackers (or meshes) at the vertices of the mesh, not
on the surface, hold the control key down as you use the place mode, and the
position will snap onto the vertices. The vertices of lidar meshes are also
snappable.

Grid Operations
The perspective window's grid is used for object creation and mesh
editing. It can be aligned with any of the walls of the set: floor, back, ceiling, etc.
A move-grid mode translates the grid, while maintaining the same orientation, to
give you a grid 1 meter above the floor, say.
A shared custom grid position can be matched to the location of several
vertices or trackers using the right-click|Grid|To Facets/Verts/Trackers menu
item. If 3 or more trackers (or vertices) are selected, the grid is moved into the
plane defined by the best-fitting approximation to them (ignoring outliers). If two
are selected, the grid is rotated to align the side-to-side axis along the two. If one
is selected, the grid slides to put that tracker at the origin.
To take advantage of this, you should first select 3 or more vertices or
trackers, and align once to set the overall plane. Then select only one of them,


and align again to set it to be the origin. Select two of them, and align again to
spin the plane so the X axis is parallel to the two. (You can do it in the order
3+,1,2 or 3+, 2, 1 ... the 3+ should always be first!)
You can easily create an object on the plane defined by any 3 (or more)
trackers by selecting them, aligning the grid to the trackers, then creating the
object, which will be on the grid.
You can toggle the display of the grid using the Grid/Show Grid menu
item, or the G key.

Shadows
The perspective window generates shadows to help show tracking quality
and preview how rendered shots will ultimately appear.
The 3-D panel includes control boxes for Cast Shadows and Catch
Shadows. Most objects (except for the Plane) will cast shadows by default when
they are created.
If there are no shadow-catching objects, shadows will be cast onto the
ground plane. This may be more or less useful, depending on your ground plane;
if the ground is very irregular or non-existent, this will be confusing.
If there are shadow-catching objects defined, shadows will be cast from
shadow-casting objects onto the shadow-catching objects. This can preview
complex effects such as a shadow cast onto a rough terrain.

NOTE: shadow catching relies on some features of OpenGL that may
not be present in older video cards (or "integrated graphics"). In this
case, only ground-plane shadows will be produced.

Shadows may be disabled from the main View menu, and the shadow
black level may be set from the Preferences color settings. The shadow enable
status is sticky from one run to the next, so that if you do not usually use it, you
will not have to turn it off each time you start SynthEyes.
You can control the resolution of the shadow map from the preferences;
you may want to increase it if you are working with image resolutions over HD or
notice jaggies along the edges of the shadows (ie if a mesh's shadows extend
over the entire size of the image).

Tip: You can create a texture map for the shadows on a mesh, so that
you can export and render the shadow independently in other apps.
See the Shadow Maker button on the Texture Control Panel.

Note that as with most OpenGL fast-shadow algorithms, there can be
shadow artifacts in some cases. Final shadowing should be generated in your 3-
D rendering application.
Note that the camera viewport does not display shadows by design.


Edit Mesh
The perspective window allows meshes to be constructed and edited,
which is discussed in Building Meshes from Tracker Positions. One mesh can be
selected as an edit mesh at any time: select a mesh, then right-click Set Edit
Mesh or hit the M key. A mesh's status as the edit mesh is independent of its
status as a selected mesh: it can be the edit mesh while not being selected. To
not have any edit mesh, right-click Clear Edit Mesh or Set Edit Mesh with nothing
selected.
When a mesh is an edit mesh, every vertex will be displayed in the
perspective view, as long as it is not used only on back faces (changing the view
will make them visible). Selected vertices are always shown, even if they are on
backfaces.

Tip: the selected vertices of the edit mesh are shown in the 3D
viewports, to help you understand where you are working. To take
advantage of this, make sure the edit mesh is not also selected: if
it is selected, it will be red, and the selected vertices will be too. Select
something else, or click on empty space to select nothing at all.

The Lasso Vertices tool respects the "solidity" of the mesh; it will not
select vertices that are "through" the object from the current viewpoint.
Lasso Vertices also respects any SynthEyes planes ("clipping planes")
that you may have positioned in the scene. By placing one or more clipping
planes in the middle of an unfaceted object (tracker positions or lidar data), you
can select only the vertices on the front or back, say, and then triangulate them in
a controlled way.
To work from a standard mesh to a customized version, the Lightsaber
deletion tool may be helpful; it rapidly hacks off unwanted portions of meshes.
See the descriptions of the modes in the perspective view reference.

Preview Movie
After you solve and add a few test objects, you can render a test movie or
sequence with anti-aliasing and motion blur. The movie can then play (in
Quicktime Player or Windows Media Player etc) at the full rate regardless of
length. Movies can be produced with these file extensions, depending on the
platform: ASF(Win), AVI(Win), BMP, Cineon, DPX, JPEG, MP4(Win), MOV
(Quicktime), OpenEXR, PNG, SGI, Targa, TIFF, or WMV(Win). Only image
sequences are available on Linux.
You can create a movie either from the vantage point of the active camera or
moving object, if the perspective view's Lock button is engaged (ie so that the
background image is shown, and the view animates with the camera). Or, if the
camera is not locked, the movie will be made from the unchanging current
viewpoint of the perspective view.


To make the movie, either click RENDER on the main mouse toolbar in
the perspective view, or right-click in the perspective window to bring up the
menu and select the Preview Movie item. Either way, you'll bring up a dialog
allowing the output file name, compression settings, and various display control
settings to be set. The sequence/movie will be produced at the effective
resolution of the images, as affected by the image preprocessor.
To change the movie's image resolution, use the Output Resampling
options on Output tab of the Image Preprocessor, or the DownRez option on the
Rez tab. You should also set the desired Interpolation method on the Rez tab,
and some Blur (on Filtering) if the resolution is being reduced. This will reduce
artifacts due to the image size change. (Resolution changes are done through
the image preprocessor, rather than directly in the render, on the Preview Movie
panel, to maximize image quality.)
Usually you should pick a resolution that results in square pixels, so
that the preview will not be stretched or squished horizontally. There is a
checkbox for that on the Preview Movie panel; when selected it will convert
1440x1080 to 1920x1080 or 720x480 source to 640x480, for example. However,
as described previously, it will produce higher-quality output if you adjust the
resolution using the Output Resampling options of the Image Preprocessor
instead.
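
The underlying arithmetic is simple. As an illustration (example numbers only,
not exporter code), the square-pixel width is the image height times the display
aspect ratio:

    def square_pixel_size(height, display_aspect):
        return (round(height * display_aspect), height)

    print(square_pixel_size(1080, 16.0 / 9.0))   # (1920, 1080), e.g. for 1440x1080 source
    print(square_pixel_size(480, 4.0 / 3.0))     # (640, 480), e.g. for 720x480 source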

Important: Video codecs often have specific requirements for the size
of the image being written, such as a width that is a multiple of 16 or
can't be larger than a certain size. These restrictions cannot be
determined by SynthEyes; if a preview movie fails to be written, you
might double-check the image size and consider a more standard
alternative.
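
If you run into this, one simple check is whether the width is a multiple of 16,
and if not, to pick the nearest size that is. This is an illustrative sketch only;
the exact requirements depend on the codec:

    # Snap a width down to a multiple of 16, which some codecs require.
    def codec_friendly_width(width, multiple=16):
        return (width // multiple) * multiple

    print(codec_friendly_width(1998))   # 1984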

SynthEyes can anti-alias and motion blur meshes you have inserted into
the scene. Both are controlled by the Anti-aliasing and motion
blur dropdown; motion blur settings start with "MoBlur," while the other settings do not apply
motion blur. Shutter angle and phase for perspective view previews are set on
the Perspective portion of the preferences panel. The motion blur settings include
anti-aliasing, for frames where the camera/object/mesh is stationary.

Tip: You can't generate preview movies of full-range 360 VR footage.
Use Save Sequence from the image preprocessor instead; it can
render meshes.

If you are making a Quicktime movie, be sure to bring up the compression
settings and select something; Quicktime has no default and may crash if you do
not select something.
Also, different codecs will have their own parameters and requirements.
Similarly, image files used in a sequence may have their own settings dialog.


Older Tip: the Windows 32-bit H.264 codec requires that the Key
every N frames checkbox be off, and the limit data-rate to 90 kb/sec
checkbox be off: otherwise there will be only one frame.

Note that image sequences written from the Preview Movie are always 8
bit/channel with no alpha. If you are trying to save a sequence as part of a lens
distortion compensation workflow, you should be using Save Sequence on the
Output tab of the Image Preprocessor instead.

Technical Controls
The Scene Settings dialog contains many numeric settings for the
perspective view, such as near and far camera planes, tracker and camera icon
sizes, ambient illumination, etc. You can access the dialog either from the main
Edit menu, or from the perspective window's right-click menu.
By default, these items are sized proportionate to the current world size
on the solver control panel. Before you go nuts changing the perspective window
settings, consider whether it really means that you need to adjust your world size
instead!

Exporting to Your Animation Package
Once you are happy with the object paths and tracker positions, use the
Export menu items to save your scene.
The list below covers just some of the available options; check the
File/Export listing or the web site. There are multiple variants and various
generic exports as well.
The Filmbox (FBX) export covers the most SynthEyes features; if your
application can read Filmbox, you should likely always consider whether to use it
or an application-specific export. The FBX importer of your application may or
may not support features you need, likewise the SynthEyes exporters to various
applications may or may not be able to support all possible features.
When you start an export and get a pop-up control panel, be sure to look
for tooltips that describe the exporter's controls.
3ds max 4 or later (Maxscript). (Older export, Filmbox FBX preferred)
Should be usable for 3D Studio MAX 3 as well. Separate versions for
3dsmax 5 and earlier, and 3dsmax 6 and later.
After Effects (via Javascript or a special maya file or as 2D data)
Alembic 1.5+
Bentley Microstation
Blender
BVH motion capture (export and 2 imports, one for reference pose, one for
data)
Carrara
Cinema 4D (via python or Lightwave scene)
COLLADA
Combustion
ElectricImage (less integrated due to EI import limitations)
FLAIR motion control cameras (Mark Roberts Motion Control)
Flame (3-D)
Filmbox FBX (includes PC2 and MCX point cache outputs)
Fusion
Hash Animation:Master. Hash 2001 or later.
Houdini
Inferno 3-D Scene via Autodesk Action/DVE
Lightwave LWS. Use for Lightwave, optionally for Cinema 4D
Maya scene file (Older export, Filmbox FBX preferred)
MDD Motion Designer (Lightwave, Modo, many...) Point cache format with
exports for animated tracker positions, or animated meshes.
Mesh as A/W .obj file
Mistika
Modo
Motion 2-D


Nuke
Particle Illusion
PC2. Point cache format
Photoscan
Poser
Realsoft 3D
Shake (several 2-D/2.5-D plus Maya for 3-D scenes)
SoftImage XSI, via a dotXSI file
Toxik (earlier versions, not updated for 2009)
trueSpace
Vue 5 Infinite
Vue 6 Infinite and Later
VIZ (via Filmbox FBX or 3ds Max scene)
SynthEyes offers a scripting language, SIZZLE, that makes it easy to
modify the exported files, or even add your own export type. See the separate
SIZZLE User Manual for more information. New export types are being added all
the time, check the export list in SynthEyes and the support site for the latest
packages or beta versions of forthcoming exporters.

General Procedures
You should already have saved the scene as a SynthEyes file before
exporting. Select the appropriate export from the list in the File/Exports area.
SynthEyes keeps a list of the last 3 exporters used on the top level of the File
menu as well.

Hint: SynthEyes has many exports. To simplify the list, click
Script/System Script Folder, create a new folder Unused in it, and
move all the scripts for applications you do not use into that folder. You
will have to repeat this process when you later install new builds,
however.

There is also an export-again option, which repeats the last export
performed by this particular scene file, with the most-recently-used export
options, without bringing up the export-options dialog again, to save time for
repeated exports.
When you export, SynthEyes uses the file name, with the appropriate file
extension, as the initial file name. By default, the exported file will be placed in a
default export folder (as set using the preferences dialog).
In most cases, you can either open the exported file directly, or if it is a
script, run the script from your animation package. For your convenience,
SynthEyes puts the exported file name onto the clipboard, where you can paste it
(via control-V or command-V) into the open-file dialog of your application, if you
want. (You can disable this from the preferences panel if you want.)


Note that the detailed capabilities of each exporter can vary somewhat.
Some scripts offer popup export-control dialogs when they start, or small internal
settings at the beginning of each Sizzle script. For example, 3ds max does not
offer a way to set the units from a script before version 6 and the render settings
are different, so there are slightly different versions for 3dsmax 5 and 6+. Settings in
the Maya script control the re-mapping of the file name to make it more suitable
for Maya on Linux machines. If you edit the scripts, using a text editor such as
Windows Notepad, you may want to write down any changes as they must be
re-applied to subsequent upgraded versions.
Be aware that not all packages support all frame rates. Sometimes a
package may interpret a rate such as 23.98 as 24 fps, causing mismatches in
timing later in the shot. Or one package may produce 29.96 vs 29.97 in another.
Handle image sequences and use frame counts rather than AVIs, QTs, frame
times, or drop-frame time codes wherever possible.
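
To see why such small rate differences matter, consider this bit of arithmetic
(illustrative numbers only): 23.976 fps footage interpreted as 24 fps drifts by
about one full frame over a thousand frames.

    frames = 1000
    drift_seconds = frames / 23.976 - frames / 24.0
    print(drift_seconds)          # ~0.042 seconds
    print(drift_seconds * 24.0)   # ~1.0 frame of offset by frame 1000
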
The Coordinate System control panel offers an Exportable checkbox that
can be set for each tracker. By default, all trackers will be exported, but in some
cases, especially for compositors, it may be more convenient to export only a few
of the trackers. In this case, select the trackers you wish to export, hit control-I to
invert the selection, then turn off the checkbox. Note that particular export scripts
can choose to ignore this checkbox.

Multiple Exports
You can configure SynthEyes to produce several exports simultaneously
from a single operation using File/Export Multiple and File/Configure Multi-export.
The multi-export configuration dialog lets you create a list of exports to be run.
The multi-export system uses the name of your file to create the names of
all your exports. The exports are placed in the first of the following that applies:
the same folder as the last single export you performed; the folder specified by
the default export folder preference; or the folder of the scene file. If you are producing multiple
different exports that have the same file extension (commonly .py for Python),
they will overwrite each other, so you will lose all but the last. Instead, double-
click the exporters in the multi-export configuration dialog, and specify a different
suffix to be added to the scene name, to keep them apart. For example, enter
_C4D for Cinema 4D, _Bl for Blender, etc, so you get Shot7_C4D.py and
Shot7_Bl.py.
When the multi-export is run, the parameter dialog for the individual
exports will not be shown, as with Export Again. Accordingly, you should pre-set
the options you want for each export format, by running each export manually the
first time and configuring their options.
The list of exports is stored with each particular shot. You can store a list
as your preferences for when future scenes are created. You can also reload the
list from the preferences at any time.


The multiple-export list will be used automatically, if it is present, for Batch
exports.

Setting the Units of an Export


SynthEyes uses generic units: a value of 10 might mean 10 feet, 10
meters, 10 miles, 10 parsecs, whatever you want. It does not matter to
SynthEyes. This works because match-moving never depends on the overall
scale of the scene.
SynthEyes generally tries to export the same way as well, sending its
numbers directly as-is to the selected animation or compositing package.
However, some software packages use an absolute measurement system
where, for instance, Lightwave requires that coordinates in a scene file always be
in meters. If you want a different unit inside Lightwave, it will automatically
convert the values.
For such software, SynthEyes needs to know what units you consider
yourself to be using within SynthEyes. It doesn't care, but it needs to tell the
downstream package the right thing, or pre-scale the values to match your
intention.
To set the SynthEyes units selection, use the Units setting on the
SynthEyes preferences panel. Changing this setting will not change any numbers
within SynthEyes; it will only affect certain exports.
The exports affected by the units setting are currently these:
After Effects (3-D)
Hash Animation Master
Lightwave
3ds max
Maya
Poser
Before exporting to one of these packages, you should verify your units
setting. Alternatively, if you observe that your imported scene has different values
than in SynthEyes, you should check the units setting in SynthEyes.

Hint: if you will be exporting to a compositing package, be aware that these packages often
measure everything, including 3-D coordinates, in terms of pixels, not
inches, meters, etc. Be sure to pick sizes for the scene that will work
well in pixels. While you might scale a scene for an actor 2m tall, if you
export to a compositor and the actor is two pixels tall, that will rarely
make sense.


Image Sequences
Different software packages have different conventions and requirements
regarding the numbering of image sequences: whether they start at 0 or 1,
whether there are leading zeroes in the image number, and whether they handle
sequences that start at other numbers flexibly.
For example, if you have a shot that originally had frames img1.tif-
img456.tif, but you are using only images img100.tif-img150.tif of it, SynthEyes
will normally consider it as a 51-frame shot, starting with frame 0 (img100.tif) or,
with the "First frame is 1" preference on, with frame 1 at img100.tif.
Other software sometimes requires that the frame numbers match the file
numbers, so img100.tif must always be frame 100, no matter what frame# shots
normally start at.
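
As a hypothetical sketch of the bookkeeping involved (the numbers follow the
img100.tif example above; this is not SynthEyes or exporter code):

    first_file_number = 100   # img100.tif
    first_frame = 0           # or 1, with the "First frame is 1" preference on

    def file_number(frame):
        return first_file_number + (frame - first_frame)

    print(file_number(0))    # 100 -> img100.tif
    print(file_number(50))   # 150 -> img150.tif
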
See also the section on Advanced Frame Numbering for opening shots.
By being aware of these differences, you will be able to recognize when
your particular situation requires an adjustment to the settings, typically when
there is a shift between the camera path animation and the imagery.

Rotation Angles, Gimbal Lock and Motion Blur


Typically SynthEyes sends camera and moving-object orientation
information to downstream applications using three rotation angles, as
applications commonly expose those three angles individually to the animator for
manipulation. These may be RX, RY, RZ values, Pan, Tilt, Roll values, etc.
This form of angle specification is prone to well-known pitfalls, however,
when the downstream application interpolates the individual angles, rather than
the overall orientation:
1) angles repeat every 360 degrees, ie 10 and 370 degrees refer to the same
orientation;
2) there are generally two different solutions which correspond to the same
physical position (where one angle is the negative, and the other two are
180 degrees different);
3) at some orientations, there may only be two fundamental degrees of freedom
(typically referred to as "gimbal lock").
Internally, SynthEyes uses a representation that does not suffer from this
problem, but specific angles must be produced to interface with most other
applications, and problems can arise as a result. SynthEyes produces angles
that stay in the range -180 to +180 degrees. When combined together, they
produce the correct overall orientation.
Normally, this topic is not an issue, good things happen automatically, and
this entire section of the manual might be ignored. In particular shots, or
combinations of settings, problems may arise. The trick is to know enough to
recognize when this occurs, so that you can address it (which is not hard).


Consider a camera that spins continuously by more than 360 degrees
(problem #1). At some point, it might transition from +179.5 degrees to -179.5, for
example, a net motion of 1 degree. When you look at each frame, the camera is
fine.
The problem arises when you turn on motion blur. The rendering
application interpolates the camera position, and if it does so naively, it will
conclude that the camera spins not from 179.6, 179.7, 179.8, 179.9, (+/-)180, -
179.9 .. -179.5, but from 179.5 to 140, 120, 100, ... 0 ... -100, -120... -179.5. This
will produce a disastrous image for a frame or two of the sequence.
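
The following sketch (illustrative only, not any particular renderer's code) shows
the difference between naively interpolating the raw angle values and
interpolating along the shortest path across the +/-180 boundary:

    def naive_lerp(a, b, t):
        return a + (b - a) * t

    def shortest_lerp(a, b, t):
        delta = ((b - a + 180.0) % 360.0) - 180.0   # wrap the difference into [-180, 180)
        return a + delta * t

    a, b = 179.5, -179.5
    print(naive_lerp(a, b, 0.5))      # 0.0: the camera appears to whip half-way around
    print(shortest_lerp(a, b, 0.5))   # 180.0: midpoint of the intended 1-degree motion
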
Similar issues arise with problem #2 when a camera looks up past vertical.
Instead of tilting up 100 degrees, it tilts up only 80 degrees, and the pan and roll
are 180 degrees different than they would be for a 100-degree tilt. If the camera
moves from horizontal to this extreme tilt back, there will be a large discontinuity, and a
similarly blurred-out image.
Problem #3 arises when the camera looks exactly straight up or down (or
in some other specific direction depending on the exact form of angles required
by the application). In this case, the pan and roll angles line up exactly, and you
can pick a value for either arbitrarily as long as you correct the other (ie pan=30,
roll=50 and pan=95, roll=-15 both refer to the same orientation). Motion blur
problems occur with the transition to the immediately preceding or following
frame, which have different specific required pan and tilt angles.
Some SynthEyes exporters include special code to correct or minimize
these situations (#3 is correctible only in a few specific cases). Current exporters
that include this code: After Effects, Blender, Filmbox, Lightwave (enable/disable
checkbox), Maya. It is not needed for 3dsmax.
When motion blur issues arise for exporters without this code (submit a
feature request as needed), or when no correction is feasible, you have a few
options:
try a different exporter, such as filmbox,
switch to a different rotation angle ordering (either directly in the
exporter settings or on the preferences panel),
change the overall scene coordinate system setup to a
configuration that avoids that specific problem,
in SynthEyes, slightly tweak the orientation of the camera/object on
a gimbal-lock frame to move it away from the degenerate
orientation,
hand-edit the rotation angle curves in the downstream application.
Switching to a different coordinate system setup is not unusual for moving-
object setups: a moving object may often be set up to be exactly aligned to
coordinate axes, and thus subject to a precise problem #2 gimbal lock. Switching
to a differently oriented object null addresses this easily.


IMPORTANT WARNING: There is a problem #3 degenerate condition
in Z-Up coordinates when the camera is looking exactly along +Y with
no roll and ZXY rotation angle ordering is in use. This orientation is the
default orientation at the start of a camera track, if no coordinate
system has been set up (tripod shots!). Under these specific
conditions, there will be a motion blur glitch on the first frame. If this
occurs, fix any of the preconditions listed in this warning, for example
by setting up a coordinate system or switching to XYZ axis ordering.

On a related note, you should always use linear interpolation between
keys in your 3-D application (SynthEyes sets this as the default where possible).
If you use a spline-based interpolation, small amounts of jitter on the frames can
amplify to much larger excursions between the frames, and excessive motion
blur: this should be avoided.

Generic 2-D Tracker Exporters


There are a number of similar exporters that all output 2-D tracker paths to
various compositing packages. Why 2-D, you protest? For starters, SynthEyes's
tracking capabilities can be faster and more accurate. But even more
interestingly, you can use the 2-D export scripts to achieve some effects you
could not with the compositing package alone.
For image stabilizing applications, the 2-D export scripts will average
together all the selected trackers within SynthEyes, to produce a synthetic very
stable tracker.
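
Conceptually, the averaging works frame by frame, along these lines (an
illustrative sketch, not the exporter's actual code):

    def average_trackers(tracker_paths):
        # tracker_paths: list of paths, each a list of (u, v) pairs, one per frame.
        n = len(tracker_paths)
        return [(sum(u for u, _ in pts) / n, sum(v for _, v in pts) / n)
                for pts in zip(*tracker_paths)]
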
For corner-pinning applications, you can have SynthEyes output not the 2-
D tracker location, but the re-projected location of the solved 3-D point. This
location can not only be smoother, but continues to be valid even if the tracker
goes off-screen. So suppose you need to insert a painting into an ornate
picture-frame using corner pinning, but one corner goes off-screen during part of
the shot. By outputting the re-projected 3-D point ("Use solved 3-D points"
checkbox), the corner pin can be applied over the entire shot without having to
guess any of the path.
Taking this idea one step further, you can create an extra point in 3-D in
SynthEyes. Its re-projected 2-D position will be averaged with any selected
trackers; if there are none, its position will be output directly. So you can do a
four-corner pin even if one of the corners is completely blocked or off-screen.
By repeating this process several times, you can create any number of
synthetic trackers, doing a four-corner insert anywhere in the image, even where
there are no trackable features. Of course, you could do this using a 3-D
compositing environment, but that might not be the simplest approach.
At present, there are compatible 2-D exporters for After Effects, Digital
Fusion, Discreet (Combustion/Inferno/Flame), Particle Illusion, and Shake. Note
that you will need to import the tracker data file (produced by the correct


SynthEyes exporter) into a particular existing tracker in your compositing
package.
There is also a 2-D exporter that exports all tracker paths into a single file,
with a variety of options to change frame numbers and u/v coordinates. A similar
importer can read the same file format back in. Consequently, you can use the
pair to achieve a variety of effects within SynthEyes, including transferring
trackers from SynthEyes file to SynthEyes file, as described in the section on
Merging Files and Tracks. This format can also be imported by Fusion.

Generic 3-D Exporters


There are several 3-D exports that produce plain text files. You can use
them for any software SynthEyes doesn't already support, for example, non-visual-
effects software. You can also use them as a way to manipulate data with little
shell, AWK, or Perl scripts, for example.
Importantly, you can also use them as a way to transfer data between
SynthEyes scene files, for example, to compute some tracker locations to be
used by a number of shots. There are several ways to do this, see the section on
Merging Files and Tracks.
The generic exports are Camera/Object Path for a path, Plain Trackers for
the 3-D coordinates of trackers and helper points, and corresponding importers.
You can import 3-D locations to create either helper points, or trackers. This
latter option is useful to bring in surveyed coordinates for tracking.
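
As an example of the kind of small script you might write around such a file,
here is a hypothetical Python sketch that rescales exported 3-D points. The
"name x y z" per-line layout is an assumption for illustration only; check the
actual file produced by the exporter before relying on it.

    # Hypothetical sketch only; the real exported layout may differ.
    def rescale_points(in_path, out_path, scale):
        with open(in_path) as fin, open(out_path, "w") as fout:
            for line in fin:
                name, x, y, z = line.split()     # assumed layout: name x y z
                fout.write("%s %g %g %g\n" %
                           (name, float(x) * scale, float(y) * scale, float(z) * scale))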

After Effects 3-D Javascript Procedure


The preferred method of exporting from SynthEyes to After Effects is via
the After Effects Javascript exporter. This is a very sophisticated 3D exporter that
produces excellent fidelity and flexibility into After Effects, including the ability to
transfer cards, moving object rigs, and distortion. There is also a special 2-D
exporter that creates four-corner pins for planar trackers, and a 2-D copy/paste
exporter.

Important: before you track! When opening a shot in SynthEyes
that you will transfer to After Effects, be sure to open the first frame in
the sequence, even if you will track only a portion. After Effects will
always go back to the first frame, putting SynthEyes and After Effects
out of sync if you start in the middle. Open the first frame, then use the
Start/End controls on the Shot setup panel or the time bar to adjust the
section you will use. If you mess this up, you will have to copy the
sequence to another folder without the earlier images, so After Effects
doesn't see them.

The exporter asks about your After Effects version and customizes its
export to match. Be sure to select the correct version. There are settings for
versions from CS3 onwards (to match distortion in After Effects, CS4 or later is


required). Note that depending on the details of Adobe releases, there may not
be a setting for your particular version; if not, use the setting for the last version
preceding yours.
Using the Export Action control, you can have SynthEyes start After
Effects automatically, as soon as you run the export. Or, you can save the file to
run later, or on a different machine. If you save the file to run it in After Effects
later, do so by selecting File/Scripts/Run Script File... in After Effects, then select
the file you exported from SynthEyes. The exporter controls are described below.

Important: If the Run now options aren't starting After Effects, you
probably don't have the right AE version selected on the exporter. If it
is installed to a different location, see the controls listing below.

Demo version: Sorry, the demo version is not able to start external
applications, including After Effects. The file will be written and can be
run using After Effects' File/Scripts/Run Script.

The Add Cards operation within SynthEyes's perspective view is very
powerful; you may find it easiest to set up your 3D layers within SynthEyes's
more powerful 3D environment, then export the cards to After Effects, where
they become layers with the texturing already applied.
After Effects does not support meshes directly, as they are somewhat out
of its scope; they are not exported (except for Cards). However, the
adventuresome might export meshes from SynthEyes, import them in
Photoshop, then load the Photoshop "image" containing the mesh into After
Effects. Judicious use of layers is a simpler approach!

Note for CS5 or earlier: the After Effects Javascript exporter exports
SynthEyes lights, but it is not possible for a script to set the correct
type of the light before CS5.5. So you will need to set their type to
Point or Directional accordingly after completing the import. The default
is Spot, so it is never right! Adobe fixed this in CS5.5.

Everything is Measured in Pixels!


After Effects uses "pixels" as its unit within the 3-D environment, not
inches or feet (ie it does not convert units at all). The default SynthEyes
coordinate system setup keeps the world less than 100 units across. As AE
interprets that as pixels, your 3-D scene can appear to be quite small in AE, as is
the case in the tutorial on the web site, which is why we had to scale down the
object we created and inserted in AE. It is much easier to adjust the coordinate
system in SynthEyes first, so the 3-D world is bigger, for example by changing
the coordinates of the second point used in coordinate system setup from 20,0,0
to be 1000,0,0, say. You can also use the Extra Scaling parameter of the
exporter to produce larger scenes, especially if you are exporting to several
different packages.


Exporter Controls
Here is a quick description of the parameters. Note that the names may
change slightly, or additional parameters may be added.

Important: Be sure that you configure the AE Version setting for your
AE version, or the version of whoever will be running the export, or
errors will occur when the script is run, or SynthEyes may not be able
to start AfterEffects automatically.

Export Action The script can be written to create a new project, to add (a
new comp) to an existing After Effects project, to add (new
layers) to the selected or first existing After Effects comp in
the current project, or to update the previous export. See the
Updating After Effects Projects section. There are two
versions of each type, Run Now and Don't Run. Both
produce the javascript, the Run Now version causes After
Effects to run it immediately.
For AE Version Select the version of After Effects you are using; more
precisely, the version you want to export for.
Timeline setup Three choices controlling how the shot is positioned in time
on the After Effects timeline. One or the other option may be
more useful for your needs. Active part: the active portion of
the shot is positioned at the start of the composition, with the
same displayed frame numbers as in SynthEyes. Entire
shot: the composition will be the entire duration of the
original shot, with the After Effects "work area" controls set
up to indicate the tracked portion of the shot. Match frames:
the shot is repositioned so that each frame number has the
position dictated by its file name(number). The composition
may be the same size (first frame# is 0), one frame longer
(first frame# is 1), or substantially longer (a larger frame#). If
you anticipate editorial changes, we recommend opening
the maximum shot size in SynthEyes initially, and tracking
the desired portion in SynthEyes, and exporting using the
"Entire Shot" option, not Match Frames.
Display in frame#s When checked, the After Effects user interface will operate
using frame numbers, matching SynthEyes. Unchecked, it
operates in seconds, making comparison to SynthEyes more
difficult.
Extra Scaling Multiply all the position numbers in SynthEyes by this value,
making the scene larger in After Effects (which measures in
pixels)
Force CS centering Offsets the SynthEyes coordinates so that the SynthEyes
origin falls at the center of the After Effects workspace.
Although this makes the coordinate values less convenient


to work with, it reduces the amount of zooming and middle-panning in the After
Effects Top/Left/Front/etc views.
Include all cameras Export each camera in the scene in its own composition. If
unchecked, only the active camera is exported.
Layer for moving objects When checked, a 3D layer will be produced for each
moving object, and trackers and meshes will be parented to
it, as appropriate. When unchecked, no layer will be
produced, and those trackers and meshes will be animated
frame-by-frame.
Include Trackers No/All as regular/Planar as such. On No, trackers are not
exported at all. Otherwise, a null layer will be produced for
each non-planar tracker. When Planar as such is selected,
planar trackers will be exported as planar trackers,
producing a matching placeholder layer ready to receive
imagery or effects. When All as regular is selected, both
regular and planar trackers appear as nulls. Note that
generally you should use the Exportable checkbox on the
Coordinate System panel to control which trackers are
exported, to reduce clutter in After Effects. AE is not
particularly efficient at handling hundreds or thousands of
trackers.
Camera Shutter Controls the composition's shutter angle and phase,
separated by a comma. The current After Effects default of
180 corresponds to film. Keeping the shutter opening
centered (-90, or -half the angle) prevents relative shifts
between the imagery and animation.
Relative Tracker Size Controls the size of the tracker nulls in After Effects;
adjust and repeat as necessary.
Sort back to front on Choice: None/Start Frame/Middle Frame/End Frame.
When set to one of the frames, the tracker layers are
stacked in order, from near on top to far at the bottom, as
measure on the selected frame. When None is selected,
they are stacked in their creation order, ie typically by tracker
number.
Tracker Anchor Point Selects which corner of the layer will be placed at the
exact 3D location of the tracker. AE defaults to the Top Left,
but for example if you are adding trees at the tracker
location, the bottom center might be a better choice.
Tracker Facing Direction Controls the forward-facing direction of the layers
created for the trackers. Use to make the tracker nulls lie flat
or stand up straight, whatever is more convenient. For
"Camera @ start", @ middle, and @end, the trackers all face
the camera on the specified frame. If the control is set to
Custom, the custom rotation controls below are used.
Custom X,Y,Z Rotation (deg) These control the orientation of the tracker null
layers within the 3D workspace when the Tracker Facing



commas.
Send shadowing When checked, transmits the Cast Shadows and Catch
Shadows checkboxes for cards from SynthEyes to After
Effects. If unchecked, After Effects defaults prevail.
Send Distortion When checked, a SynthEyes Distortion (CS6/CC) or
Undistort (CS4/5) effect will be applied to the background
footage layer, to match the distortion calculated by the solver
(on the Lens panel). See below for information on distortion.
When unchecked, no effect will be created: if there is a
non-zero distortion value in SynthEyes, the scene will
not match in After Effects, and you must remove the
distortion using the SynthEyes image preprocessor, re-solve,
and then export with no distortion.
Filter Type Selects the type of filtering used by the SynthEyes Distortion
node in CC/CS6, which affects the resulting image
sharpness or softening. Builtin uses AE's builtin sampling (ie
bi-linear), controlled by Draft/Better/Best layer quality
settings. The other settings match those on the SynthEyes
image preprocessor, offering tradeoffs of improved image
quality (sharpness) with a potential for increased noise,
ringing, and/or processing time.
Custom AfterFX.exe location (PC) If you have installed After Effects in a
non-standard location, such as a different drive, you should
click Browse and find and select the AfterFX.exe file from
your installation, so that Run Now modes can find it. Can
also be used to export scripts for a CS6 customer, but run
CC on your machine, say.
Custom AfterFX version (Mac) The application name to be started by Run
Now, such as Adobe After Effects CC. Can be used to
export scripts for a CS6 customer, but run CC on your
machine, say.

Enough about distortion! If distortion is present and exported, the exporter will
warn you about installing the SynthEyes effects. Once you
have done that, check this box to silence the message.
Updating After Effects Projects
The "New project" action in the After Effects export builds a new project
file from scratch, ie it does a File/New within After Effects. You will be prompted
to save any existing project by After Effects.
The "Add to project" action assumes you already have some non-
SynthEyes project, to which you would like to add the SynthEyes export as a new
comp. This option will do that.


Tip: When you do an Add to Project, there will be no question about
Save/Don't Save/Cancel, because the existing project is not being
destroyed. If you get that question, you exported with New project
instead of Add to project, so you should cancel to avoid losing unsaved
changes in the current project!

If you select "Add to comp," then the layers for the export will be added to
an existing comp in the existing project: either the currently-active comp, or to the
first comp if none are active. Usually you will want to "Add to project," however.
If the "Update existing export" action is selected, the script will modify the
existing, already-open, After Effects project that you have already exported this
exact same scene to. This is helpful when you already have worked on an After
Effects scene, adding additional layers, etc, but have updated some of the
tracking in SynthEyes and don't want to lose what you added in After Effects.
The update script will update:
camera and object paths
lens distortion information
tracker positions and orientations
plane positions and orientations
light positions
Any new trackers, planes, or lights will be added (though new trackers will
always appear at the top of the layer stackup). The script will not add new shots
or moving objects. Trackers, planes, or lights that have been deleted from
SynthEyes will not be deleted from the After Effects scene.
When you update an existing scene, elements you have added within
After Effects will not be updated in any fashion, since SynthEyes does not know
about them. So it is desirable to keep the modified 3D scene as similar as
possible to the original scene, so that elements don't have to be unnecessarily
re-positioned.
To achieve that, you should be sure to configure a coordinate system
setup within SynthEyes, so that the 3D scene will have the same, or as similar as
possible, positioning, orientation, and scaling from one solve to another.
If you need to add a few trackers, you might want to make them zero-
weighted-trackers (ZWTs), and they will not affect the existing camera solve and
layer positioning at all.
Matching SynthEyes Distortion in After Effects
If SynthEyes calculates a solve that contains distortion, the 3D solve and
footage will match only if a corrected version of the original footage is generated.
Often that means outputting the source footage from the image preprocessor to
produce a linearized (corrected) version, using one of the Lens Distortion
workflows.


However, it is possible to do that correction on the fly, within After Effects,
which makes the workflow simpler. SynthEyes includes special effects that
reproduce (some of) the lens distortion functionality of the SynthEyes image
preprocessor within After Effects.

Usage of plugins by others. SynthEyes Demo is for non-commercial
use. As a specific exemption, we permit someone who does not own a
SynthEyes license to use the After Effects plugins from SynthEyes
Demo commercially for the sole purpose of working with AE files
exported by a commercial SynthEyes license owner. You may not
distribute the plugins yourself.

The SynthEyes Distortion effect is a much better choice than trying to use
an AE distortion filter: it offers an exact match to the computed distortion, offers
better image-quality filtering options, and it integrates cleanly into proper lens-
distortion workflows.
Lens distortion for After Effects CC/CS6
For CC/CS6, we supply a native After Effects Effect (see below for
installation). Be sure to set the After Effects version to CS6 or CC as appropriate
when exporting.
You can use the After Effects exporter and distortion effect in the following
situations:
1) After doing a solve, using the Calculate Distortion control to calculate a
distortion value on the main Lens panel. This method doesn't control the
delivered imagery particularly well.
2) After solving and then clicking the Lens Workflow button on the
Summary Panel, using the 1-pass deliver-undistorted method.
3) After solving and then clicking the Lens Workflow button on the
Summary Panel, using the 2-pass deliver-distorted method.
In each case, when you do the export, a matching SynthEyes Distortion
effect will be created, so that the raw footage will be undistorted within After
Effects, and you can add effects in its 3D environment that line up with the
undistorted imagery.
To help out in case #3 above, the export script produces an additional
comp that redistorts the 3D environment back to match the original distorted
imagery. This comp is named ReCamera01 for the usual Camera01; the 3D
environment is Camera01_3D. (For cases #1 and #2 just ignore the extra comp).
To use this 2-pass setup, add your 3D elements inside the Camera01_3D
environment. Turn off visibility for the original footage layer within Camera01_3D,
so that its output is only the added elements. The added elements get
redistorted then overlaid onto the original distorted imagery within ReCamera01.

Image Mapper
There's an additional lens-distortion component available for those who
may wish to use it. It implements the map-based distortion approach within
After Effects. It can be used to apply lens presets not supported by the main After
Effects distortion plugin, for example.
To use it, apply the SynthEyes Image Mapper effect to the layer
containing the image map, ie the still image or image sequence. Then on its
Effect Controls, set the Image to map to be the appropriate layer.
The composition should have the same resolution as the map: this will be
the resolution of the mapped image. You will also have to set the project settings
to 16 or 32 bits per channel depth.

IMPORTANT: After Effects does everything using a single consistent bit depth, either 8, 16, or 32 bits per channel. Since image maps must be 32-bit floating or at least 16 bits per channel, this means that your entire project must be set to, and processed as, 32-bit or 16-bit. This will affect processing time, temporary storage requirements, and even effects availability or functionality in some cases. This is an After Effects limitation over which we have no control. You can still do your final render to your desired bit depth.

Installation for CC/CS6
To be available, the SynthEyes effects must be installed in After Effects
(possibly, each version you have installed on your machine). The effects are
located within the plugins folder of your SynthEyes installation folder (click
Script/System Script Folder then go up a level to see plugins).
Windows:
1. Create a new "SynthEyes" folder within the (existing) folder:
C:\Program Files\Adobe\Adobe After Effects CC 20yy\Support
Files\Plug-ins\Effects
2. Copy SyAFXLens.aex, SyAFXMapper.aex, and SyAFXVRS.aex
into the new SynthEyes folder
3. Restart After Effects if it is already running
Mac OS X:
1. Create a new "SynthEyes" folder within the (existing) folder:
/Applications/Adobe After Effects CC 20yy/Plug-ins/Effects
2. Copy SyAFXLens.plugin, SyAFXMapper.plugin, and
SyAFXVRS.plugin into the new SynthEyes folder
3. Restart After Effects if it is already running
Lens distortion for CS4/CS5
The exporter can use the Pixel Bender effects for CS4 and CS5. The
exporter configures these only for case #1 above: distortion on the solver.

You can manually configure the SynthEyes Pixel Bender effects to implement one-pass and two-pass lens distortion workflows. A third "Advanced Distortion" node handles undistortion and redistortion that matches the complex high-order distortion and off-centering generated by the SynthEyes lens preset generator. See the tutorials on our SynthEyesHQ YouTube channel for details on usage.
We have provided a matching "redistort" filter which allows you to redistort
footage to match, and be re-composited with, the original footage. You can read
more about Lens Workflow in this manual.
The SynthEyes/After Effects Pixel Bender filters perform only bi-linear interpolation, which softens the images slightly compared to the Lanczos and Mitchell-Netravali options available in SynthEyes. So you may still find it desirable to use SynthEyes for the better filtering and its additional preprocessing options.
Installation for CS4/CS5
To use the Pixel Benders, you must install three files, seadvdistort.pbk,
seundistort.pbk, and seredistort.pbk, for After Effects to use. These files are in
the SynthEyes plugins folder (click Script/System Script Folder then go up a level
to see plugins) and get installed to the following folders:
Win XP C:\Documents and Settings\username\My Documents\Adobe\Pixel Bender\
Win 7/Vista C:\Users\username\Documents\Adobe\Pixel Bender\
Mac OS ~/Documents/Adobe/Pixel Bender/
Note that you may have to create the folder. The folder is specific to your userid; other users of the same machine, if it is shared, will have to do the same thing. After Effects will notice the filters when it next starts up.

After Effects 3-D "Maya ASCII" Procedure (older method)


This is an older way of exporting to After Effects; it is generally more
limited, for example, it does not export cards or moving objects. We include this
procedure for reference in case your facility is using it in its workflow.
1. Export to After Effects in SynthEyes to produce a (special) .ma file.
2. In After Effects, do a File/Import File
3. Change "Files of Type" to All File Formats
4. Select the .ma file
5. Double-click the Composition (Square-whatever composition in older AE
versions).
6. Re-import the original footage
7. Click File/Interpret Footage/Main and be sure to check the exact frame
rate and pixel aspect. Be especially careful with 23.976 and 29.97,
entering 24 or 30 fps will cause subtle errors!
8. Rewind to the beginning of the shot
9. Drag the re-imported footage from the project window into the timeline as
the first layer

10. Tracker nulls have a top-left corner at the active point, instead of being
centered on the active point as in SynthEyes.

Important note: After Effects uses "pixels" as its unit within the 3-D
environment, not inches or feet (ie it does not convert units at all). The
default SynthEyes coordinate system setup keeps the world less than
100 units across. As AE interprets that as pixels, your 3-D scene can
appear to be quite small in AE, as is the case in the tutorial on the web
site, which is why we had to scale down the object we created and
inserted in AE. It is much easier to adjust the coordinate system in
SynthEyes first, so the 3-D world is bigger, for example by changing
the coordinates of the second point used in coordinate system setup
from 20,0,0 to be 1000,0,0, say. You can also use the rescaling parameter of the exporter to produce larger scenes, especially if you are exporting to several different packages.

After Effects 2-D Planar Exporter


This exporter sends 2- or 3-D planar trackers to After Effects as animated
four-corner pins. If you are only going to be adding 2-D effects in After Effects
from planar trackers, such as a monitor insertion or logo replacement, and would
rather work in After Effects's more familiar 2-D environment, this is an excellent
choice.
You'll find it listed as AfterEffects 2-D from Planars on the Export menu, or
as the AE Corner Pin button on the Planar Tracking script bar.
The 2-D planar exporter has the same ability to Run Now as the 3-D
exporter, and supports New, Add, and Update options. You'll find all the controls
described above in the 3-D exporter writeup.
But remember, this exporter exports only planar trackers (2- or 3-D).

After Effects 2-D Single-Tracker Path Procedure


This procedure exports a single tracker path to a null in After Effects. If
multiple trackers are selected, they are averaged together to produce a lower-
noise version.
1. Select one or more trackers to be exported.
2. Export using the After Effects 2-D Clipboard. You can select either the 2-D
tracking data, or the 3-D position of tracker re-projected to 2-D.
3. *Open the text file produced by the export
4. *In the text editor, select all the text, using control-A or command-A.
5. *Copy the text to the clipboard with control-C or command-C.
6. In After Effects, select a layer to receive the path.
7. Go to the first frame of the shot.
8. Paste the path into it with control-V or command-V.

Tip: *The SynthEyes export puts the contents of the file onto the
clipboard immediately by itself. As long as you proceed immediately to
After Effects, you can immediately paste into the layer, without having
to open the exported file. This can save quite a bit of time. You only
need to open the file if you come back to it later.

Alembic 1.5+
Alembic (http://www.alembic.io) is a multi-application interchange format
along the general lines of Filmbox (FBX). It is a bit more tailored than Filmbox,
however, specifically emphasizing meshes, especially large animated meshes
such as those from GeoH tracking. While Alembic contains the mesh data and
hierarchy, it does not talk about how the data was created, so that it might be
changed by a downstream application. Example uses include sending final
animations to renderers, or to send simulation results into an animation program.
See their website for more information on Alembic philosophy.

NOTE: This exporter uses the "Ogawa" back end of Alembic 1.5 and
later, which is faster and produces smaller files than earlier Alembic
libraries. But... earlier Alembic libraries are unable to read Ogawa files,
so you will need versions of your applications that also use Alembic 1.5
or later. Alembic 1.5 was released in early 2014.

Alembic does not include match-move shot image setup or mesh texturing
or even such simple capabilities as assigning different colors to various trackers
or meshes, so it is a bit less friendly than other formats (such as FBX). Alembic's
strengths are speed and ability to handle large files including animated geometry
within the file itself, rather than as separate point caches. Some users may prefer
it based on prior experiences with FBX; it remains to be seen whether Alembic
importers work better on average than Filmbox importers. SynthEyes offers it as
an option; you can determine if it meets your particular workflow needs.
Alembic Controls
Export which shots. Selector. Export only the active shot (ie the shot of the
current Active Tracker Host), all shots with the same frame rate as the
active shot, all shots with the same frame rate and start/end frame values
(typically, a stereo pair), or all shots. When a shot is exported, the camera
and all moving objects and their children are exported, as well as meshes
parented to them, and unparented meshes.
Timeline setup. Selector. Controls where the active portion of the shot winds up
in the downstream application, roughly equivalent to asking what part of
the timeline is exported. With "Active part", the first active frame of the
shot becomes the first frame on the timeline downstream. The shot
imagery should be set up so that the active part of the shot starts at the
beginning of the timeline. With "Entire shot", if the first active frame is #15,
it will be #15 in the downstream application: the entire original shot should
be applied starting at the beginning of the downstream timeline. With "Match Frames" (for image sequences), the animation is output so that, for example, image shot27_img0200.png will appear at frame 200 in the downstream application, even if SynthEyes was only given the shot starting at frame 100, and only frame 150 onwards is being used.
Output Axis Setting. Selector. Select Y-Up output, which is typical for Alembic,
or optionally Z-Up output, which may be useful for 3dsmax.
Additional scene scaling. Numeric. Use this to scale the entire scene up or
down by that factor, for example to make it 10x larger or perhaps apply
some units conversion (Alembic doesn't have any idea about "units"). A
definitive scene scaling should always be set up in SynthEyes, this is only
for adjusting it to accommodate some downstream application.
Create trackers chisels. Checkbox. When checked, small inverted pyramids
(chisels) will be created at each tracker location.
Chisel size override. Numeric. When zero, the size of the chisel is determined
automatically based on the SynthEyes world size value. Set a specific
value here, if desired. It is a "SynthEyes" value: the Additional scene
scaling will be applied to this value.
Create screen. Checkbox. When checked, a projection screen will be created to
be the eventual recipient of the shot imagery as a backdrop during effects
development.
Screen's Distortion Mode. Selector. The screen can be built in such a way that
it compensates for the distortion calculated during the solve, ie on the
Lens panel. It can also be built to re-apply that distortion, if needed.
Screen distance override. Numeric. The projection screen is normally (ie when
this value is zero) located a distance from the camera determined from the
Solver panel's world size value. Set a specific value here, if desired. It is a
"SynthEyes" value: the Additional scene scaling will be applied to this
value
Screen vertical grids. Numeric. The number of grids in the vertical direction for
the generated screen. The horizontal number is determined by multiplying
this value times the image aspect ratio. A lower number is suitable if there
is no distortion, but an increasingly larger value should be used if image distortion is present.
Set far and near clips. Checkbox. If set, the Alembic file will contain the clipping
values used by SynthEyes, ie on the Perspective View Settings panel.
Downstream applications may or may not use these values if they are
present.

Bentley MicroStation
You can export to Bentley's MicroStation V8 XM Edition by following
these directions.
Exporting from SynthEyes
1. MicroStation requires that animated backgrounds consist of a consecutive
sequence of numbered images, such as JPEG or Targa images. If necessary, the Preview Movie capability in SynthEyes's Perspective window can be used to convert AVIs or MOVs to image sequences.
2. Perform tracking, solving, and coordinate system alignment in SynthEyes.
(Exporting coordinates from MicroStation into SynthEyes may be helpful)
3. File/Export/Bentley MicroStation to produce a MicroStation Animation
(.MSA) file. Save the file where it can be conveniently accessed from
MicroStation. The export parameters are listed below.
SynthEyes/MicroStation Export Parameters:
Target view number. The view number inside MicroStation to be
animated by this MSA file (usually 2)
Scaling. This is from MicroStation's Settings/DGN File Settings/Working
Units, in the Advanced subsection: the resolution. By default, it is listed as 10000
per distance meter, but if you have changed it for your DGN file, you must have
the same value here.
Relative near-clip. Controls the MicroStation near clipping-plane
distance. It is a relative value, because it is multiplied by the SynthEyes world
size setting. Objects closer than this to the camera will not be displayed in
MicroStation.
Relative view-size. Another option to adjust as needed if everything is
disappearing from view in MicroStation.
Relative far-clip. Controls the MicroStation far clipping-plane distance. It
is a relative value, because it is multiplied by the SynthEyes world size setting.
Objects farther than this from the camera will not be displayed in MicroStation.
Importing into MicroStation
1. Open your existing 3-D DGN file. Or, create a new one, typically based on
seed3d.dgn
2. Open the MicroStation Animation Producer from
Utilities/Render/Animation
3. File/Import .MSA the .msa file written by the SynthEyes exporter.
4. Set the View Size correctly; this is required to get a correct camera match.
a. Settings/Rendering/View Size
b. Select the correct view # (typically 2)
c. Turn off Proportional Resize
d. Set X and Y sizes as follows. Multiply the height(Y) of your image,
in pixels, by the aspect ratio (usually 4:3 for standard video or 16:9
for HD) to get the width(X) value. For example, if your source
images are 720x480 with a 4:3 aspect ratio, the width is 480*4/3 =
640, so set the image size to X=640 and Y=480, either directly on
the panel or using the Standard drop-down menu. This process
prevents horizontal (aspect-ratio) distortion in your image.
e. Hit Apply

f. Turn Proportional Resize back on
g. Close the view size tool
5. On the View Attributes panel, turn on the Background checkbox.
6. Bring up the Animation toolbar (Tools/Visualization/Animation) and select
the Animation Preview tool. You can dock it at the bottom of MicroStation
if you like.
7. If you scrub the current time on the Animation Preview, you'll move through your shot imagery, with synchronized camera motion. Unless you have some 3-D objects in the scene, you won't really be able to see the
camera motion, however.
8. If desired, use the Tools/3-D Main/3-D Primitives toolbar to create some
test objects (as you probably did in SynthEyes).
9. To see the camera cone of the camera imported from SynthEyes, bring up
Tools/Visualization/Rendering, and select the Define Camera tool. Select
the view with the SynthEyes camera track as the active view in the Define
Camera tool, and turn on the Display View Cone checkbox.
Transferring 3-D Coordinates
If you would like to use within MicroStation the 3-D positions of the
trackers, as computed by SynthEyes, you can bring them into MicroStation as
follows.
1. You have the option of exporting only a subset of points from SynthEyes
to MicroStation. All trackers are exported by default; turn off the
Exportable checkbox on the coordinate system panel for those you don't wish to export. You may find it convenient to select the ones you want,
then Edit/Invert Selection, then turn off the box.
2. In SynthEyes, File/Export/Plain Trackers with Set Names=none, Scale=1,
Coordinate System=Z Up. This export produces a .txt file listing all the
XYZ tracker coordinates.
3. In MicroStation, bring up the Tools/Annotation/XYZ Text toolbar.
4. Click the Import Coordinates tool. Select the .txt file exported from
SynthEyes in Step 2. Set Import=Point Element, Order=X Y Z, View=2 (or
whichever you are using).
Transferring Meshes
SynthEyes uses two types of meshes to help align and check camera
matches: mesh primitives, such as spheres, cubes, etc; and tracker meshes, built
from the computed 3-D tracker locations. The tracker meshes can be used to
model irregular areas, such as a contoured job site into which a model will be
inserted. Both types of models can be transferred as follows:
1. In SynthEyes, select the mesh to be exported, by clicking on it or selecting
it from the list on the 3-D panel.
2. Select the File/Export/STL Stereolithography export, and save the mesh to
a file.

3. In MicroStation, select File/Import STL and select the file written in step 2.
You can use the default settings.
4. Meshes will be placed in MicroStation at the same location as in
SynthEyes.
5. You can bring up its Element/Information and assign it a material.
To Record the Animation
1. Select the Record tool on the Animation toolbar
(Tools/Visualization/Animation)
2. Important: Be sure the correct (square pixels) output image size is
selected, the same one as the viewport size. For example, if your input is
4:3 720x480 DV footage, you MUST select 640x480 output to achieve 4:3
with square pixels (ie 640/480 = 4/3). MicroStation always outputs square
pixels. You can output images with any overall aspect you wish, as long
as the pixels are square (pixel aspect ratio is 1.0). Note that HD images
already have square pixels. (A worked sketch of this size calculation follows this list.)
3. Don't clobber your input images! Be sure to select a different location for
your output footage than your input.
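
As noted in step 2, the output size must use square pixels. Here is that size calculation as a small Python sketch (a worked example only, using the same 720x480 DV values as above; substitute your own footage's height and overall aspect ratio):

    height = 480             # source image height in pixels
    aspect = 4.0 / 3.0       # overall image aspect ratio (use 16.0/9.0 for HD)
    width = int(round(height * aspect))
    print(width, height)     # prints "640 480": render at 640x480 for square pixels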

Blender Directions
Blender has a tendency to change around frequently, so the details of
these directions might best be viewed more as a guide than the last word. Unlike
with commercial software, new versions of Blender may not run perfectly good
scripts that ran in previous Blender versions, such as scripts produced by
SynthEyes. If you have to use a very recent Blender version that can no longer
run the SynthEyes-produced files, you can use an older working version of
Blender that can import your SynthEyes export. Import your SynthEyes output
there, save that scene, then load that scene into the latest version of Blender,
which typically retains backward compatibility for at least scene files. Emailing a
detailed description of required changes in the blender script will facilitate a
timely update.

Important: Be sure to adjust the Blender version number selector on the exporter's option panel to reflect the version of Blender you are trying to export to. It's OK if your exact version isn't listed, just use the listed version immediately prior to yours.

Blender Directions
The normal Blender exporter handles shot and texture imagery as well as
meshes; it is a full-function export. The shot imagery is placed on a "projection
screen" --- a piece of physical geometry --- as an animated texture.

Important: When importing OBJ meshes from blender, you should be sure to delete facet normals supplied by Blender. The blender export does not renumber vertices, because blender uses a one vertex/one texture coordinate system similar to SynthEyes. As long as you delete normals, the export will maintain the same vertex numbers as in Blender.

The Blender exporter has some limited support for using the Cycles
panorama camera when exporting pure VR-mode = Present or Apply shots. This
permits a simpler insertion workflow for 360 VR shots with Blender. You will have
to set up the texturing and other materials within the Cycles environment
yourself.
When exporting, animated meshes will result in point cache files being
placed in a secondary folder, ie exporting scene37b will utilize a scene37b_bpc
folder if there is deforming geometry.
1. In SynthEyes, start the export to Blender (Python)
2. Select a value in the top Blender version dropdown corresponding to
your version of Blender. There are only a few settings, corresponding to
general ranges of blender versions. For example, use "2.58+" for blender
versions after 2.58, up until the next available setting, ie "2.66+".
3. Adjust the Screen RelDist. and Clipping RelDist. The projection screen
distance is a multiple of the SynthEyes Solver panel's World Size value;
the screen should be further away than most or all of your trackers. The
Clipping RelDist. sets the far clipping plane; it should be past the
projection screen and trackers.
4. You can use the "Remove path prefix" and "Add path prefix" settings to
modify exported filenames, typically if you are outputting the file on one
computer, but will be opening it in blender on another. For example,
remove C:\Shots and add /Volumes/STORAGE/Shots.
5. You have the option to have a new instance of Blender open the exported
file immediately: check the "Open in Blender Automatically" checkbox. For
this to work, you must carefully set the "Blender application" field to point
to the location of your version of blender on your machine. The default
value will never be usable as is, but should show the general value
required.
6. With run automatically selected, you're all set to go! Click OK and blender
will open with the new file. You can look at the directions below for a few
more hints about using the file in Blender.

WARNING: using auto-run, be sure to close any OLD blender first! The new one can open so quickly it's hard to tell which is which, and if you're working on an old one you will get very confused.

If you don't have auto-run on, here's what to do to open the exported
python file in Blender.
1. With "run automatically" off, once you've completed the export from
SynthEyes (hit OK), start Blender
2. Change one of the views to the blender Text Editor
3. In the text editor, open the blender script you exported from SynthEyes.

4. Hit ALT-P or the Run button to run the script
5. In a 3-D view, use the Camera on the View menu to look through the
imported, animated, SynthEyes camera
6. On the View Properties dialog, you might wish to turn off Relationship
Lines to reduce clutter.
7. Use a Timeline view to scrub through the shot.
8. If parts of the scene or some of the trackers are inexplicably missing,
you probably needed to use larger projection screen or clipping
distances; export again with larger values.
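
As an alternative to the Text Editor, the exported script can also be run from Blender's Python Console using the standard exec idiom. This is only a minimal sketch; the file name and path are hypothetical, so substitute the .py file you actually exported from SynthEyes:

    # Run the SynthEyes-exported scene script in the current Blender session.
    filepath = "/Volumes/STORAGE/Shots/scene37b.py"   # hypothetical path
    exec(compile(open(filepath).read(), filepath, 'exec'))
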
Blender Directions - Before 2.57
(This older export has limited feature support.) When working with image
sequences and blender, it will be a good idea to ensure that the overall frame
number is the same as the number in the image file name. Although you can
adjust the offset, Blender incorrectly eliminates a frame number of zero.
1. In SynthEyes, export to Blender - Earlier (Python)
2. Start Blender
3. Delete the default cube and light
4. Change one of the views to the blender Text Editor
5. In the text editor, open the blender script you exported in step 1.
6. Hit ALT-P to run the script
7. Select the camera (usually Camera01) in the 3-D Viewport
8. In a 3-D view, select Camera on the View menu to look through the
imported, animated, SynthEyes camera
9. Select View/Background image
10. Click Use Background Image
11. Select your image sequence or movie from the selection list.
12. Adjust the background image settings to match your image. Make sure the
shot length is adequate, and that Auto Refresh is on. If the images and
animation do not seem to be synced correctly, you probably have to adjust
the offset.
13. Decrease the blend value to zero, or you can go without the background,
and set up compositing within blender.
14. On the View Properties dialog, you might wish to turn off Relationship
Lines to reduce clutter.
15. Use a Timeline view to scrub through the shot.

Cinema 4D Procedure
The Cinema 4D python exporter requires Cinema 4D Version R12 or later
without exception. To export to earlier versions, use the Lightwave export as
described below.

NOTE: Cinema4D works with whole-integer frame rates only. This can
be a problem with movie files at 29.97 or 23.976 fps, which will appear
as 29 or 23 fps. You will see sync errors even if you set the values to 30 or 24 fps. To work around this C4D limitation, use image sequences, changing the frame rate to the integer value before exporting.

New way: Cinema 4D Python Export


1. Select the Cinema 4D Python export in SynthEyes
2. In Cinema4D, go to the Python menu, User Scripts, then Run Script.
3. Use that to run the script exported from SynthEyes.
Old way: Cinema 4D Via Lightwave
1. Export from SynthEyes in Lightwave Scene format (.lws); see below.
2. Start C4D, import the .lws file, yielding camera and tracking points.
3. To set up the background, add a Background using the Objects menu
4. Create a new Texture with File/New down below.
5. At right, click on next to the file name for texture.
6. Select your source file (jpeg sequence or movie).
7. Click on the right-facing triangle button next to the file name, select Edit.
8. Select the Animation panel
9. Click the Calculate button at the bottom.
10. Drag the new texture from the texture editor onto the Background on the
object list. Background now appears in the viewport.

COLLADA Exports
The COLLADA export ("DAE" extension) is an industry-standard format
that is very capable and can directly export full set reconstruction information, ie
all object positions, meshes, and texturing. Unfortunately it is not necessarily
read effectively by all programs that read COLLADA files. Programs that read
Filmbox (FBX files) should also be able to read COLLADA files. You will have to
see what the capabilities are of the programs you use.
You can export COLLADA files using the "COLLADA DAE" export or via
the "COLLADA DAE via Filmbox" export. The former occurs via a standard Sizzle
script that you can modify, if needed. The latter comes via Autodesk's Filmbox
exporter, which can also export COLLADA. If you need to use COLLADA to get
into a particular package with no direct import and it has trouble, you might try
both COLLADA exports to see if that package reads one or the other better.
Similarly, in some software such as 3ds max and maya, there may be
three different ways to import COLLADA files: a built-in COLLADA importer; via
the Filmbox importer; or via various open-source COLLADA importers. The
capabilities of each of these can be expected to change over time.

DotXSI Procedure
1. In SynthEyes, after completing tracking, do File/Export/dotXSI to create a
.xsi file somewhere.
2. Start Softimage, or do a File/New.

3. File/Import/dotXSI... of the new .xsi file from SynthEyes. The options may
vary with the XSI version, but you want to import everything.
4. Set the camera to Scene1.Camera01 (or whatever you called it in
SynthEyes).
5. Open the camera properties.
6. In the camera rotoscopy section, select New from Source and then the
source shot.
7. Make sure Set Pixel Ratio to 1.0 is on.
8. Set Use pixel ratio to Camera Pixel Ratio (should be the default)
9. In the Camera section, make sure that Field of View is set to Horizontal.
10. Make sure that the Pixel Aspect Ratio is correct. In SynthEyes, select
Shot/Edit Shot to see the pixel aspect ratio. Make sure that XSI has the
exact same value: 0.9 is not a substitute for 0.889, so fix it! Back story:
XSI does not have a setting for 720x480 DV, and 720x486 D1 causes
errors! (A sketch of where the 0.889 value comes from follows this list.)
11. Close the camera properties page.
12. On the display mode control (Wireframe, etc), turn on Rotoscope.
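
For reference, the 0.889 value for 720x480 DV footage comes from dividing the overall 4:3 image aspect by the ratio of pixel counts, as in this small Python sketch of the arithmetic (mentioned in step 10 above):

    width, height = 720, 480
    image_aspect = 4.0 / 3.0
    pixel_aspect = image_aspect / (float(width) / height)
    print(round(pixel_aspect, 3))    # 0.889 -- not 0.9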

ElectricImage
The ElectricImage importer relies on a somewhat higher level of user
activity than normal, in the absence of a scripting language for EI. You can export
either a camera or object path, and its associated trackers.
1. After you have completed tracking in SynthEyes, select the camera/object
you wish to export from the Shots menu, then select File/Export/Electric
Image. SynthEyes will produce two files, an .obm file containing the
trajectory, and an .obj file containing geometry marking the trackers.
2. In ElectricImage, make sure you have a camera/object that matches the
name used in SynthEyes. Create new cameras/objects as required. If you
have Camera01 in SynthEyes, your camera should be "Camera 1" in EI.
The zero is removed automatically by the SynthEyes exporter.
3. Go to the Animation pull-down menu and select the "Import Motion"
option.
4. In the open dialog box, select "All Files" from the Enable pop-up menu, so
that the .obm file will be visible.
5. Navigate to, and select, the .obm file produced by SynthEyes. This will
bring up the ElectricImage motion import dialog box which allows you to
override values for position, rotation, etc.

Normally, you will ignore all these options as it is simpler to parent the
camera/object to an effector later. The only value you might want to
change is the "start time" to offset when the camera move begins. Click
OK and you will get a warning dialog about the frame range.

This is a benign warning that sets the "range of frames" rendering option to match the length of the incoming camera data. Hitting cancel will abort the operation, so hit OK and the motion data will be applied to the camera.
6. Select "Import Object" from the Object pull-down menu.
7. Enable "All Files" in the pop-up menu.
8. Select the .obj file produced by SynthEyes.
9. Create a hierarchy by selecting one tracker as the parent, or bringing in all
trackers as separate objects.
10. If you are exporting an object path, parent the tracker object to the object
holding the path.

Filmbox FBX
Filmbox (FBX) format is an Autodesk-controlled proprietary format that is
available to applications such as SynthEyes under simple free license terms. It is
widely supported across the industry, but not necessarily deeply: like COLLADA,
different applications will read different amounts of information from the file.
While we can write the information into the FBX file, we can't make any other
application read the file correctly. (See the end of this section for application-
specific filmbox tips.)
In particular, since FBX was originally intended as a means of motion-
capture data interchange, many applications do not read animated textures
correctly, which are necessary to set up the match-moved shot on the
background plate. So you may need to do that manually.
Despite these limitations, FBX files do contain a lot of data, including
multiple cameras with zoom; a projection screen that can be distorted by lens
distortion; moving objects; the trackers including far and mo-cap style trackers;
all regular and far meshes in the scene, including UV map, normals, and
texturing; and lights.
While SynthEyes has a Filmbox importer as well, the FBX format is not a
substitute for the SNI file: you cannot export a SNI file to an FBX, import the FBX,
and expect to see the same thing as the original SNI file. There is no equivalent
for much of the data in the SNI file in an FBX, so much data (including all tracking
data) will be lost.

Warning: Filmbox has a degeneracy in its rotation angles that will cause motion blur problems when the camera faces exactly to the back. We recommend setting up a proper coordinate system, which will usually avoid exact alignments.

When the SynthEyes scene is exported, the currently active shot is set up
as the main shot for downstream software. You can select which other cameras
will be exported...
The FBX exporter has a lot of options, which we'll describe here.
Export which cameras. This selects which cameras (and associated moving
objects, trackers, and far meshes) will be exported: only the currently-active camera (or camera of the currently-active moving object), all cameras with the same frame rate as the active camera's frame rate (typically to drop survey shots), or all cameras.
Output format. Filmbox can output in several different versions, including ASCII
or binary, or older versions. The binary version is usual: faster and
smaller. The ASCII exports can be examined by hand in a text editor.
FBX Version. Allows you to export files using older versions of the Filmbox
format.
Interpret SynthEyes units as. This option makes a note that the unit-less
SynthEyes values should be interpreted as the given unit by the importing
application. For example, you can tell the importing application to interpret
a 12.5 in SynthEyes as 12.5 inches or 12.5 meters (if the application
supports it).
Additional scaling. Multiplies all the SynthEyes coordinates by the given
number, typically to impose a units interpretation on an application that
ignores the units setting above. Use the SynthEyes coordinate system to
set up a proper scene scaling, not this!
Export Trackers. Keep this checked to export all exportable trackers; turn it off
to not export any trackers.
Marker Type. Trackers in SynthEyes will appear in downstream applications as
the given kind of object. FBX supports many marker types, but support for
them may be limited. The Chisel is a small piece of geometry and should
therefore be supported everywhere; it is the default. Null is another good
choice, as most packages support Null axes. Note that "None" is different;
it is a marker with no specific type, ie it will be determined by the importer.
Specify view. Says whether the field of view should be specified by a horizontal
or vertical value, or a focal length. This is the track that will be animated if
there is a zoom. The recommended default is horizontal. We do not
recommend focal length because it is dependent on back plate, which you
rarely know.
Relative tracker size. The geometric tracker size will be this fraction of the
camera's world size; used to auto-scale trackers to match the rough scene
size.
Relative far override. If this value is non-zero, it is multiplied by the world size to
determine how far the projection screen is placed from the camera.
Create screen. If set, a mesh projection screen is created and textured with the
animated shot imagery. This screen can be deformed by the lens
distortion to remove it. If not set, the shot imagery is supplied to the
downstream application as a background texture; it can not be deformed.

IMPORTANT: Once you have imported the FBX scene, do not modify
the projection screen in any fashion to correct any perceived
problem. If you do, you will destroy the match! Make any changes in
SynthEyes and re-export.

Vertical grids. If a projection screen is created, this is the number of vertical grid
lines. The horizontal count is computed based on the shot aspect ratio.
While a lower value can be used if the screen is not being deformed, a
much higher value may be needed for significant distortion.
Screen's Lens Mode. If set to Remove (Normal), the projection screen will be
deformed in order to remove any distortion computed during the solve
and present on the main Lens panel (not the values on the image
preprocessor's Lens panel!). This will let you see the match in your 3D
package, without it needing to understand SynthEyes lens distortion. Set
to Apply, the screen will be distorted in such a way that undistorted
imagery becomes distorted. You will need to know what to do with that.
Set to None, the projection screen will not be distorted in any way, the
lens distortion values will be ignored.
Use original numbering. When checked, SynthEyes will export the mesh using
the same vertex, normal, and texture indices as when the file was
imported, if this information is available. This helps workflows that rely on
these numbers being the same.
No repeating vertices. When checked, SynthEyes reworks the mesh data so
that no vertex position occurs more than once in the output. This will
prevent susceptible applications (Maya) from incorrectly creating seams at
repeated vertices (when UV coordinates or normals are present).
However, it results in larger and more complex FBX files compared to the
usual approach.
Use quads if possible. When set, the export will contain quads, where possible,
as well as triangles. When off, only triangles will be exported, which may
be necessary for some downstream packages.
Deformed mesh format. Select None, 3dsmax (PC2), Maya (MCX), or Bones. If
the file contains GeoH tracking that deforms any meshes, this setting will
determine whether a Skin deformer (bones) is exported, or alternatively
which file format is used for point cache files. The best setting depends on
the capabilities of your downstream application. Applications such as
Blender may or may not support bones, and may require one or the other
format. (PC2 is used for Blender, though you should typically use the
Blender exporter instead of the Filmbox, since Blender's support of
Filmbox is incomplete.)
Texture Lighting. Dropdown. Controls which objects will be lit by scene lights,
and which will be unlit (the right choice for objects textured with shot
imagery). You should usually use the default choice, allowing each
individual mesh to control this directly, as set from the Texture panel.
There's also an option to light the projection screen, largely for
compatibility with an earlier error.
Start at frame. Set this value to the first desired frame number, typically 0 or 1
(Maya uses 1). It corresponds to the first frame of the raw shot,
independent of any start/end framing you have set up in SynthEyes to limit
the part of the shot that is tracked. Read on for more options.

First *used* frame goes there. When set, images and animation are shifted so
that the first frame in the "used" portion of the shot will appear at the frame
number given by the "Start at frame" setting. For example, the raw shot
goes from frame 0 to 300, but only frames 100 to 200 are used in the edit
(see for example the top of the Shot/Edit Shot panel). With Start at frame
set to 1000 and this checkbox turned on, frame 100 will appear at frame
1000 in the downstream animation application. With this checkbox off, it
will appear at 1100.
Start at sequence's frame#. When set, the animation is shifted so that if an
image sequence starts at frame 39, so will the matching animation
downstream. This checkbox overrides the First *used* frame and Start at
frame settings above.
Embed Media. When set, the shot imagery and still textures will be embedded
inside the filmbox file. This prevents them from being misplaced and
makes them easier to move around, but makes the file that much larger.
Set far/near clips. When set, the SynthEyes near and far clipping distances are
put into the FBX file for consistency. When not set, these values will be left
to the discretion of the importing software.
Application-Specific Filmbox Tips
These are hints for particular apps, to better use FBX data from
SynthEyes. Note that often there may be a direct export to the application that
will eliminate the need for such tweaks.
Cinema 4D. Note that C4D imports FBX directly from its File/Open menu
item. To get the animated shot imagery to live-update in the display, do the
following:
Click on the Camera01ScreenMaterial in the Material Manager to open it
for editing.
In the Attribute Manager, click on the material's Editor tab, and turn on the
Animate Preview checkbox.
Click on the Color tab, then on the very wide button containing the shot's
file name, to bring up its Bitmap Shader in the Attribute Manager.
Click on the Animation tab, then on the Calculate button at bottom.

NOTE: Cinema4D works with whole-integer frame rates only. This can
be a problem with movie files at 29.97 or 23.976 fps, which will appear
as 29 or 23 fps. You will see sync errors even if you set the values to
30 or 24 fps. To work around this C4D limitation, use image
sequences, changing the frame rate to the integer value before
exporting.

Importing Filmbox Files


SynthEyes can import Filmbox files. Like other 3D applications, not all
Filmbox features are supported, as SynthEyes is a tracking application, not a 3D
animation package. And Filmbox is not a substitute for a SynthEyes sni file.

NOTE: Filmbox files have different versions. SynthEyes can only read
FBX files with the same, or an earlier, FBX version. It cannot read FBX
files written by later versions of the Filmbox library used by all
applications. You can see that version listed in the dropdown for the
FBX exporter. Other applications will generally be able to write FBX
files compatible with earlier libraries as well, so you may need to write
your FBX file in a version compatible with your current SynthEyes
version.

Filmbox can be a useful file format for several purposes:
Importing camera paths for "From Path" solving,
Importing meshes,
Importing rigs for Geometric Hierarchy Tracking.
To read a single mesh, the importer is accessed via "File/Import/Import
Mesh." The file can contain only a single mesh; the mesh can be reloaded from
the file if it changes. When a mesh is being read, it is positioned at the origin in
SynthEyes.
Use "File/Import/Filmbox Scene Import" to bring in much more information
from the scene.

NOTE: When you use the Scene importer on a scene with multiple
meshes, the meshes cannot be reloaded from the file using the 3D
panel's Reload Mesh.

The Scene importer has many more options. You'll definitely need to take
a look at them and adjust them depending on what you're trying to accomplish.
Read cameras. Checkbox. When on, the cameras, their field of view, and their
path information are read in from the file. Shot imagery must be set up
manually, however.
Use existing camera. Checkbox. When on (and Read cameras is on), the
existing default SynthEyes camera is reconfigured to be the first camera
from the file, rather than creating a second camera in the SynthEyes
scene. Typically, the FBX file contains a single camera which is used to
configure the default SynthEyes camera.
Read nulls. Checkbox. Reads the null objects (hierarchy) from the scene,
producing moving objects in SynthEyes. This is required if a rig is being
read, or if the scene requires moving objects. In other cases it may
produce unnecessary objects that can be deleted.
Read meshes. Checkbox. Read the meshes from the scene, creating them in
SynthEyes. When a scene is being read, meshes are positioned at the
locations specified in the scene file.
Keep face normals. Checkbox. Face normals can substantially increase the
number of vertices that SynthEyes must produce for a given mesh; since
SynthEyes generates face normals automatically when needed, it is generally a good strategy to delete them. Turning on this checkbox will keep them, so you can see the difference and in case the true original ones are required for whatever you are doing downstream. This checkbox also affects whether or not per-polygon-vertex normals are kept, which require even more vertices (one per vertex in each polygon, ie 3 or 4 times the number of triangles/quads).
Bake mesh scaling. Checkbox. When checked, the scaling factor specified in
the scene is applied to the mesh's vertex data, so that the final result has
no scaling in SynthEyes (ie for rigs). When unchecked, any scaling is
maintained in SynthEyes (visible on the 3D panel with the scaling tool
selected).
Show backfaces. Checkbox. When set, the back faces of the imported meshes
will be shown in SynthEyes, or not if this is unchecked. Saves the trouble
of configuring it on the 3D panel after import.
Read lights. Checkbox. Check this to read directional and point lights from the
file. (Animated intensity data is not currently read.)
Enable shadowing. Checkbox. Large complex scenes with many objects and
large meshes can result in poor redraw performance due to shadowing.
This checkbox gives a quick way to turn off shadow processing in the
imported SynthEyes scene, so that you can re-enable it selectively if
required.
Read rigs. Checkbox. When checked, SynthEyes will read entire rigs with bones
and Fbx Skin deformers from the file. Read nulls and Read meshes must
be on. Note that SynthEyes always uses "linear" bones mode. Be sure to
check this next dropdown!
Key rigs on frame. Dropdown selector. Controls which frames will receive keys
on the Lock value channels: None, First, Last, Custom, or All. If the
imported rig is completely unposed, use None. If the rig has already been
pre-posed to match a specific frame, from which you will continue to
track, you'll want the rig keyed on the frame, either First, Last (for
backward tracking), or a custom frame (see next control, for when the rig
is posed somewhere in the middle of the shot). Use the All option when
the rig has been pre-animated over the entire shot. You can delete
unnecessary keys from the graph editor, of course.
Custom key frame. Frame number. This is the frame on which keys are
generated when Key rigs on frame is set to Custom and Read rigs is
checked.
Read scene settings. Checkbox. When checked, the frame rate and shot length
information is read from the file. This is needed for camera paths and
animated rigs, but not for reading plain meshes.
Shift to frame 0. Checkbox. When checked, the first active frame in the shot is
brought down to become frame 0 (or 1, depending on preference) in
SynthEyes.
Additional scaling. Numeric factor. Use this to uniformly expand or contract the
size of the scene in SynthEyes, ie to reduce very large scenes, or increase tiny ones (maybe more common), or to effect a units conversion such as converting cm to feet.
Show report. Checkbox. When checked, a report will be shown (via text editor)
at the completion of the import. Note that the report is always produced so
if you want, you can keep this checkbox off and only look at the report if
something unexpected happened. See fbximplog.txt in File/User Data
Folder.
More information. Checkbox. When checked, additional information is put into
the report file.
When you import rigs (especially from SynthEyes!), you may see a
number of objects such as Delete_Object05Pivot etc. These are non-animated
pivot positioning bones that are unneeded in SynthEyes and have been tagged
for deletion and moved out of the hierarchy. You can use them if you want, or
delete them. You can delete all of them by entering the following into Synthia:
delete objects starting with delete.

Fusion
There are several Fusion-compatible exporters.
Fusion 7 Composition exports a full 3D scene setup with multiple
cameras, lens distortion, point clouds, moving objects, all meshes including
imported, mesh textures, planar trackers, etc. The comp can be opened directly
in Fusion. The controls are discussed below.
Fusion 7 Corner Pin exports the planar trackers on the current camera
as 2-D corner pins in Fusion. The currently-selected planar trackers are
exported, or all of them if none are selected. Resize, cornerposition, and merge
nodes are generated for each planar. The controls are a subset of the 3D
export's controls, discussed below, plus a selector for Bilinear or Perspective
mapping in the corner pin(s).

Important: The resize is required so that imagery of any resolution


can be pinned into the main flow, due to how the fusion cornerposition
node works. The resize can be eliminated only if the image being
pinned is the same resolution as the main flow.

There is a less capable legacy 3D comp exporter for Fusion 5, and a generic 2-D path ".dfmo" exporter.
Fusion Export Controls
Timeline setup. Selector: Active part starts at ..., Entire shot starts at..., or Match
Image frame numbers. Controls how frame numbers are mapped from
SynthEyes to fusion, as with After Effects.
Relative object size. Number. Sets the size of the marker for moving objects, as
a multiple of the object's world size.
Relative tracker size. Number. Sets the size of the marker for trackers, as a
multiple of the object's world size.

Relative screen distance. Number. Sets the distance from the camera to the
image plane holding the source imagery in the 3D environment, as a
multiple of the object's world size.
Use PointCloud3Ds. Checkbox. Normally on, producing PointCloud3D nodes.
When off, many Locator3Ds will be produced, which has additional
features such as tracker colors. Turn this off only if there are few
exportable trackers (ie turn off exportable for most).
Planars as planes. Checkbox. When on, 3D planar trackers will be exported as
actual planes, so that a texture image (ie sign replacement) can easily be
added. When off, planar trackers are treated as normal trackers.
Renderable trackers. Checkbox. Controls the Make Renderable checkbox on
the PointCloud3D nodes, ie the tracker axis markers will be visible in
renderers and viewports. Useful initially, but not for final renders.
OpenGL renderer. Checkbox. When set, the OpenGL renderer is used for the
Renderer3D node, if not set, the software renderer is used.
Animate FL, not FOV. Checkbox. When set, the focal length is animated in
fusion. Normally, when the checkbox is clear, the field of view is animated.
Standard caveat applies: focal lengths are half a number, useless unless
you know the exact back plate size, which you don't. So FOV is preferred.
Use fusion meshes. Checkbox. When set, simple meshes will be exported
using Fusion's builtin Shape3D node. When not set, all meshes are
exported as OBJs and read with an FBX node, enhancing consistency
with SynthEyes grid counts etc.
Include prefs/comments. Checkbox. When checked, includes additional scene
setup information (recommended).
Include distortion fixes. Checkbox. When checked, nodes will be created to
undistort the footage before it is fed to the 3D environment, and to re-
distort the rendered footage for compositing with the original (see below).
Use in conjunction with the 1- or 2-pass lens workflows.
Map file type. Selector. Selects the image file type written to control undistortion
and redistortion nodes when Include distortion fixes is set.
Remove path prefix. Text. When this text appears at the beginning of a path, for
example an image name, it is removed. Use the add and remove path
prefixes to retarget the exported file to a different machine, for example
remove "/Volumes/SHOTS" and add "S:" to go from Mac to PC, or use
them to change from local to network drives.
Add path prefix. Text. Whenever the path prefix above is removed, this prefix
will be added.
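
The two prefix fields behave as a simple substitution at the start of each path, not a search-and-replace anywhere within it. A minimal Python sketch of the intended effect (illustration only, not the exporter's actual code), using the /Volumes/SHOTS and S: example above:

    def retarget(path, remove_prefix="/Volumes/SHOTS", add_prefix="S:"):
        # Swap the leading prefix only; other paths are left untouched.
        if path.startswith(remove_prefix):
            return add_prefix + path[len(remove_prefix):]
        return path

    print(retarget("/Volumes/SHOTS/plates/shot42.0100.png"))  # S:/plates/shot42.0100.png
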
Working with Lens Distortion
The export will build image undistortion and redistortion nodes (using
distortion image maps) into the flow when lens distortion is present and Include
distortion fixes is checked. If you use a one-pass workflow, where undistorted
imagery is delivered, you can delete the re-distortion node.
When you use a 2-pass lens distortion workflow, where the rendered 3D
images are re-distorted, after you have the desired 3-D setup you should
disconnect the input of the Camera3D node (or click "Unseen by Cameras" on
the Camera3D's Image tab), so that the resampled (undistorted) image is not
used as a background during the 3-D render. Instead, you should connect a 2D
Merge node to the output of the CameraRedistort node as the foreground, and
connect the original footage loader as the background for the Merge. This
ensures that the original footage is never resampled, for maximum image quality.
It also ensures that any little edge artifacts around the undistorted then
redistorted images are not used.
Known Fusion Issues
There are currently some issues with Fusion's handling of point clouds:
1. When a PointCloud3D node is selected, the tracker points may be
shown in the lower-left hand corner of camera views, instead of at
their correct location, on some machines
2. PointCloud3D nodes may not always redraw promptly in the
Perspective views, or at all, when you scrub repeatedly in the
timeline, on some machines. Clicking the desired time again should
force a redraw.
3. The software renderer does not render the PointCloud3D tracker
axes; use the OpenGL renderer instead.
Note that while you can tell the exporter to use Locators instead, you
should do that only if a limited number of trackers are to be exported, since a
Locator3D node will be created for each.

Houdini Instructions
You can use File/Run Script to run the exported file in Houdini. Here's the
older procedure also:
1. File/New unless you are adding to your existing scene.
2. Open the script Textport
3. Type source "c:/shots/scenes/flyover.cmd" or equivalent.
4. Change back from COPs to OBJs.
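
If you prefer Houdini's Python shell to the hscript Textport, the same exported .cmd file can be sourced from Python. A minimal sketch, reusing the example path from step 3 (adjust to your own export):

    import hou
    # Source the SynthEyes-exported hscript file, equivalent to the Textport command above.
    hou.hscript('source "c:/shots/scenes/flyover.cmd"')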

Lightwave
The Lightwave exporter produces a lightwave scene file (.lws) with several
options, one of them crucial to maintaining proper synchronization.
The lightwave exporter writes a lightwave object (lwo) file for meshes in
the scene, including the texture, and references them from the lws file. There is a
separate exporter if you would like to export only a Lightwave LWO2 file.
As mentioned earlier, Lightwave requires a units setting when exporting
from SynthEyes. The SynthEyes numbers are unitless: by changing the units
setting in the lightwave exporter as you export, you can make that 24 in
SynthEyes mean 24 inches, 24 feet, 24 meters, etc. This is different than in
Lightwave, where changing the units from 24 inches would yield 2 feet, 0.61
meters, etc. This is the main setting that you may want to change from scene to
scene.
Lightwave has an obscure preferences-like setting on its Compositing
panel (on the Windows menu) named Synchronize Image to Frame. The
available options are zero or one. Selecting one shifts the imagery one frame
later in time, and this is the Lightwave default. However, for SynthEyes, a setting
of zero will generally be more useful (unless the SynthEyes preference First
Frame is 1 is turned on). The Lightwave exporter from SynthEyes allows you to
select either 0 or 1. We recommend selecting zero, and adjusting Lightwave to
match. You will only have to do this once; Lightwave remembers it subsequently.
In all cases, you must have a matching value on the exporter UI and in
Lightwave, or you will cause a subtle velocity-dependent error in your camera
matches in Lightwave that will drive you nuts until you fix the setting.
The exporter also has a checkbox for using DirectShow. This checkbox
applies only for AVIs, and should be on for most AVIs that contain advanced
codecs such as DV or HD. If an AVI uses an older codec and is not opened
automatically within Lightwave, export again with this checkbox turned off.

Modo
The modo exporter handles normal shots, tripod shots, object shots,
zooms etc. It transfers any meshes you've made, including the UV coordinates if
you've frozen a UV map onto a tracker mesh.
The UI includes the units (you can override the SynthEyes preferences
setting); the scaling of the tracker widgets in Modo (this is a percentage value; adjust to suit); plus there is an overall scaling value you can tweak if you want to (though it is better to set up the coordinates correctly instead).
Limitations
1. Only Image Sequences can be transferred to and displayed by Modo -- modo
does not support AVI or Quicktime backdrops.
2. Image sequences in modo MUST have a fixed number of digits: the first and last frames must have the same number of digits (may require leading zeroes). YES: img005..img150. NO: img2..img913. This may be a problem for Quicktime-generated sequences; if needed, see the renaming sketch after this list.
3. Modo occasionally displays the wrong image on the first frame of the
sequence after you scrub around in Modo. Do not panic.
4. The export is set up to use Modo's default ZXY angle ordering.
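If a sequence has an inconsistent digit count, a small script can zero-pad the file names before bringing them into modo. This is only a sketch; the name pattern and the 4-digit padding are assumptions to adapt to your own files:

    # Rename img2.tga ... img913.tga to img0002.tga ... img0913.tga.
    import glob, os, re

    for path in glob.glob("img*.tga"):
        m = re.match(r"img(\d+)\.tga$", os.path.basename(path))
        if m:
            padded = "img%04d.tga" % int(m.group(1))
            os.rename(path, os.path.join(os.path.dirname(path) or ".", padded))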
Directions
1. Track, solve, etc, then export using the Modo Perl Script item (will produce a
file with a ".pl" extension). Be sure to select the correct modo version for the
export (40x for 401 or 402, 50x for 501..., etc).

2. Start modo, on the System menu select Run Script and give it the file you
exported from SynthEyes.
3. To see the match, you may need to re-set the modo viewport to show the
exported camera, typically Camera01

Nuke
The nuke exporter produces a nuke file you can open directly. Be sure to
select the exporter appropriate to your version of Nuke; the files are notably
different between Nuke versions. The 5.0 exporters are substantially more
feature-rich than the Nuke 4 exporter, handling a wide variety of scene types.
The pop-up parameter panel lets you control a number of features. The
Nuke exporter will change SynthEyes meshes to Nuke built-ins where possible,
such as for boxes and spheres. It can export non-primitive meshes as OBJ files
and link them in automatically. If the other meshes are not exported, they are
changed to bounding boxes in Nuke. Note that SynthEyes meshes can be scaled
asymmetrically; you can either burn the scaling into the OBJ file (especially
useful if you wish to use the OBJ elsewhere), or you can have the scaling
duplicated by the Nuke scene.
You can indicate if you have a slate frame at the start of the shot, or select
renderable or non-rendering tracker marks. The renderable marks are better for
tracking, the non-rendering marks better for adding objects within Nuke's 3-D
view. The size of the renderable tracker marks (spheres) can be controlled by a
knob on the enclosing group. You can ask for a sticky note showing the
SynthEyes scene file, or a popup message with the frame and tracker count.
Note that Nuke 5.1 and earlier supports only INTEGER frame rates
throughout. SynthEyes will force the value appropriately, but you may need to
pay attention throughout your pipeline if you are using Nuke on 23.976 fps shots,
which is 24 fps from an HD/HDV camera.
Distortion Grid
There is a Nuke Distortion grid exporter that will output a nuke distortion
node that matches the distortion configured within SynthEyes. You can create a
distortion for adding or removing distortion. Be sure to select the "IP Lens Profile"
on the exporter if you are using a SynthEyes lens distortion profile, not just
quadratic or cubic distortion values.
Planar Tracker Corner Pin Export
The Nuke Corner Pin export creates an animated nuke corner pin node for
each selected exportable planar tracker. If no trackers are selected, then all
suitable planar trackers are exported.
The layers are merged together according to the layer stackup on the
Planar Options panel, ie trackers on top occlude those further down.

PhotoScan
SynthEyes exports an XML file that PhotoScan can read; when
PhotoScan can't solve a complex set of images by itself, this allows you to use the
full set of manual and automated tracking tools in SynthEyes instead.
The exported XML file contains the image names, camera position,
orientation, focal length, and back plate size data. You should export the XML file
into the same folder as the images.
All cameras are exported except those that are disabled on the solver
panel. Any frames marked on the skip-frames track are not output; they can be
output but disabled if desired based on a setting internal to the script. Aside from
that, it's very simple; there are no user settings needed for it.

Poser
Poser struggles a little to handle a match-moved camera, so the
process is a bit involved. Hopefully Curious Labs will improve the situation in
further releases.
The shot must have square pixels to be used properly by Poser; it doesn't
understand pixel aspect ratios. So if you have a 720x480 DV source, say, you
need to resample it in SynthEyes, After Effects or something to 640x480. Also,
the shot has to have a frame rate of exactly 30 fps. This is a drag since normal
video is 29.97 fps, and Poser thinks it is 29.00 fps, and trouble ensues. One way
to get the frame rate conversion without actually mucking up any of the frames is
to store the shot out as a frame sequence, then read it back in to your favorite
tool as a 30 fps sequence. Then you can save the 640x480 or other square-pixel
size.
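If you'd rather do the resampling with a small script than in a compositing package, something along these lines works. It is only a sketch, assuming Pillow is installed and using made-up file names for a TGA frame sequence:

    # Stretch 720x480 DV frames to 640x480 square-pixel frames in a "square" folder.
    from PIL import Image
    import glob, os

    os.makedirs("square", exist_ok=True)
    for path in sorted(glob.glob("dvshot.*.tga")):
        frame = Image.open(path).resize((640, 480))
        frame.save(os.path.join("square", os.path.basename(path)))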
Note that you can start with a nice 720x480 29.97 DV shot, track it in
SynthEyes, convert it as above for Poser, do your poser animation, render a
sequence out of Poser, then composite it back into the original 720x480.
One other thing you need to establish at this time is exactly how many frames there are in your shot. If the shot range is 0 to 100, there are 101 frames; from 10 to 223, there are 214.
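In other words, the range is inclusive at both ends:

    # Frame ranges are inclusive: both the first and last frames count.
    def frame_count(first, last):
        return last - first + 1

    frame_count(0, 100)    # 101
    frame_count(10, 223)   # 214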
1. After completing tracking in SynthEyes, export using the Poser Python
exporter.
2. Start Poser.
3. Set the number of frames of animation, at bottom center of the Poser
interface, to the correct number of frames. It is essential that you do this
now, before reading the python script.
4. File/Run Python Script on the python script output from SynthEyes.
5. The Poser Dolly camera will be selected and have the SynthEyes camera
animation on it. There are little objects for each tracker, and also
SynthEyes boxes, cones, etc are brought over into Poser.

Open Question: How to render out of Poser with the animated movie
background. The best approach appears to be to render against black with an
alpha channel, then composite over the original shot externally.

Shake
SynthEyes offers three specific exporters for Shake, plus one generic one:
1. MatchMove Node.
2. Tracker Node
3. Tracking File format
4. 3-D Export via the AfterFX .ma or Maya ASCII exports.
The first two formats (Sizzle export scripts) produce shake scripts (.shk files); the
third format is a text file. The fourth option produces Maya scene files that Shake
reads and builds into a scene using its 3-D camera.
We'll start with the simplest, the tracking file format. Select one tracker
and export with the Shake Tracking File Format, and you will have a track that
can be loaded into a Shake tracker using the load option. You can use this to
bring a track from SynthEyes into existing Shake tracking setups.
Building on this basis, #2, Tracker Node, exports one or more selected
trackers from SynthEyes to create a single Tracker Node within Shake. There are
some fine points to this. First, you will be asked whether you want to export the
solved 3-D positions, or the tracked 2-D positions. These values are similar, but
not the same. If you have a 3-D solution in SynthEyes, you can select the solved
3-D positions, and the export will be the ideal tracked (predicted) coordinates,
with less jitter than the plain 2-D coordinates.
Also, since you might be exporting from Windows to a Mac or Linux
machine, the image source file(s) may be named differently: perhaps
X:\shots1\shot1_#.tga on Windows, and /Users/tom/shots/shot1_#.tga on the
Mac. The Shake export scripts dialog box has two fields, PC Drive and Mac
Drive, that you can set to automatically translate the PC file name into the Mac
file name, so that the Shake script will work immediately. In this example, you
would set PC Drive to X:\\ and Mac Drive to /Users/tom/.
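Conceptually the translation is just a prefix swap plus backslash-to-slash conversion. Here is a small sketch of that idea; the exact result depends on how your folders line up, so treat the paths as placeholders:

    # Sketch of the PC Drive / Mac Drive file name translation.
    def pc_to_mac(path, pc_drive="X:\\", mac_drive="/Users/tom/"):
        if path.startswith(pc_drive):
            path = path[len(pc_drive):]
        return mac_drive + path.replace("\\", "/")

    pc_to_mac("X:\\shots1\\shot1_#.tga")
    # -> "/Users/tom/shots1/shot1_#.tga"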
Finally, the MatchMove node exporter looks not for trackers to export, but
for SynthEyes planes! Each plane (created from the 3-D panel) is exported to
Shake by creating four artificial trackers (in Shake) at the corners of the plane.
The matchmove export lets you insert a layer at any arbitrary position within the
3-D environment calculated by SynthEyes. For example, you can insert a matte
painting into a scene at a location where there is nothing to track. You can use a
collection of planes, positioned in SynthEyes, to obtain much of the effect of a 3-
D camera. The matchmove node export also provides Windows to Mac/Linux file
name translation.

trueSpace
Warning: trueSpace has sometimes had problems executing the exported
script correctly. Hopefully Caligari will fix this soon.

1. In SynthEyes, export to trueSpace Python.


2. Open trueSpace.
3. Right-click the Play button in the trueSpace animation controls.
4. Set the correct BaseRate/PlayRate in the animation parameters to match
your source shot.
5. Open the Script Editor.
6. From inside the Script Editor, Open/Assign the python script you created
within SynthEyes.
7. Click Play (Time On) in the Script Manager.
8. When the Play button turns off, close the ScriptManager.
9. Open the Object Info panel.
10. Verify that the SynthEyes camera is selected (usually Camera01).
11. Change the Perspective view to be View from Object.
12. Select the Camera01Screen.
13. Open the Material Editor (paint palette).
14. Right click on Color shaders button.
15. Click on (Caligari) texture map, sending it to the Material Editor color
shader.
16. Open the subchannels of the Material Editor (Color, Bump, Reflectance).
17. On the Color channel of the Material Editor, right click on the "Get Texture
Map" button and select your source shot.
18. Check the Anim box.
19. Click the Paint Object button on the Material Editor.
20. Click on File/Display Options and change the texture resolution to
512x512.
21. You may want to set up a render background to overlay animated objects
on the background, or you can use an external compositing program.
Make the Camera01Screen object invisible before rendering.
22. In trueSpace, you need to pay special attention to get the video playback
synchronized with rest of the animation, and to get the render aspect ratio
to match the original. For example, you must add the texture map while
you are at frame zero, and you should set the pixel aspect ratio to match
the original (SynthEyes's shot panel will tell you what it is).

Vue 5 Infinite
The export to Vue Infinite requires a fair number of manual steps pending
further Vue enhancements. But with a little practice, they should only take a
minute or two.

1. Export from SynthEyes using the Vue 5 Infinite setting. The options
can be left at their default settings unless desired. You can save the
python script produced into any convenient location.
2. Start Vue Infinite or do a File/New in it.
3. Select the Main Camera
4. On its properties, turn OFF "Always keep level"
5. Go to the animation menu, turn ON the auto-keyframe option.
6. Select the Python/Run python script menu item, select the script
exported from SynthEyes, and run it.
7. In the main camera view, select the "Camera01 Screen" object (or
the equivalent if the SynthEyes camera was renamed)
8. In the material preview, right-click, select Edit Material.
9. The material editor appears, select Advanced Material Editor if not
already.
10. Change the material name to flyover or whatever the image shot
name is.
11. Select the Colors tab.
12. Select "Mapped picture"
13. Click the left-arrow "Load" icon under the black bitmap preview
area
14. In the "Please select a picture to load" dialog, click the Browse File
icon at the bottom --- a left arrow superimposed on a folder
15. Select your image file in the Open Files dialog. If it is an image
sequence, select the first image, then shift-select the last.
16. On the material editor, under the bitmap preview area, click the
clap-board animation icon to bring up the Animated Texture
Options dialog
17. Set the frame rate to the correct value.
18. Turn on "Mirror Y"
19. Hit OK on the Animated Texture dialog
20. On the drop-down at top right of the Advanced Material Editor,
select a Mapping of Object- Parametric
21. Turn off "Cast shadows" and "Receive shadows"
22. Back down below, click the Highlights tab
23. Turn Highlight global intensity down to zero.
24. Click on the Effects tab

25. Turn Diffuse down to zero


26. Click the Ambient data-entry field and enter 400
27. Hit OK to close the Advanced Material Editor
28. Select the Animation/Display Timeline menu item (or hit F11)
29. If this is the first time you have imported from SynthEyes to Vue
Infinite, you must perform the following steps:
a. Select File/Options menu item.
b. Click the Display Options tab
c. Turn off "Clip objects under first horizontal plane in main
view only", otherwise you will not be able to see the
background image.
d. Turn off "Clip objects under first horizontal plane (ground /
water)".
e. Turn off "Stop camera going below clipping plane (ground /
water)" if needed by your camera motion.
f. Hit OK
30. Delete the "Ground" object
31. If you are importing lights from SynthEyes, you can delete the Sun
Light as well, otherwise, spin the Sun Light around to point at the
camera screen, so that the image can be seen in the preview
window.
32. You may have to move the time bar before the image appears. Vue
Infinite only shows the first image of the sequence, so you can
verify alignment at frame zero.
33. You will later want to disable the rendering of the trackers, or delete
them outright.
34. Depending on what you are doing, you may ultimately wish to
delete or disable the camera screen as well, for example, if you will
composite an actor in front of your Vue Infinite landscape.
35. The import is complete; you can start working in Vue Infinite. You
should probably save a copy of the main camera settings so
that you can have a scratch camera available as you prepare the
scene in Vue Infinite.

Vue 6 Infinite
1. Export from SynthEyes using the Vue 6 Infinite option, producing a
maxscript file.
2. Import the maxscript file in Vue 6 Infinite

3. Adjust the aspect ratio of the backdrop to the correct overall aspect ratio
for your shot. This is important since Vue assumes square pixels, and if
they aren't (for all DV, say), the camera match will be off badly.
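As a worked example of the overall aspect calculation (the 0.9 pixel aspect here is the commonly quoted NTSC DV value; check the SynthEyes shot panel for your own footage):

    # Overall (display) aspect = width * pixel aspect / height
    width, height, pixel_aspect = 720, 480, 0.9
    overall_aspect = width * pixel_aspect / height   # 1.35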

Realistic Compositing for 3-D


Once you've exported from SynthEyes, you can create new elements in
your 3-D animation or compositing application. In this section we'll take a brief
look at some techniques you can use to ensure the elements you insert (or
"integrate") look realisticthese don't have anything to do with SynthEyes per
se, but are just a reminder or hint for things to consider. For more information,
consult a book on compositing.
We'll run through the basic setups, then list various elements that you
should match in the scene beyond the basic "camera match." If you're an
experienced 3-D compositor, you can skip this section.

Basic Overlays
The simplest kind of insertion is the "what you see is what you get"
approach, where the output is what you see in the 3-D views of your 3-D
application. The new CG imagery is simply superimposed on top of the original
shot.

Tip: Make sure that your rendering application is not applying lighting
to the projection screen holding the imagery.

While this looks like the thing to do, because it's generally already set up
and showing you the result, it is there to aid you in developing the shot and
verifying lineup, rather than to use as the final shot. Using an overlay directly is
suited for simple examples and previews.
Generally it is easiest to sell the new elements as part of the shot if you
can adjust them in a compositing application, such as After Effects, Fusion,
Motion, or Nuke, rather than trying to force the material settings on your 3-D
render to exactly match the final shot.

Compositing 3-D Renders


When you use a compositing application, you will render your additional 3-
D elements "against black", ie with no background image. When you render, you
must ensure that the shot images are not shown in the background of the renders. You may
need to disable the background, or set that mesh as "not seen by the camera,"
depending on your application.

Your renderer will produce an alpha channel that shows the newly
rendered pixels and is used to overlay the 3-D imagery over the original images.

For example, you can get this in Fusion by disconnecting the image loader
from the input to the Camera3D node, or by un-checking the Enable Image Plane
checkbox on the Image tab of the Camera3D node.
With the 3-D portions of the scene rendered against black, you then
composite the rendered images over the original images. Again in Fusion, you
can connect the final Renderer3D node to a Merge as the foreground, and the original imagery to the background input of the Merge. (Use the output of the
Custom nodes if undistortion and redistortion nodes are present.)

Sliding
Sliding is usually caused by improper placement of inserted objects: they
are in the air, underground, etc. The real world isn't flat or "square." If you see
sliding at a particular location, put a supervised tracker there, and use its
computed 3-D location to place the inserted object. Use tracker-based mesh
modeling to create or adjust the object as needed. For more information, see the
Understanding Sliding tutorial.

Rotoscoping
Overlays are great if the new objects are uniformly in front, but this is often
not the case. If not, you must perform rotoscoping, typically in your compositing
application, so that you have the isolated foreground object(s) and an alpha
channel, as in the rendering-for-compositing examples above.
Rotoscoping can be especially difficult and time-consuming for detailed
natural objects such as bushes and trees. For this reason, consider overlaying
existing objects with new CG elements as an alternative. This is especially typical
for architectural projects.

Matching Lighting
You should match the amount and direction of the lighting in the scene.
You should match not only the brightness of your objects to existing objects in
the scene (especially to ensure that your objects aren't too bright), but you should
also match the depth of the shadows, so that the shadows aren't darker than
existing shadows in the scene.
SynthEyes has a system to help you determine the direction to lights in
the scene (especially sunlight), based on tracking both the location of the shadow
and the location of the object casting the shadow. While the scene may not
always have appropriate trackers available, this method can immediately
produce accurate results.
You can use the Shadow Map Maker script to help create a physical
object for a shadow; it can be rendered as a separate layer if needed. To do this,
create a mesh from the trackers in the area the shadow will fall, then run the
script to generate a texture (stored ON the mesh) that corresponds to the
shadow. You'll see that texture with texture mapping turned on in the perspective
window. You should then save the map to disk by opening the Texture Panel,
clicking Create Texture (you've already done that, but you're setting up the next
step), click Set and select the filename where it should be stored, then click the
Save button to write the texture map. As a last step, turn Create Texture back off;
this will make the texture map available to be reread from disk later and minimize
the chance that any texture extraction operations overwrite it.

Matching Motion Blur


You should include motion blur in your renders, to match the blur present
in the original scene. Without it, moving objects will typically appear too sharp.

Matching Defocus
You should also match the degree and depth of focus in the original shot:
typically cameras and lenses aren't as sharp as CG images. Depending on your
software, you may be able to use a depth of field setting while rendering, or you
may need to apply a 2-D blur in your compositing software.

Matching Noise
You should apply noise to the CG render to match what the camera has
produced, so that the CG effect doesn't look too clean. The noise should change
with each frame, of course. Compositing packages typically have a variety of
features for doing this.

Eroding Alpha
CG inserts can stick out when the boundary between the CG effect and
the background is too sharp. This is different than Matching Defocus, which
applies to the interior of the effects.
Instead, you want to blur the alpha channel but only inside the existing
area. You don't want any non-zero alpha creeping outside the portion which has
been rendered.
In Fusion, you can configure the Blur to blur only alpha. Then a Custom
tool with i1=1-2*(1-a1), r=r1*i1, g=g1*i1, b=b1*i1, a=i1 will pull the alpha in while
maintaining the premultiplying of Fusion's pipeline. (There's probably a better
way; this is a sledgehammer solution.) Just a pixel or two of alpha blur will do the
trick.
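If you are doing this outside Fusion, the same curve can be applied to a float RGBA image. This is a minimal sketch assuming premultiplied pixels in a numpy array and a scipy install:

    # Blur the alpha, then pull it inward using the same curve as the Custom tool.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def erode_alpha(rgba, sigma=1.5):
        a = gaussian_filter(rgba[..., 3], sigma)        # blur only the alpha
        i = np.clip(1.0 - 2.0 * (1.0 - a), 0.0, 1.0)    # i1 = 1 - 2*(1 - a1)
        out = rgba.copy()
        out[..., :3] *= i[..., None]                    # keep the image premultiplied
        out[..., 3] = i
        return out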

Building Meshes
SynthEyes has the tools to aid you in the task of set reconstruction:
building a digital model of the as-filmed motion-picture or television set. Set
reconstruction involves two main tasks, building meshes to represent the
geometry, and extracting textures to place upon the meshes. The meshes may
be simple cards, with an alpha channel for detail, or complex digital models.
Set models can serve to catch or cast shadows, act as front-projection
targets, surfaces for vehicles or characters to move upon, serve as a basis for
extensions to the set, etc. These uses can be previewed within SynthEyes before
moving to your compositing or animation package to produce finished work.
We will begin by considering how to build mesh geometry. There are three
basic approaches to that: creating simple planar 'cards'; by converting tracker
locations into a mesh; or by setting up geometric primitives (boxes, cylinders, etc)
to match the scene. Using cards is easy for relatively simple camera motions; the
second approach works best with irregular natural geometry; while using
primitives works best for man-made objects and sets.
Set reconstruction activities happen nearly exclusively within the
perspective window(s), via the right-click menu or perspective-window toolbars.
You should almost always complete the tracking of the scene before
beginning set modeling activities, as changes to the tracking will allow the entire
scene to shift around in ways that will force you to manually update your set
model. You should always have a clearly defined and well-thought-out
coordinate system set up first!

Creating 2.75-D 'Cards'


The simplest kind of geometry to create in the scene are 'cards,' small
planes placed carefully in 3-D. Cards can represent walls, flags, sides of desks,
or more complex things such as bushes. The headline shows 2.75-D to pique
your curiosity: if the card represents something real that is flat, the card
represents it exactly and it is 3-D. If the card represents something that isn't flat,
such as a bush, tree trunk, trash can, etc, it represents that object only
approximately (call it 2.5-D), with perspective mismatches depending on how
much depth range the actual object has, and how much the camera view
direction changes.
Cards are normal SynthEyes planes, but there is a nice way to place
them, and they are automatically set up to use the SynthEyes texture extraction
system. Since the texture can have any alpha channel painted onto it, the card
can represent any kind of complex shape you want, including holes, as long as
the object is mostly flat. The card is a good way to quickly get something done. If
that portion of the image will be scrutinized carefully, you may have to do more
accurate modeling than the simple card.

Cards may be parented to moving objects: simply have the moving object
active on the main toolbar when you create the card. The card will inherit and be
aligned with respect to the object's coordinate system, rather than the world
coordinate system.
Creating the Card
You can create cards using the "Add Cards" mouse mode of the
perspective window, found in the Other Modes section of the right-click menu, or
on the mesh toolbar. Note that cards are simply planes, positioned and
configured easily: you can place a plane manually and texture it and the result is
just as much a card.
With the Add Cards mode active, you can lasso-select within the
perspective view. The trackers that fall within the lasso are examined and used to
determine the plane of the card, in 3-D. If there are a few trackers that are much
further away than the rest, that do not fit well on the plane of the others, they will
be ignored.
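The fit itself is internal to SynthEyes, but the underlying idea is an ordinary least-squares plane fit through the selected tracker positions. Here is a minimal numpy sketch of that idea; the function name and interface are illustrative only, and the outlier rejection described above is omitted:

    # Least-squares plane through a set of 3-D points (an Nx3 array).
    import numpy as np

    def fit_plane(points):
        pts = np.asarray(points, float)
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]            # direction of least variance
        return centroid, normal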
The bounding box of the lassoed area determines the size of the plane.
You can move the mouse around as you are lassoing to bump up the size of the
plane. You might notice the plane jumping around a bit if the trackers you have
lassoed don't form a particularly flat plane: keep moving until you get the plane
you want!
Alternatively, you can pre-select the trackers to be used to locate the
plane, using any method of tracker selection you want. While the Add Card mode
is active, use control-drag*** to do a simple lasso of the trackers, without creating
the plane yet. This makes it easier to navigate around in 3-D and verify that you
have selected the trackers you want, and that they are reasonably planar, before
creating the card.

Important: ***Prior to SynthEyes 1308, ALT/Command-Drag was used instead of control-drag. SynthEyes 1308 and later use control-drag to avoid conflict with Maya-style navigation.

If there are already trackers selected as you start to add a card, those
trackers are used to locate it, and instead of a lasso, the location where you first
push the mouse button and where you release it are used as the corners of the
plane; it is a simple rectangular drag (in the plane defined by the trackers). Pre-
selecting the trackers is more convenient when you want to carefully locate
the edges of the plane. You can also do a control-A to select all the trackers, and
find the best overall (ground, typically) plane.
As you create a new card, its texturing parameters are copied from the
previous card, if any, so you can configure the first one as you like, then create
additional cards quickly. You may want to choose the resolution specifically for
each card (smaller cards need less, bigger cards need more) to maintain an
adequate but not excessive amount of oversampling of the actual pixels.

Once you have created a card, it will be selected in the viewports, so that
you can work on the texturing. If you would like to compare the position of the
plane to the trackers used to align it, click Undo once. This will show the created,
but unselected card, plus the selected trackers used to align it. If you did not pre-
select the trackers, instead lassoing them within the tool, only the trackers
actually used to locate the card will be selected, not any outliers. You can
unlock from the camera and orbit about to get a better idea of what you have,
then click Redo to re-select the card.
Moving a Card
If you want to reposition the card along its own axes, be sure to switch the
perspective window to View/Local coordinate handles, so that you will be sliding
the plane along its own plane, not the coordinate axes. But be sure to read the
next paragraph!
If you re-solve the scene, for example after adding trackers and hitting
refine, or after changing the coordinate system setup, the position of the card can
be updated based on the new positions of the trackers originally used to create it.
Run the Linking/Align Via Links dialog on the perspective window's right-click
menu.
Since the trackers may shift around arbitrarily, or if you moved the card
after creating it, the card may no longer be in some exact location you wanted,
and you will need to manually adjust it.
Texturing
For the full details of texture extraction, see the texture extraction chapter.
Here is a quick preview. SynthEyes will pull an average texture from the scene
and write it to a file. You have control over the resolution, format, etc parameters
of that, as set from the Texture Control Panel (opened from the Window menu).
When you create a card using Add Cards, the texturing parameters will be
preconfigured to create a texture and write it to disk. The file name is based on
the card name and the overall saved SynthEyes scene file name: you must have
already saved the file at least once for this to work.
The texture will not be produced until you click the Run or Run All buttons
on the Texture Control Panel, or until you solve, if the Run all after solve
checkbox is turned on.
Once the texture is produced, you can paint on its alpha channel in
SynthEyes or externally. If you do so externally, you should turn off the Create
checkbox so the texture is not later accidentally overwritten.
Tricks
Cards can exactly represent only perfectly flat surfaces, since they are
themselves flat. It is relatively common to use flat cards in 3-D compositing
setups because the amount of perspective shift that should occur, if the item on
the card was more accurately modeled, is not discernable. You can extract and
build up composites with multiple levels of cards with extracted textures, if you
build appropriate alpha channels for them.
If the camera moves a lot, a single card can start to present an inaccurate
view, one that shows its essential flatness. You can create multiple cards,
oriented differently to match different parts of the shot, compute the texture
based on the correspondingly limited portion of the shot, and fade them in and
out over the duration of the shot (in your composition or 3-D animation app).

Creating Meshes from Tracker Positions


Now we'll move on to consider building up true 3-D models of the set. The
tracker locations are the basis of this process. As with all set reconstruction
activities, you should nail down the scene tracking before proceeding with
modeling.
The "Edit Mesh"
At any time, SynthEyes can have an Edit Mesh, which is not the same as
saying it is selected. The Edit Mesh has its vertices and facets exposed for
editing. A mesh can be the Edit Mesh and not be selected, and it can be selected
without being the edit mesh.
If, in the perspective view, you select a cylinder, for example, and click Set
as Edit Mesh on the right-click menu, you'll see the vertices. Right-click the Lasso Vertices mode and lasso-select some vertices, then right-click Mesh Operations/Delete selected faces, and you've knocked a hole in the cylinder.
Right-click the Navigate mode. (See the Lasso controls on the Edit menu for
rectangular lassos.)
You can set a different edit mesh, or use the Clear Edit Mesh option so
that there will be none.
Example: Ground Reconstruction
With the solved flyover_auto.sni shot open and the perspective window
open, right-click Lock to current camera (keyboard: L), click anywhere to
deselect everything, then right-click Set Edit Mesh and Mesh
Operations/Convert to Mesh. All the trackers now are vertices in a new edit
mesh. (If you had selected a group of trackers, only those trackers would have
been converted.) Rewind to the beginning of the shot (shift-A), and right-click
Mesh Operations/Triangulate. Right click unlock from camera. Click one of the
vertices (not trackers) near the center, then control-middle-drag to rotate around
the new mesh.

Note: the triangulation occurs with respect to a particular point of view; a top-down view is preferable to a side-on one, which will probably have an interdigitated structure rather than what you likely want. The automatic viewport-dependent triangulation is a quick and dirty head start; you can use clipping planes or the Assemble Mesh mouse mode of the perspective window to set up a specific triangulation reasonably rapidly.

Lock the view back to the camera. Click on the tracker mesh to select it. Select the 3-D control panel and click Catch Shadows. Select Cylinder as the object-creation type on the 3-D panel, and create a cylinder in the
middle of the mesh object (it will be created on the ground plane). You will see
the shadow on the tracker mesh. Use the cylinder's handles to drag it around and
the shadow will move across the mesh appropriately. For more fun, right-click
Place mode and move the cylinder around on the mesh.
In your 3-D application, you will probably want to subdivide the mesh to a
smoother form, unless you already have many trackers. A smoother mesh will
prevent shadows from showing sharp bends due to the underlying mesh.
In practice, you will want to exercise much finer control over the building of
the mesh: what it is built from, and how. The mesh built from the flyover trackers
winds up with a lot of bumpiness due to the trees and sparsity of sampling.
SynthEyes provides tools for building models more selectively.
If you are following along, keep SynthEyes open at this point, as we'll
continue on from here in the section on Front Projection, right before we start
with Texture Extraction.
Adding Vertices
To produce more accurate geometry, especially with natural ground
surfaces, you can increase the mesh density with the Track menu's Add many
trackers dialog, rapidly creating additional trackers after an initial auto-track and
solve has been performed, but before using Convert To Mesh.
Especially for man-made objects, there may not be a tracker where you
need one to accurately represent the geometry. SynthEyes uses spot tracking
which favors the interior of objects, not the corners, which are less reliable due to
the changing background behind them. So even if you used auto-tracking and
the Add many trackers dialog, you will probably want to add additional
supervised trackers for particular locations.
To produce additional detail trackers, especially at the corners of objects,
the offset tracking capability can be very helpful, which is an advanced form of
supervised tracking. With offset tracking, you can use existing supervised
trackers to add new nearby trackers on corners and fine details, without too
much time. You can clone off a number of offset trackers to handle small details
in a building, for example. But beware! The accuracy of an offset tracker is
entirely determined by the quality of the offsets you build; if you do too quick a
job, the offset tracker will not be accurate in 3-D. Offset tracking works better
when the camera motion is simple, even if it is bumpy; for example, a dolly to the
right.

Whether you use Add Many Trackers, create new supervised trackers, or
clone out new offset trackers, you can use Convert to Mesh to add them to the
existing edit mesh if you are auto-triangulating.
Controlling Auto-Triangulation
The convert-to-mesh and triangulate tools operate only on selected
trackers or vertices, respectively. Usually you will want to select only a subset of
the trackers to triangulate. After doing so, you may find that you want to take out
some facets and re-triangulate them differently to better reflect the actual world
geometry or your planned use.
You can accomplish that by deleting the offending facets (after selecting
them by selecting all their vertices), and then selectively re-triangulating.
The triangulation depends on the direction from which the perspective
view camera observes the trackers; it is an essentially 2-D process that works
on the view as seen from the camera. You should ensure the camera does not
view the group of trackers edge-on, as an unusable triangulation will result.
Instead, for trackers on a ground plane, the camera should look down from
above.
The triangulator works nicely for trackers and vertices that are roughly
planar.
When you are working on a convex object such as a head, or even a
non-convex object such as a body scan, you need to isolate a particular subset
of vertices that is nearly planar, in order to triangulate them automatically. You
can do that using one or more clipping plane(s), which are any well-positioned
SynthEyes planes.
The mesh vertex selection tool does not look behind any plane in the
workspace, so by positioning a plane right in the middle of a head, say, you can select and work on only those vertices on the face. By moving the view and
plane around, you can easily select and triangulate any portion of the model.
Using the Assemble Mesh Mouse Mode
You can manually triangulate a mesh using the Assemble Mesh mode of
the perspective window. Obviously that takes some more time, at a bit over one
click per facet, but it goes quickly. It allows the very specific control necessary for
objects such as detailed building models, or for stitching together separately-
triangulated meshes.

Tip: You may want to use Assemble Mesh Mode when the vertices
form a volume, and you need to work on the front half of a head, say,
where triangulation will produce unuseful results from all vertices
(unless you use a clipping plane).

To use Assemble Mesh mode, go to the perspective window and select it from the Mesh Operations submenu of the right-click menu. Do not use the Convert to Mesh item on any trackers. (Assemble Mesh also works directly on vertices, such as an imported Lidar scan.)
Instead, begin clicking on the three trackers you want to form the first
facet. As you click on the third, the facet will be created. Click on a fourth tracker,
and a new facet will be created from two of the prior three and the new one in a
reasonably intelligent fashion. As you click on each additional tracker, a new
facet will be created.
You can hit Undo if the triangle created isn't the one you want. To get the
triangle you want, click on the vertex you do not want, deselecting it; with only two
vertices then selected, clicking on another vertex will create a facet with it and
the two selected.
To start a new triangle in a different location, hold down the control key as
you click a tracker: the previously selected vertices will be deselected, leaving
only the new vertex selected. Clicking two more trackers will produce the first
new facet.
Mesh Editing
Often an outlying tracker may need to be removed from the mesh, for
example, the top of a phone pole that creates a tent in an otherwise mostly flat
landscape. You can select that vertex, and right-click Remove and Repair.
Removed vertices are not deleted, to give you the opportunity to reconnect them.
Use the Delete Unused Vertices operation to finally remove them.

Tip: There are toolbars with many perspective view operations. Right-
click and look in the toolbars submenu; for example see the Mesh
toolbar for editing operations.

Long triangles cause display problems in all animation packages, as interpolation across them does not work accurately. SynthEyes allows you to
subdivide facets by placing a vertex at center, and converting the facet to three
new ones, or subdivide the edges by putting a vertex at the center of each edge
and converting each facet to four new ones. This latter operation is generally
preferable. (Detail: the result is non-watertight because some new vertices
appear along edges of existing triangles. A watertight version is available through
Sizzle and Synthia, though necessarily it can produce some skinny triangles.)
Also, you can add vertices directly using the Add Vertices tool, or move
them around with the move tool. Both of these rely on the grid to establish the
basic positioning, typically using the Grid menu's Align to Trackers/Vertices
option. You can then add vertices on the grid, move them along it, or move them
perpendicular to it by shift-dragging. You can move multiple vertices by lasso-
selecting them, or shift-clicking them from Move mode. There are a variety of
tools on the perspective view's right-click menu, see the perspective view
reference.

Important!: if you move vertices, or add new ones, they will not update
if you update the mesh after a new Solver operation; see the next
section.

After we get into object tracking, you will see that you can use the mesh
construction process to generate starting points for object modeling efforts as
well.
What Happens If I Refine?
After you have built a mesh from tracker locations, you may need to
update the tracking and solution. The tracker locations will change, and without
further action, the mesh vertices will not. The tracker locations are stored in the
mesh as the mesh is generated.
However, SynthEyes contains a Linking subsystem that can record the
original tracker producing a vertex. This subsystem stores the linkages
automatically when you use the Convert to Mesh and Assemble Mesh operations
of the perspective window.
After a solve, you can update the mesh using the Linking/Update meshes
using links operation on the right-click menu. This does not happen
automatically, as there are several possibilities involved with linking, as you will
see below.
Important: the links cover only the vertices generated from trackers. If
you create additional vertices, either manually or by subdividing, those vertices
can not and will not be updated.
Links apply for a specific shot, ie the shot the tracker is a part of. A given
vertex can be linked to different trackers on different shots! When you use
Update meshes using links, the currently-active shot determines which links are
used to update the vertices.

Working with Primitives to Build Set Models


For rough models of buildings, desks, etc, it can be convenient to build set
models from SynthEyes's built-in primitives: plane, cube, cylinder, cone, pyramid.
The first and most important step to that process is usually to pick a convenient
coordinate system as that will make placing and aligning the meshes you create
much easier.
The perspective view's linking tools can make aligning primitive meshes
accurately easier. Exact alignment is especially important when a texture will be
extracted for the mesh, as even a little 'sliding' (caused by the mesh not being placed accurately) will blur the extracted texture.

Tip: If you have a mesh selected and the 3-D panel open, you can use
the scaling tool's uniform scale value to not only rescale the mesh, but
if you drag while holding down the ALT/command key, you can
reposition it simultaneously along the line of sight from the camera so that its visual size doesn't change at all! (Similar to the Nudge mini-
spinner on the Coordinates panel.)

Tip #2: It's also possible to do this from a 3D viewport and the XYZ
panel, by turning on the selection lock and scaling tool, then scale-
dragging in the viewport starting from the camera as the pivot point.

Three-Point Alignment
In the standard coordinate system setup, you click on 3 trackers to set up
a coordinate system: an origin, and X-axis, and a point on the ground plane. You
can do something very similar to align a mesh to three trackers.
In this case, you set up three links between a tracker and the
corresponding vertex on the mesh. You must set the mesh as the edit mesh so
that its vertices are exposed.
Select the first vertex and the first tracker; both can be selected at once.
Use the corresponding Lasso operation for the vertices and tracker. It can be
convenient to zoom the perspective window, or use two different perspective or
3D viewports to do that. Select the Add link and align mesh menu item on the
Linking submenu, and a new link will be created and the mesh moved to match
them up.
Select the second vertex and second tracker, and do the Add link and
align mesh item again, and both vertices will match, achieved by moving,
rotating, and scaling the mesh.
Repeat for the third vertex and tracker. These do not have to be an exact
match! This time, the mesh will rotate about the 3-D line between the first two
trackers/vertices so that the third vertex and tracker fall on the same plane, just
like for the coordinate system setup. The first two links are "lock points" whereas
the third pair is like "on plane." So the third link is of a different type than the first
two: the tracker is linked to a position relative to the three vertices.
If you change the solve, you should update the position of the primitive
using the Linking/Align via Links dialog, described next.
Align Via Links Dialog
The three-point alignment method produces exact tracker/vertex matches
for a small number of points. Alternatively, you can use the Align Via Links dialog
(launched from the Linking submenu) to align a mesh as best possible to any
number of links.

You should establish all your links first, using the Add Links to selected
menu item. Then launch the dialog.
You can align the mesh to the trackers, or cause the camera and all the
trackers to be moved to match the mesh, leaving it right where it is! This second
option is useful when you have been given an entire existing 3-D model of the set
that you must match the solve to.
On the dialog, you'll notice that you can control whether or not the mesh is
allowed to scale to match the links, and whether or not that scaling is separate on
each axis, or uniform. When you have a complex existing mesh to match, you'll
certainly want uniform scaling. But if you are matching a box to some trackers,
and do not know the correct relative sizes of the axes, you should use the non-
uniform scaling.
You can also cause the tracker locations to be locked to the mesh
locations, which is handy when you have aligned the world to the mesh
position; the created links will cause the solve to reproduce this same alignment
with the mesh later, in additional solves, without having to re-run this dialog.
Summary of Ways to Create Links
As a recap, here are some different operations that create links, as found
mostly on the Linking submenu of the perspective right-click menu:
The Add Card operation creates a link for each tracker used to determine the
plane of the card; each link is a composite link between the tracker and three
corners of the card.
Assemble Mesh and Convert to Mesh create a simple link between each
tracker and its corresponding vertex.
Add Link and Align adds a simple link for the first two trackers/vertices, and
adds a composite link between the third tracker and the three vertices.

Add Links to Selected creates a simple link between the selected tracker and
vertex.
This last operation comes in three flavors. If one tracker and several
vertices are selected, a link is created between the tracker and each vertex. If
only several trackers are selected, no vertices, then a link is created for each
tracker to each vertex that is already at the same location, as seen in the image.
If only vertices are selected, then each one is matched with the nearest tracker
that encloses it, and if the tracker is solved, the vertex location is updated to be
at that exact same location. The second and third methods are intended for re-
establishing links. Note that each apparent vertex often consists of several
vertices with different normals or UV coordinates at the same location.
Using Pinning
You can use the Pinning tool to place meshes into a scene without using
any trackers. This tool is described in Pinning Tool section of the Geometric
Hierarchy Tracking manual.
For tripod-type shots, pinning can be a quick solution. Since the pinning
tool pins a mesh into place based only on a single frame, it is fairly difficult to get
a mesh exactly in place for moving-camera shots, where the viewpoint is different
for other frames.
The Pinning tool can also be a quick way to get a field of view estimate, if
there's a good-sized geometric primitive (especially a box or cylinder) in the
scene.
Since there are no trackers, there are no links, so pinned meshes do not
update if a tracked scene is re-solved.
Other Helpful Operations
When you are working on an existing primitive, it can be helpful to add a
few specific 'detail' trackers to it, once it is already in place. You can select the
trackers, then use the Punch In Trackers operation on the Mesh Operations
submenu to do that. This operation depends on the viewing direction of the
camera: as seen in the image, the facets containing each tracker will be removed
and replaced with new facets that include the new tracker at their apex.
To aid your linking operations, you can also remove links, or show the
trackers with links, using operations on the Linking submenu.
When you show (flash) the trackers with links, if there are selected
vertices, then only trackers that are linked to the selected vertices will be flashed.

Applications, ie What Next?


Changing Camera Path
If you have a well-chosen grid of trackers, you may be able to fly another
camera along a similar camera path to the original, with the original imagery re-
projected onto the mesh, to produce a new view. Usually you will have to model
some parts of the scene fairly carefully, however.
Depth Maps
With a mesh constructed from the tracker positions, you can generate a
depth map or movie to feed to 3-D compositing applications.
Once you have completed tracking and created the mesh, open the
perspective window and begin creating a Preview Movie. Select the Depth
channel to be written and select an output file name and format, either an
OpenEXR or BMP file sequence (BMPs are OK on a Mac!). Unless the output is
OpenEXR, you must turn off the RGB data.
Click Start, and the depth map sequence will be produced. Note that you
may need to manipulate it in your compositing application if that application
interprets the depth data differently.
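For example, if your application expects normalized depth, you could rescale each frame with a small script. This is a sketch only, assuming the OpenEXR Python bindings and that the depth landed in a channel named "Z"; check the actual channel names in your files:

    # Read the Z channel of one depth frame and normalize it to 0..1.
    import array
    import OpenEXR, Imath

    exr = OpenEXR.InputFile("depth.0001.exr")
    dw = exr.header()['dataWindow']
    width = dw.max.x - dw.min.x + 1      # dimensions, in case you need to reshape
    height = dw.max.y - dw.min.y + 1
    pix = Imath.PixelType(Imath.PixelType.FLOAT)
    depth = array.array('f', exr.channel('Z', pix))
    near, far = min(depth), max(depth)
    normalized = [(d - near) / (far - near) for d in depth]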
Front Projection
Go back to the Ground Reconstruction example. With the cylinder casting
an interesting shadow on an irregular surface, right-click Texturing/Rolling
Front Projection. The mesh apparently disappears, but the irregular shadow
remains. This continues even if you scrub through the shot.
In short, the image has been front projected onto the mesh, so that it
appears invisible. But, it continues to serve as a shadow catcher.
In this Rolling Front Projection mode, new U,V coordinates are being
calculated on each frame to match the camera angle, and the current image is
being projected, ensuring invisibility.
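As a rough idea of what that per-frame calculation does, each vertex is projected through the camera and its screen position becomes its U,V. The sketch below assumes a simple pinhole camera with the optical center at image center and no lens distortion; the axis conventions are illustrative, not SynthEyes's internals:

    # Conceptual front projection: a vertex's screen position becomes its U,V.
    import numpy as np

    def front_projection_uv(vertex_world, world_to_cam, focal_px, width, height):
        x, y, z = (world_to_cam @ np.append(vertex_world, 1.0))[:3]
        u = 0.5 + focal_px * x / (z * width)
        v = 0.5 - focal_px * y / (z * height)
        return u, v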
Alternatively, the Frozen Front Projection mode calculates U,V
coordinates only once, when the mode is applied. Furthermore, the image from
that frame continues to be applied for the rest of the frames. This kind of
configuration is often used for 3-D Fix-It applications where a good frame is used
to patch up some other ones, where a truck drives by, for example.
Because the image is projected onto a 3-D surface, some parallax can
safely be developed as the shot evolves, often hiding the essentially 2-D nature
of the fix. If the mesh geometry is accurate enough, this amounts to texture-
mapping it with a live frame.
The U,V coordinates of the mesh can be exported and used in other
animation software, along with the source-image frame as a texture, in the rare
event it does not support camera mapping. Frozen Front Projection is the prelude
to the texture extraction capabilities described next.

Texture Extraction
In the previous section, we described how to use the camera-
mapping-type texture coordinate generation capabilities of SynthEyes to be able
to use the existing shot images as textures for meshes that model the set. Simple
camera mapping is not ideal for set reconstruction, as the images
have limited resolution,
are subject to compression noise,
require mesh texture coordinate values determined by the camera view, and
may be partially obscured by moving objects.
The texture extraction system can use many frames to produce a more
accurate texture at any practical resolution, and can use the mesh's existing
texture coordinate values. With a suitably accurate mesh, the extracted texture
can be good enough that you will have to pay attention that it does not look "too
good"; you will have to add noise to renders to match the original shot.
With these capabilities, you can generate textures not only for objects
visible and modeled in the scene, but also for far background imagery, such as
mountains or clouds. When the shot is a nodal pan, you can create a panoramic
background that is larger than any of the individual frames, by using a large
mesh that the camera sweeps across. Similarly you can create extended textures
when a vehicle travels down a road.

Important: You will need to create garbage mattes to block out any
edges of the image that contain black margins, or any other area that
does not contain valid image (eg time-code). See Garbage Mattes.

Warning: Texture generation requires large amounts of memory, many times more than a regular image of the same resolution, especially as 'undo' is considered. A 64-bit machine with at least 4 GB is recommended for texture extraction.

Note: The Intro version is limited to extracting at most 2K x 2K textures.

The texture extraction has different modes. An averaging technique produces low-noise textures when the mesh modeling and tracking are very
accurate. An alternative approach produces sharper and noisier images when
tracking and modeling are less exact; this mode is for traveling shots and shots
where the object moves a lot relative the camera.
The texture display and extraction capabilities are controlled from the
Texture Control Panel, launched from the Window menu or the button at the
bottom of the 3-D Panel, which contains a small subset of the texture controls for
information. There are also some preferences in the Mesh section.

Note that SynthEyes is a match-moving and set reconstruction tool, so it is not designed for more complex material specifications, repeating textures, lighting controls, and the like, which are used downstream for CGI creation.

Mesh Preparation and Texture Coordinates


Your mesh will need to have texture coordinates to display a texture. The
geometric primitives, such as cube, cylinder, etc, have those coordinates, though
the more complex teapot and Earthling do not. If your mesh is imported, it will
have texture coordinates if they are present in the input file.
If your mesh does not have texture coordinates, you can create them via
the Texture submenu. Go to a frame in the shot that has a nice view of the
object, select Frozen Front Projection, then Remove Front Projection and Crop
Texture Coordinates. Since the mesh will generally occupy only a small portion of
the image, it will use only that small portion of the texture map, wasting much of
the available resolution. The Crop operation will expand the limited texture
coordinates to the full 0..1 range, and thus the entire texture image.
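The idea behind the Crop operation can be written in a few lines; this sketch (not SynthEyes's actual code) remaps whatever UV range is in use so it fills the full 0..1 square:

    # Remap a list of (u, v) pairs so they fill the 0..1 texture range.
    def crop_uvs(uvs):
        us = [u for u, v in uvs]
        vs = [v for u, v in uvs]
        umin, vmin = min(us), min(vs)
        uspan = (max(us) - umin) or 1.0
        vspan = (max(vs) - vmin) or 1.0
        return [((u - umin) / uspan, (v - vmin) / vspan) for u, v in uvs]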
A particular limitation of creating texture coordinates using camera
mapping is that only the half of the object facing the camera can be used, since
the back side will have the exact same coordinates. The resulting mapped object
will appear to have a plane of symmetry with the identical texture on each half.
The geometric primitives have carefully-chosen texture coordinates that
use the whole image (as possible) without artificial limitations.

Important: Make sure all meshes are subdivided until each triangle
covers only a relatively small portion of the screen & 3-D environment.
This is especially true of tracker meshes.

360VR. When doing texture extraction from 360VR shots, be sure that
the meshes are very well subdivided, especially when using
blocking meshes. Inadequate subdivision can cause seams to open in
the extracted texture, because straight lines in the 3D environment are
not straight in 360VR images.

Many of the calculations during texture extraction are performed only once
per triangle, not once per pixel, which saves a tremendous amount of time.
However, if triangles are too big, perspective changes from one end to the other
will cause unnecessary blurring. (The interior pixels of a triangle use bilinear
interpolation, instead of the full perspective transform and lens distortion
compensation used for the vertices).
If you are extracting from a tracker mesh, you can add additional trackers in
problem areas. Use Track/Add Many to increase tracker density before meshing.
Or use Punch In Tracker mode in the perspective window to add individual
trackers to the mesh later.

Alternatively, you can write your mesh to an OBJ file, increase its density
in a third party application, perhaps with smoothing, and reimport it for texture
extraction.
If you are using geometric primitives created in SynthEyes, such as plane,
sphere, etc, you can control the segment counts using the # button on the 3-D
panel.

Texture Display
SynthEyes can display a texture on meshes, whether SynthEyes has
generated it or it comes from a different CG or 2-D painting package.
With the mesh selected, the texture control panel open, and the Create
checkbox off, click Set and open the texture image.
If you have problems with the orientation or left/right or top/bottom
handedness of the image, use the top drop-down (orientation) to obtain the
correct orientation of the map on the mesh.
The mesh can be lit in the perspective view, if it is a textured object such
as a soda can or cereal box. If the texture has been extracted from the scene
itself, for example as a blocking object, then it should not be lit. (Meshes that are
camera-mapped and textured with a frozen or rolling scene image are never lit.)
While the mesh is selected, it will show the red selection color in the
viewport blended with the texture, making the texture harder to see. You can
suppress this by turning on the Hide mesh selection checkbox on the Texture
Control Panel. You'll still see the drag handles, but the texture will be shown as-
is. This can be a little tricky: the effect turns off if you close the Texture Control
Panel.
For help when you are painting alpha channels on textures, or having
them computed, you can have the mesh display only the alpha channel by
turning on the Show only texture alpha button.
It can be handy to repeatedly flip the two checkboxes on and off to better
understand the texture and alpha channel.

Texture Creation and Format Controls


Now we're moving on to image creation controls.
When you are going to extract a texture, you must turn on the Create
Texture checkbox first, before clicking the Set button to set up a file name. That
is necessary so that the Save File dialog can be used, instead of Open File
(since generally the file will not exist yet).
On the Save File dialog, select the file type that will be used to store the
image. Some file types, especially JPEG, are not able to store alpha channels.
After you set the file name, click Options to set any compression options
for that file type.

You can then set the horizontal and vertical resolution of the image to be
computed. Note that these controls have drop-downs for quick and accurate
selection of standard values, but you can also enter the values directly into the
edit fields.

Tip: You can enter the image size you want on the Texture Control
Panel; you do not have to stick to the preconfigured drop-down values,
and you can use values larger than the 4K maximum in the drop-down,
if the situation warrants it (a traveling shot, for example).

Warning: as you go to generate that 32K x 16K texture, keep in mind
that the amount of viable information in it will depend on what images
are fed into the process; for example, you can not produce a high
resolution texture from a single 720x480 input image. You will produce
a high-resolution image with many blurry pixels!

With the image depth drop-down, select the desired resolution to be saved
in the output file: 8-bit, 16-bit, half-float, or floating point. Which channel depths
are supported depends on the output file format; for example, JPEG is only 8-bit.
There is no notification when a requested depth is unsupported; you will see the
result in downstream packages, or you can check file sizes.
Though it is here with the format controls, the filter type drop-down
controls how the internal texture processing is done, as the resolution changes
from the input image resolution to the texture image resolution. The default 2-
Lanczos should be used almost always; bilinear may be a little faster and less
crisp, while 3-Lanczos will take longer and might possibly produce slightly sharper
images. 2-Mitchell (Mitchell-Netravali) is between bilinear and 2-Lanczos in
sharpness.

Animating the Enable


By default, SynthEyes will use all frames to determine the texture. If some
frames are problematic, you can animate the (stop-sign) enable track so that
those frames are not used. This is the right approach for frames with explosions
or sudden motion blur, or where an object moves rapidly across the mesh, for
example. Using the enable is necessarily a blunt tool; for more precision,
blocking meshes or garbage splines may be used.
The enable can also be used to disable a mesh once it has gone
completely off-screen, to save calculation time.
While the texture control panel is open, and with a single mesh selected,
you can see the enable track in the time bar. You can see all the mesh texture
enable tracks in the graph editors as well.

Blocking
If there is an actor or other object moving continuously in front of the
mesh, shutting down extraction for the entire time may not be an option. Instead,
SynthEyes offers two methods to prevent portions of the image from being used
for extraction: blocking meshes and garbage mattes.
These may not be necessary, if the disturbance is small and short-lived
and there are enough other frames: the image may not be materially affected.
Blocking Meshes
The idea behind blocking meshes is simple, direct, and physically based.
It is applicable for static portions of the set, rather than actors.
If you have modeled a wall and a desk in front of it, then the mesh for the
desk will block the portions of the wall behind it, preventing those sections from
participating in the wall's texture extraction. If the camera moves sufficiently that
the portion behind the desk is exposed elsewhere in the shot, then the wall's
entire texture can be computed. If not, the wall texture will have a blank spot,
though one that can not be seen, as it is behind the desk!
A complex scene may have many meshes in it, some used for texture
extraction, some used for blocking, some not. To allow performance to be
optimized, the texture control panel contains a control over whether or not a
particular mesh should be tested to see if it blocks any texture extractions.
It may be set to Blocking or Non-blocking and defaults to non-blocking;
usually only a few must be set to blocking, if any. The blocking control can and
should be adjusted as needed, whether or not the mesh is having its own
texture extracted. There is a notable calculation overhead for blocking
meshes, as something similar to a render must be performed on each frame for
each blocking mesh.
Garbage Mattes
If an actor or a hard-to-model object is moving in front of the mesh being
extracted, you can instead set up an animated garbage matte to exclude that
area.

Important: you will need to set up (un-moving) garbage mattes for
black edges or corners of the image. If you do not, you will have large
black artifacts in a trail through your texture!

The garbage mattes are set up with SynthEyes's roto system, which is
normally used to control the automatic tracker. By the time you get to texture
extraction, the time for that use is well past, and you can add additional
garbage mattes or remove existing ones without any ill effect.
To make the texture extraction for a particular mesh sensitive to the
garbage splines, turn on the Blocked by Garbage Splines checkbox. (The control
defaults to off to prevent many needless examinations of the complex animated
spline shapes.) You can animate the spline's enable control on the roto panel if
needed.

Tip: don't forget to turn on Blocked by Garbage Splines!

As with their normal use, the garbage mattes set up for texture extraction do
not have to be particularly exact; they can be quick and dirty. For tracking mattes,
don't forget to use the Import Tracker to CP (control point) capability.

Texture Extraction Mode


The texture extractor has several different modes, for different situations.
Each mode describes a way to combine all the candidate pixels for a given
texture pixel into the final pixel value for the texture. The candidate pixels come
from all the frames of the shot; they are the image pixels that correspond to the
displayed location of the given texture pixel (i.e. computer graphics in reverse).
The candidates are subject to the roto-matte, blocking, enable, and tilt
controls described elsewhere in this chapter.
All modes utilize a weight determined by the tilt angle and distance of the
triangle. This process is described in a following section.
Average Pixel
Here, the final pixel is the weighted average of all the candidate pixels. If
an alpha value is present, it is included in the weight, and a final alpha is
produced as well.
Use the Average Pixel method with accurate meshes and good tracking,
to produce a final texture with low noise, lower than the noise in the source
images. You will want to add noise back in to final renders that use this texture
so that it does not look too smooth.
If the tracking and meshing don't match very well, the produced texture will
be very blurry. (See Best Pixel below.)
Average w/alpha
This mode uses the same weighted averaging as Average Pixel, except
that it also produces an alpha channel based on the repeatability (or not) of each
individual pixel. See Alpha Generation below.
Best Pixel
This method produces the pixel with the highest, i.e. best, weight value. In
effect, it uses the candidate pixel from whichever source image presents the best
view. The final texture is an amalgam of patches, each patch from a single
source image.
This method produces sharp, non-blurry textures even when the modeling
or tracking aren't great, especially when the camera to mesh geometry is
changing significantly over the duration of the shot.

Here are some potential downsides:
 - the camera noise from a single frame is "frozen" into each patch
 - there may be seams at the edges of some triangles
 - lighting changes over the duration of the shot will appear in the
   patch structure (rather than being averaged out)
In a long traveling shot, this last limitation can be a useful feature.
You may find it helpful to touch up these textures in Photoshop or
equivalent to mitigate any seams or add a slight blur.

Weighting Control
The pixel averaging or best-pixel determination utilizes a weighting factor
based on the tilt angle of the triangle relative to the camera and the distance from
the camera to the triangle (center). These controls should not have to be
adjusted (except maybe to play) and are preferences instead of on the texture
control panel itself. We describe them here for your understanding.
The tilt angle weight is 1.0 when the camera is looking head-on to the
triangle. It drops off rapidly as the tilt angle increases, based on the texture falloff
parameter in the Meshes section of the preferences. A larger value emphasizes
head-on triangles, a zero value makes all triangles equal. Literally, the value is
exp(-falloff*sin^2(angle)); you can plot it in Excel on a rainy day.
The distance component is simpler: the camera world size value divided
by the Z distance to the triangle center. So a triangle twice as far has half the
weight, and vice versa.

Tech note: Since doubling all the weights or halving all the weights
makes no difference, the world size value just keeps the numbers well
normalized, without affecting the results.

The tilt and distance components are multiplied to form the final weight.
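As a rough sketch of how the weighting and the two combination modes fit
together (illustrative only, not SynthEyes's actual implementation; falloff and
world_size stand in for the corresponding preference and camera values):

    import math

    def weight(tilt_angle, z_distance, falloff, world_size):
        # Tilt term: 1.0 when the triangle faces the camera head-on,
        # dropping off rapidly as the tilt angle (in radians) increases.
        tilt_w = math.exp(-falloff * math.sin(tilt_angle) ** 2)
        # Distance term: camera world size divided by the Z distance.
        dist_w = world_size / z_distance
        return tilt_w * dist_w

    def average_pixel(candidates):
        # candidates: (value, weight) pairs gathered from every usable frame;
        # for RGB images this would be applied per channel.
        total = sum(w for _, w in candidates)
        return sum(v * w for v, w in candidates) / total

    def best_pixel(candidates):
        # The candidate with the highest weight wins; one source frame
        # supplies the pixel outright.
        return max(candidates, key=lambda vw: vw[1])[0]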

Tilt Control
Consider a vertical cylinder with a camera orbiting it in a horizontal plane.
At any point in time, the camera gets a good view of the part of the cylinder
facing it, but the portions seen edge on can not be seen well, as the texture is
fore-shortened with the pixels crunched together. As the camera orbits, the
portions that can be seen well, and not seen well, change continuously.
Although the tilt weighting reduces the effect of edge-on triangles, we
provide an additional hard-and-fast control.
To ensure that texture extraction proceeds on the portions that can be
accurately extracted, and ignores the portions that can not, there is a Tilt control.
With the tilt control at zero, all grazing angles are considered. At one,
extraction proceeds only on portions that precisely face the camera,
with intermediate cutoff angles at values in between. (Multiply by 90 to get the
relevant angle.)
The tilt angle check is performed on a facet-by-facet basis, so meshes
where this calculation is necessary should have adequate segmentation. If there
are only 12 facets around, for example, then there will only be 3 facets between
straight-on and edge-on, roughly 30 degrees at a time will be turned off or on. A
segment count of 48 would be a better choice.
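For example (our reading of the control, so treat the exact formula as an
assumption rather than a specification): a Tilt value of 0.5 corresponds to a
cutoff of roughly 0.5 x 90 = 45 degrees, so facets tilted more than about 45
degrees away from facing the camera head-on would be excluded. With 48
segments around a cylinder, each facet spans only 7.5 degrees, giving a
reasonably smooth cutoff boundary.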

Run Control
The texture extraction process can run under manual or automatic control.
You can run it on only the selected mesh(es) using the Run button, or on all
meshes using the Run All button. In both cases, of course only meshes set up for
extraction or blocking will be processed. It will take less time in total to process all
meshes simultaneously than to process each mesh separately.
You can also have the mesh extraction process run automatically at the
end of each solve operation (normal or refine). You should use this option
judiciously, as extraction can take quite some time and you will not want to do it
over and over while you are working on unrelated tracking or solving issues.

When Is the Texture Saved?


Extracted textures are saved as part of the process of extracting them.
However, they are also re-saved when you change any of the following
controls:
 - the texture orientation drop-down,
 - the channel-depth drop-down,
 - the file compression setting,
and of course whenever you click the Save button on the texture control panel.
When you are painting an alpha channel or creating one with the alpha
spinners, you will need to save the affected texture manually when you are done
painting on it (unless you re-extract it or change one of the format controls listed
above).

Alpha Generation
When a mesh covers only the interior of an object, the pixels underneath it
repeat reliably in every frame. For example, a side of a building, a poster on a
wall, etc. However, the edge of a castle or a natural scene with rocks or trees will
have an irregular border that is tough to model with mesh geometry.
If the geometry extends past the edge of the object, those pixels will vary
over time, depending on what is behind the object. As the camera moves, those
pixels that are not over a part of the object will sweep across different parts of the
background, potentially producing a broad spread of pixel values.

We can exploit that to produce an alpha channel for the mesh that is
opaque where it covers the object of interest, and transparent for the
background. Such meshes and textures can then readily be used in traditional
compositing, especially if the mesh is always a flat plane.
SynthEyes measures the RMS error (repeatability) of pixels, and offers the
Alpha Control section of the texture control panel. To generate an alpha channel,
turn on the Create Alpha checkbox.
The Low error spinner sets the level at which the alpha channel will be
fully opaque (below this lower limit). The High error spinner sets the level at
which the alpha channel will be fully transparent (above this upper limit). The
Sharpness spinner controls what happens in between those limits, much like a
gamma control.
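A rough sketch of how such an error-to-alpha mapping might behave (an
illustration only; the exact curve, and how Sharpness shapes it, are assumptions
here rather than SynthEyes's documented formula):

    def error_to_alpha(rms_error, low, high, sharpness):
        # Fully opaque below the Low limit, fully transparent above High.
        if rms_error <= low:
            return 1.0
        if rms_error >= high:
            return 0.0
        # In between, ramp from opaque to transparent; Sharpness acts
        # roughly like a gamma control on the ramp (assumed form).
        t = (rms_error - low) / (high - low)
        return 1.0 - t ** sharpness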
You can increase the Low limit until portions of the alpha channel start to
drop out that should not, and decrease the High limit to the point that the
background is fully clear.
The alpha channel will update immediately as you do this, without having
to recalculate the texture.

Important: though the texture will be updated in the viewports
immediately as you adjust the alpha spinner controls, you should click
the Save button when you are done, to re-write the modified textures to
disk.

You should not expect this process to be perfect; it depends strongly on
what the background is behind the object and how much variability there is in the
background itself. For example, a green-screen background will always stick with
the foreground, because it never varies!
To make it easier to see what you have in the alpha channel, you can use
the Show only texture alpha checkbox and of course the Hide mesh selection
checkbox. You can also use your operating system's image-preview tools to look
at the texture images that have been stored to disk.
To clean up the alpha channel, or create one from scratch, you can paint
in it directly, as described in the next section.

Alpha Painting
SynthEyes offers a painting system that allows you to directly paint in the
alpha channel of extracted textures. You can paint fine detail into textures,
especially cards, rather than trying to create extremely detailed geometry. And
you can better capture natural elements.
Painting can be controlled completely only from the Paint toolbar,
accessed from the perspective view's right-click menu. There must be exactly
one mesh selected; it should not be the edit mesh. There are convenience
buttons to show only the alpha channel, hide the selection, and hide the mesh
completely on the toolbar.
There are four mouse modes for painting, all of which use the same three
Size, Sharpness, and Density settings. These settings are adjusted by dragging
vertically, starting within the respective setting's box on the Paint toolbar.

Fine Point!: The settings affect the last stroke drawn (so you can
change it), as well as the next stroke you draw. To change the
parameters before starting a new stroke, without affecting the old
stroke, either click one of the drawing mode buttons again, or right-
click one of the setting buttons. While the setting buttons are attached
to an existing stroke, there is an asterisk (*) after the name of the
button. You can re-attach to the last stroke by double-clicking one of
the settings buttons.

The Size setting controls the size of the brush, in pixels. The sharpness
setting controls the type of fall-off away from the center of the brush. The Density
setting ranges from -1 to +1: at -1, painting makes the pixels immediately
transparent, at +1, painting makes the pixels immediately opaque. In between,
pixels are made only somewhat more transparent or opaque. The transparent
and opaque buttons on the toolbar set the density quickly to the respective value.
The Paint Alpha mode is for 'scribbling' on a mesh while holding down the
(left) mouse button, turning the texture extracted at those pixels transparent or
opaque etc as controlled by the settings. You can paint away extra pieces of
geometry, where the texture is the blurry background, adjust and soften edges to
match the desired portion of the texture, etc.
Note: you must paint on the mesh; you really are painting on the
geometry. If you click off the edge of the mesh, thinking the size of the pen is
going to affect the mesh, it will not; nothing will happen at all.
The Paint Alpha loop is a scribble-type mode, but it creates filled regions,
to rapidly fill a slightly noisy interior in an automatically-created alpha channel, or
to knock a hole around some unwanted texture.
The Pen Z Alpha mode produces straight-line segments between
endpoints, with one endpoint created for each click of the mouse. The "Z" in "Pen
Z" refers to the shape of the paths created, not any particular meaning. Use Pen
Z mode to create clean straight lines along edges, to mask the edge of a
building, for example.
The Pen S Alpha mode produces curved spline-based curves between
endpoints, with an endpoint created per mouse click. Again, the "S" refers to the
shape produced, though you can think of it as spline or smooth as well. Use Pen
S mode to create smoother curved edges.

In addition to Undo, the paint toolbar contains buttons to delete the last
stroke (and then the one before that, and before that, ....) and to delete all the
strokes.
After finishing painting, click the Save button on the texture panel to re-
write the altered texture(s) to disk.
Your paint strokes are recorded, so that if you later re-calculate the
texture, the paint strokes will be re-applied to the new version of the texture. If
you have changed the mesh or solve substantially, you may need to re-paint
or touch up the alpha to adjust.

Far Meshes
You may want to create background textures for large spherical or planar
background meshes, i.e. sky maps. This can be inconvenient, as to work properly
the sky map or distant backdrop must be very large and very far away, i.e. several
thousand times farther away than the maximum distance of the camera motion.
To simplify this, SynthEyes allows you to create "Far Meshes" similar to
far trackers. Far meshes automatically translate along with the camera, allowing
a conveniently-sized small mesh to masquerade as a large one.
Set the mesh to Far using the button on the 3-D Control Panel.
Afterwards, you will see it move with the camera; do not be alarmed!
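Conceptually (a sketch of the idea, not SynthEyes internals), a far mesh simply
keeps a fixed offset from the camera position, so camera translation never
produces parallax against it and it behaves like an infinitely distant backdrop:

    def far_mesh_position(camera_position, mesh_offset):
        # The mesh rides along with the camera, keeping a constant offset,
        # so it never shows parallax -- like a backdrop at infinity.
        return [c + o for c, o in zip(camera_position, mesh_offset)]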

Mesh and Texture Export


The SynthEyes exports have historically been intended to export primarily
the camera path and tracker locations, not any meshes present in the scene, as
those meshes were originally expected to be useful only for tracking verification.
The presence of the set reconstruction tools has changed that situation, of
course.
The details of exactly what can be exported from SynthEyes to any given
export target vary on a target by target basis. Filmbox (FBX) exports a robust set
of information.
Though it is of course nice to have everything happen automagically at the
press of a button, that will not necessarily be the case, and for many targets, may
not even be possible, as many export targets do not support features such as 3-
D meshes (for example, After Effects).
Accordingly you may have to confine yourself to elements such as cards,
or might have to manually transfer 3-D elements to your target package.
SynthEyes can produce Alias Wavefront ".OBJ" files which should be
readable by every reasonable 3-D package. After you create a mesh, you can
export it using the Mesh as A/W .obj file exporter. You will have to position and
orient it in the same location within your target application.

Fortunately, in the case of tracker-based meshes, the situation is quite
simple: tracker meshes always have their mesh origin at the world coordinate
system origin, i.e. vertex coordinates are always their location in 3-D. You can
always position a tracker mesh correctly by placing it at the origin, with no extra
rotation.
For a primitive-based mesh, you will need to obtain the correct
coordinates from SynthEyes's 3-D panel. The orientation can be a bit trickier, as
the orientation angles are not standardized from program to program. If you have
boxes and cones etc aligned with the coordinate system axes, it should be
relatively easy to reorient them correctly.
For more complex situations, you can ask SynthEyes to export your
primitive-based meshes like the tracker-based meshes, i.e. converting all the
vertex locations from relative coordinates to world coordinates, removing the
effect of the translation, rotation, and scaling that have been applied to the
primitive. To do this, use the Meshes in Scene as .obj exporter. When you import
these meshes into your target application, you should position them at the origin
with no rotation, just as with the tracker-based meshes.
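As a sketch of what that conversion amounts to (illustrative only, using generic
column-vector math rather than SynthEyes's own code), each vertex is baked
into world space by applying the mesh's scale, rotation, and translation:

    import numpy as np

    def bake_to_world(vertices, scale, rotation, translation):
        # vertices: Nx3 object-space positions; rotation: 3x3 matrix;
        # scale: per-axis factors; translation: 3-vector.
        world = []
        for v in vertices:
            world.append(rotation @ (np.asarray(scale) * v) + translation)
        return np.array(world)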
The meshes-in-scene exporter can export multiple meshes in one go, but
you should not use that capability if you have extracted textures for them!
Instead, select and export them one at a time, using the Selected Meshes option
of the export.
Once you have repositioned the meshes within your animation package,
you should re-apply their respective textures, using the appropriate texture file on
disk. (This is why you should not export multiple meshes as one object, because
they will each need a separate texture file.)

Using the Textures


The extracted textures will have low noise, which is an advantage of
texture extraction over using individual frames. Clean images make a good
starting point for further work; however, you will usually need to add noise as part
of your compositing, and the noise must change every frame to match the
original shot.

Optimizing for Real-Time Playback
SynthEyes can be used as a RAM player for real-time playback of source
sequences, source with temporary inserts, or final renders. This section will
discuss how to best configure SynthEyes for this purpose.
Note that SynthEyes leaves incoming Cineon and DPX files in their raw
format, so that they can be undistorted and saved with maximum accuracy. If you
want to use SynthEyes as a RAM player, you should use the image preprocessor
to color-correct the images for proper display. You might use the low, mid,
gamma, and high level controls, or a color LUT.

Image Storage
First, you want to get the shot into RAM. Clearly, having a lot of RAM will
help. If you are using a 32-bit system (XP/Vista-32 or OS X), you can only cache
about 2.5 GB of imagery in RAM at a time, regardless of how much RAM is in
your system, due to the nature of 32-bit addressing. In SynthEyes-64, running on
XP/Vista-64, you can use your entire RAM, except for about 1.5 GB.
If your shot does not fit, you have two primary options: using the small
playback-range markers on the SynthEyes time bar to play back a limited range
of the shot at a time, or reducing the amount of memory by down-sampling the
images in the SynthEyes image preprocessor (or maybe dropping to black/white). If
you have 4K film or RED scans and are playing back on a 2K monitor, you might
as well down-sample by 2x anyway.
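As a rough way to estimate whether a shot will fit (simple arithmetic, not a
SynthEyes feature), multiply out the per-frame size; down-sampling by 2x cuts
the total by a factor of four:

    def sequence_ram_gb(width, height, frames, channels=3, bytes_per_chan=1):
        # e.g. 500 frames of 1920x1080 8-bit RGB is about 2.9 GB.
        return width * height * channels * bytes_per_chan * frames / 2**30

    print(sequence_ram_gb(1920, 1080, 500))            # ~2.9
    print(sequence_ram_gb(1920 // 2, 1080 // 2, 500))  # ~0.72 after 2x down-sampling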
If you have a RAID array on your computer, SynthEyes's sophisticated
image prefetch system should let you pull large sequences rapidly from disk.

Refresh Rate Optimization


You want SynthEyes to play back the images at as rapid a rate as
possible. On Windows, that usually means the Camera view, in normal mode, not
OpenGL. On a Mac, use the Camera View in OpenGL mode.
All the other items being displayed also take up time and affect the display
rate. From the Window menu, turn off Show Top Time bar and select No
Panel. On the View menu, adjust Show Trackers and Show 3-D points
depending on the situation.
Select the Camera view. It will now fill the entire viewport area, with only
the menu and toolbar at top, the status line showing playback rate at the bottom,
and a small margin on the left and right. You can further reduce the items
displayed by selecting Window/Floating Camera. (There is no margin-less full-
screen mode.)

Actual-Speed Playback
Once you have your shot playing back as rapidly as possible, you
probably want it to play at the desired rate, typically 24, 25, or 29.97 fps.

347
OPTIMIZING FOR REAL-TIME PLAYBACK

You can tell SynthEyes to play back at full speed, half speed, quarter
speed, or double actual speed using the items on the View menu.
SynthEyes does not change your monitor display rate. It achieves your
desired frame rate by playing frames as rapidly as possible, duplicating or
dropping frames as appropriate (much like a film projector double-exposes
frames). The faster the display rate, the more accurately the target frame rate
can be achieved, with less jitter.
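A sketch of that frame-duplication/dropping idea (illustrative only, not
SynthEyes's code): on each display refresh, show whichever source frame the
elapsed wall-clock time calls for.

    import time

    def playback(frames, target_fps, draw):
        # Frames are duplicated or dropped automatically when the display
        # refreshes faster or slower than the target rate.
        start = time.time()
        while True:
            index = int((time.time() - start) * target_fps)
            if index >= len(frames):
                break
            draw(frames[index])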
With the control panel hidden, you should use the space bar to start and
stop playback, and shift-A to rewind to the beginning of the shot.

Safe Areas
You can enable one or more safe-area overlays from the safe area
submenu of the View menu.

Troubleshooting
Sliding. This is what you see when an object appears to be moving,
instead of stationary on a floor, for example. This is a user error, not a software
error, typically due to object placement errors. Almost always, this is because the
inserted object has not been located in exactly the right spot, rather than
indicating a tracking problem. Often, an object is inserted an inch or two above a
floor. Be sure you have tracked the right spot: to determine floor level, track
marks on the floor, not tennis balls sitting on it, which are effectively an inch or
two higher. If you have to work from the tennis balls, set up the floor coordinate
system taking the ball radius into account, or place the object the corresponding
amount below the apparent floor.
Also, place trackers near the location of the inserted object whenever
possible.
Another common cause of sliding: a tracker that jumps from one spot to
another at some frame during the track.
It lines up in SynthEyes, but not XXX. The export scripts do what they
can to try to ensure that everything lines up just as nicely in your post-tracking
application as in SynthEyes, but life is never simple. There are preferences that
may be different, maybe you're integrating into an existing setup, maybe you
didn't think hitting xxx would matter, etc. The main causes of this problem have
been when the field of view is mangled (especially when people worry about
focal length instead, and have the wrong back plate width), and when the post-
tracking application turns out to be using a slightly different timing for the images,
one frame earlier or later, or 29.97 vs 30 fps etc, or with or without some
cropping.
Camera01: No trackers, please fix or set camera to disabled. You
have created a scene with more than one camera, opening a new shot into an
existing file, one with no trackers. The message is 100% correct. You need to
select the original camera on the Shot menu, then Shot/Remove object.
Can't locate satisfactory initial frame when solving. When the
Constrain checkbox is on (Solver panel), the constrained trackers need to be
active on the begin and end frames. Consequently, keeping Constrain off is
preferable. Alternatively, the shot may lack much parallax. Try setting the
Solver Panel's Begin and/or End frames manually. For example, set the range to
the entire shot, or a long run of frames with many trackers in common. However,
keep the range short enough that the camera motion from beginning to end stays
around 30 degrees maximum rotation about any axis.
I tried Tripod mode, and now nothing works and you get Can't locate
satisfactory initial frame or another error message. Tripod mode turns all the
trackers to Far, since they will have no distance data in tripod mode. Select all
the trackers, and turn Far back off (from the coordinate system control panel).

Bad Solution, very small field of view. Sometimes the final solution will
be very small, with a small field of view. Often this means that there is a problem
with one or more trackers, such as a tracker that switches from one feature to a
different one, which then follows a different trajectory. It might also mean an
impossible set of constraints, or sometimes an incomplete set of rotation
constraints. You might also consider flipping on the Slow but sure box, or give a
hint for a specific camera motion, such as Left or Up. Eliminate inconsistent
constraints as a possibility by turning off the Constrain checkbox.
Object Mode Track Looks Good, but Path is Huge. If you've got an
object mode track that looks good (the tracker points are right on the tracker
boxes) but the object path is very large and flying all over the place, usually you
haven't set up the object's coordinate system, so by default it is the camera
position, far from the object itself. Select one tracker to be the object origin, and
use two or more additional ones to set up a coordinate system, as if it was a
normal camera track.
Master Reset Does Not Work. By design, the master reset does not
affect objects or cameras in Refine or Refine Tripod mode: they will have to be
set back to their primary mode anyway, and this prevents inadvertent resets.
Can't open an image file or movie. Image file formats leave room for
interpretation, and from time to time a particular program may output an image in
a way that SynthEyes is not prepared to read. SynthEyes is intended for RGB
formats with 8 or more bits per channel. Legacy or black and white formats will
probably not read. If you find a file you think should read, but does not, please
forward it to SynthEyes support. Such problems are generally quick to rectify,
once the problematic file can be examined in detail. In the meantime, try a
different file format, or different save options, in the originating program, if
possible, or use a file format converter if available. Also, make sure you can read
the image in a different program, preferably not the one that created it: some
images that SynthEyes couldn't read have turned out to have been corrupted
previously.
Can't delete a key on a tracker (i.e. by right-clicking in the tracker view
window, or right-clicking the Now button). If the tracker is set to automatically key
every 12 frames, and this is one of those keys, deleting it will work, but
SynthEyes will immediately add a new key! Usually you want to back up a few
frames and add a correct key; then you can delete or correct the original one. Or,
increase the auto-key setting. Also, you can not delete a key if the tracker is
locked.

Crashes
We work hard to make sure that SynthEyes and Synthia are as reliable as
possible. If there is a problem, we want to minimize lost work, and we would
very much appreciate it if you can provide us as much information as possible, so
that we can isolate and eliminate the cause of the problem.

For more information, see https://www.ssontech.com/faqs/crash.html


Recovering Crash Dumps
Dump files can be packaged up and emailed to support@ssontech.com
along with a step-by-step description of what you were doing immediately before
the crash. If the file is more than 5 MB or so, please use Dropbox etc.
Windows: Crash dump files for the last 3 crashes can be retrieved by
opening SynthEyes and selecting the File/User Data Folder menu item. Go
UP two levels to the AppData folder, then DOWN into the Local, SynthEyes,
and CrashDumps folders, in that order.
Mac OS X: When the Mac crash dialog appears, click the See Report button.
Then command-A and command-C to copy all of the report onto the
clipboard. Paste it into the email message.
Linux: The location or existence of the core dump file depends on your
particular system and its settings. You can package up core dumps if you can
generate them.
SNI File Recovery
In the event that SynthEyes detects an internal error, it will usually pop up
an Imminent Crash dialog box asking you if you wish to save a crash file,
crash.sni. You should take a screen capture with Print Screen on your keyboard,
then respond Yes. SynthEyes will save the current file to a special crash location,
then pop up another dialog box that tells you that location (within your
Documents and Settings folder).
You should then open a paint program such as Photoshop, Microsoft
Paint, Paint Shop Pro, etc, and paste in the screen capture. Save the image to a
file, then e-mail the screen capture, the crash save file, and a short description of
what you were doing right before the crash, to SynthEyes technical support for
diagnosis, so that the problem can be fixed in future releases. If you have
Microsoft's Dr. Watson turned on, forwarding that file would also be helpful.
The crash save file (crash.sni) is your SynthEyes scene, right before it
began the operation that resulted in the crash. You can find the file by starting
SynthEyes, then selecting File/User Data Folder, if you did not record its location
earlier. You should often be able to continue using this file, especially if the crash
occurred during solving. It is conceivable that the file might be corrupted, so if
you recently had saved the file, you may wish to go back to that file for safety.

32-bit Systems: By far the largest source of SynthEyes crashes is
running your machine out of memory. Auto-tracked HD scenes will
do that easily on 32-bit systems. If you suspect that may be a problem,
or SynthEyes crashes, reduce the Max RAM Cache preference down
to 100 MB or so. It is also a good idea to re-open SynthEyes if you
have auto-tracked the same shot several times, or turn down the undo
setting, because the amount of data per undo can be very large.

Combining Automated and Supervised Tracking
It can be helpful to combine automated tracking with some supervised
trackers, especially when you would like to use particular features in the image to
define the coordinate system, to help the automated tracker with problematic
camera motions, to aid scene modeling, or to stabilize effects insertion at a
particular location.

Guide Trackers
Guide Trackers are supervised trackers, added before automated
tracking. Pre-existing trackers are automatically used by the automated tracking
system to re-register frames as they move. With this guidance, the automated
tracking system can accommodate more, or crazier, motions than it would
normally expect.
Unless the overall feature motion is very slow, you should always add
multiple guide trackers distributed throughout the image, so that at any location in
the image, the closest guide tracker has a similar motion. [The main exception: if
you have a jittery hand-held shot where, if it was stabilized, the image features
actually move rather slowly, you can use only a single guide tracker.]

Note: guide trackers are rarely necessary, and are processed
differently than in previous versions of SynthEyes.

Supervised Trackers, After Automated Tracking


You can easily add supervised trackers after running the automated
tracker. Create the trackers from the Tracker panel, adjust the coordinate system
settings as needed, then, on the Solver Panel, switch to Refine mode and hit Go!

Converting Automatic Trackers to Supervised Trackers


Suppose you want to take an automatically-generated tracker and modify
it by hand. You may wish to improve it: perhaps to extend it earlier or later in the
shot, or to patch up a few frames where it gets off track.
From the Tracking Control Panel select the automatically-generated
tracker(s) you want to work on, and unlock them. This converts them to
supervised trackers and sets up a default search region for them.
You can also use the To Golden button on the Feature Control Panel to
turn selected trackers from automatic to supervised without unlocking them (and
without setting up a search region).
Sometimes, you may wish to convert a number of automatic trackers to
supervised, possibly add some additional trackers, and then get rid of all the
other automatically-generated trackers, leaving you with a well-controlled group
of supervised trackers. The Delete Leaden button on the Feature Control Panel
will delete all trackers that have not been converted to golden.

You can also use the Combine trackers item on the Track Menu to
combine a supervised tracker with an automatically-generated one, if they are
tracking the same feature.
The Track/Fine-tune Trackers menu item re-tracks supervised trackers, to
improve accuracy on some imagery.

Stabilization
In this section, we'll go into SynthEyes' stabilization system in depth, and
describe some of the nifty things that can be done with it. If we wanted, we could
have a single button, "Stabilize this!", that would quickly and reliably do a bad job
almost all the time. If that's what you're looking for, there are some other
software packages that will be happy to oblige. In SynthEyes, we have provided
a rich toolset to get outstanding results in a wide variety of situations.

Note: See the section on 360VR for information on how to stabilize
360VR footage.

You might wonder why we've buried such a wonderful and significant
capability quite so far into the manual. The answer is simple: in the hopes that
you've actually read some of the manual, because effectively using the stabilizer
will require that you know a number of SynthEyes concepts, and how to use the
SynthEyes tracking capabilities.
If this is the first section of the manual that you're reading, great, thanks
for reading this, but you'll probably need to check out some of the other sections
too. At the least, you have to read the Stabilization quick-start.
Also, be sure to check the web site for the latest tutorials on stabilization.
We apologize in advance for some of the "rant" content of the following
sections, but it's really in your best interest!

Why SynthEyes Has a Stabilizer


The simple and ordinary need for stabilization arises when you are
presented with a shot that is bouncing all over the place, and you need to clean it
up into a solid professional-looking shot. That may be all that is needed, or you
might need to track it and add 3-D effects also. Moving-camera shots can be
challenging to shoot, so having software stabilization can make life easier.
Or, you may have some film scans which are to be converted to HD or SD
TV resolution, and effects added.
People of all skill levels have been using a variety of ad-hoc approaches
to address these tasks, sometimes using software designed for this, and
sometimes using or abusing compositing software. Sometimes, presumably, this
all goes well. But many times it does not: a variety of problem shots that are
just plain bad have been sent to SynthEyes tech support. You can look at them
and see they have been stabilized, and not in a good way.
We have developed the SynthEyes stabilizer not only to stabilize shots,
but to try to ensure that it is done the right way.

How NOT to Stabilize


Though it is relatively easy to rig up a node-based compositor to shift
footage back and forth to cancel out a tracked motion, this creates a fundamental
problem:
Most imaging software, including you, expects the optic center of an
image to fall at the center of that image. Otherwise, it looks weird; the
fundamental camera geometry is broken. The optic center might also be called
the vanishing point, center of perspective, back focal point, or center of lens
distortion.
For example, think of shooting some footage out of the front of your car as
you drive down a highway. Now cut off the right quarter of all the images and
look at the sequence. It will be 4:3 footage, but it's going to look strange; the
optic center is going to be off to the side.
If you combine off-center footage with additional rendered elements, they
will have the optic axis at their center, and combined with the different center of
the original footage, they will look even worse.
So when you stabilize by translating an image in 2-D (and usually zooming
a little), you've now got an optic center moving all over the place. Right at the
point you've stabilized, the image looks fine, but the corners will be flying all over
the place. It's a very strange effect, it looks funny, and you can't track it right. If
you don't know what it is, you'll look at it, and think it looks funny but not know
what has hit you.
Recommendation: if you are going to be adding effects to a shot, you
should ask to be the one to stabilize or pan/scan it also. We've given you the tool
to do it well, and avoid mishap. That's always better than having someone else
mangle it, and having to explain later why the shot has problems, or why you
really need the original un-stabilized source by yesterday.

In-Camera Stabilization
Many cameras now feature built-in stabilization, using a variety of
operating principles. These stabilizers, while fine for shooting baby's first steps,
may not be fine at all for visual effects work.
Electronic stabilization uses additional rows and columns of pixels, then
shifts the image in 2-D, just like the simple but flawed 2-D compositing approach.
These are clearly problematic.
One type of optical stabilizer apparently works by putting the camera
imaging CCD chip on a little platform with motors, zipping the camera chip
around rapidly so it catches the right photons. As amazing as this is, it is clearly
just the 2-D compositing approach.
Another optical stabilizer type adds a small moving lens in the middle of
the collection of simple lens comprising the overall zoom lens. Most likely, the
result is equivalent to a 2-D shift in the image plane.

A third type uses prismatic elements at the front of the lens. This is more
likely to be equivalent to re-aiming the camera, and thus less hazardous to the
image geometry.
Doubtless additional types are in use and will appear, and it is difficult to
know their exact properties. Some stabilizers seem to have a tendency to
intermittently jump when confronted with smooth motions. One mitigating factor
for in-camera stabilizers, especially electronic, is that the total amount of offset
they can accommodate is small; the less they can correct, the less they can
mess up.
Recommendation: It is probably safest to keep camera stabilization off
when possible, and keep the shutter time (angle) short to avoid blur, except when
the amount of light is limited. Electronic stabilizers have trouble with limited light
so that type might have to be off anyway.

3-D Stabilization
To stabilize correctly, you need 3-D stabilization that performs keystone
correction (like a projector does), re-imaging the source at an angle. In effect,
your source image is projected onto a screen, then re-shot by a new camera
looking in a somewhat different direction with a smaller field of view. Using a new
camera keeps the optic center at the center of the image.
In order to do this correctly, you always have to know the field of view of
the original camera. Fortunately, SynthEyes can tell us that.
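As a sketch of the underlying math (illustrative, not SynthEyes's code):
re-shooting with a re-aimed virtual camera amounts to warping the source image
by a homography built from the original and new camera intrinsics and the
re-aiming rotation.

    import numpy as np

    def keystone_homography(K_src, K_out, R):
        # K_src, K_out: 3x3 intrinsic matrices of the original and new
        # cameras (built from their fields of view); R: 3x3 rotation that
        # re-aims the camera. Returns the homography mapping source
        # pixels to output pixels (valid for a pure rotation).
        return K_out @ R @ np.linalg.inv(K_src)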

Stabilization Concepts
Point of Interest (POI). The point of interest is the fixed point that is being
stabilized. If you are pegging a shot, the point of interest is the one point on the
image that never moves.
POI Deltas (Adjust tab). These values allow you to intentionally move the
POI around, either to help reduce the amount of zoom required, or to achieve a
particular framing effect. If you create a rotation, the image rotates around the
POI.
Stabilization Track. This is roughly the path the POI took: it is a
direction in 3-D space, described by pan/tilt/roll angles, basically where the
camera (POI) was looking (except that the POI isn't necessarily at the center of
the image).
Reference Track. This is the path in 3-D we want the POI to take. If the
shot is pegged, then this track is just a single set of values, repeated for the
duration of the shot.
Separate Field of View Track. The image preparation system has its own
field of view track. The image prep's FOV will be larger than the main FOV, because
the image prep system sees the entire input image, while the main tracking and
solving works only on the smaller stabilized sub-window output by image prep.

Note that an image prep FOV is needed only for stabilization, not for pixel-level
adjustments, downsampling, etc. The Get Solver FOV button transfers the main
FOV track to the stabilizer.
Separate Distortion Track. Similarly there is a separate lens distortion
track. The image prep's distortion can be animated, while the main distortion can
not. The image prep distortion or the main distortion should always be zero; they
should never both be nonzero simultaneously. The Get Solver Distort button
transfers the main distortion value (from solving or the Lens-panel alignment
lines) to the stabilizer, and begs you to let it clear the main distortion value
afterwards.
Stabilization Zoom. The output window can only be a portion of the size
of the input image. The more jiggle, the smaller the output portion must be, to be
sure that it does not run off the edge of the input (see the Padded mode of the
image prep window to see this in action). The zoom factor reflects the ratio of the
input and output sizes, and also what is happening to the size of a pixel. At a
zoom ratio of 1, the input and output windows and pixels are the same size. At a
zoom ratio of 2, the output is half the size of the input, and each incoming pixel
has to be stretched to become two pixels in the output, which will look fairly
blurry. Accordingly, you want to keep the zoom value down in the 1.1-1.3 region.
After an Auto-scale, you can see the required zoom on the Adjust panel.
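As a rough illustration of the trade-off (simple arithmetic using a simplified
1-D model, not SynthEyes's actual auto-scale), the zoom needed grows with the
largest correction the stabilizer must absorb:

    def required_zoom(max_offset_fraction):
        # max_offset_fraction: worst-case POI correction as a fraction of
        # the frame width; the output window must shrink enough that it
        # never runs off the edge of the input.
        return 1.0 / (1.0 - 2.0 * max_offset_fraction)

    print(required_zoom(0.05))   # ~1.11 -- comfortably in the 1.1-1.3 region
    print(required_zoom(0.15))   # ~1.43 -- pixels noticeably stretched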
Re-sampling. There's nothing that says we have to produce the same
size image going out as coming in. The Output tab lets you create a different
output format, though you will have to consider what effect it has on image
quality. Re-sampling 3K down to HD sounds good; but re-sampling DV up to HD
will come out blurry because the original picture detail is not there.
Interpolation Filter. SynthEyes has to create new pixels in-between the
existing ones. It can do so with different kinds of filtering to prevent aliasing,
ranging from the default Bi-Linear, through 2-Mitchell, to the most complex 3-Lanczos.
The bi-linear filter is fastest but produces the softest image. The Lanczos filters
take longer, but are sharper, although this can be a drawback if the image is
noisy.
Tracker Paths. One or more trackers are combined to form the
stabilization track. The trackers' 2-D paths follow the original footage. After
stabilization, they will not match the new stabilized footage. There is a button,
Apply to Trkers, that adjusts the tracker paths to match the new footage, but
again, they then match that particular footage and they must be restored to
match the original footage (with Remove f/Trkers) before making any later
changes to the stabilization. If you mess up, you either have to return to an
earlier saved file, or re-track.

Overall Process
We're ready to walk through the stabilization process. You may want to
refer to the Image Preprocessor Reference.

 - Track the features required for stabilization: either a full auto-track,
   supervised tracking of particular features to be stabilized, or a combination.
 - If possible, solve the shot either for full 3-D or as a tripod shot, even if it is not
   truly nodal. The resulting 3-D point locations will make the stabilization more
   accurate, and it is the best way to get an accurate field of view.
 - If you have not solved the shot, manually set the Lens FOV on the Image
   Preprocessor's Lens tab (not the main Lens panel) to the best available
   value. If you do set up the main lens FOV, you can import it to the Lens tab.
 - On the Stabilization tab, select a stabilization mode for translation and/or
   rotation. This will build the stabilization track automatically if there isn't one
   already (as if the Get Tracks button was hit), and import the lens FOV if the
   shot is solved.
 - Adjust the frequency spinner as desired.
 - Hit the Auto-Scale button to find the required stabilization zoom.
 - Check the zoom on the Adjust tab; using the Padded view, make any
   additional adjustment to the stabilization activity to minimize the required
   zoom, or achieve desired shot framing.
 - Output the shot. If only stabilized footage is required, you are done.
 - Update the scene to use the new imagery, and either re-track or update the
   trackers to account for the stabilization.
 - Get a final 3-D or tripod solve and export to your animation or compositing
   package for further effects work.
There are two main kinds of shots and stabilization for them: shots
focusing on a subject, which is to remain in the frame, and traveling shots, where
the content of the image changes as new features are revealed.

Stabilizing on a Subject
Often a shot focuses on a single subject, which we want to stabilize in the
frame, despite the shaky motion of the camera. Example shots of this type
include:
 - The camera person walking towards a mark on the ground, to be
   turned into a cliff edge for a reveal.
 - A job site to receive a new building, shot from a helicopter orbiting
   overhead.
 - A camera car driving by a house, focusing on the house.
To stabilize these shots, you will identify or create several trackers in the
vicinity of the subject, and with them selected, select the Peg mode on the
Translation list on the Stabilize tab.

This will cause the point of interest to remain stationary in the image for
the duration of the shot.
You may also stabilize and peg the image rotation. Almost always, you will
want to stabilize rotation. It may or may not be pegged.
You may find it helpful to animate the stabilized position of the point of
interest, in order to minimize the zoom required (see below), and also to enliven a
shot somewhat.
Some car commercials are shot from a rig that shows both the car and the
surrounding countryside as the car drives: they look a bit surreal because the car
is completely stationary, having been pegged exactly in place. No real camera
rig is that perfect!

Stabilizing a Traveling Shot


Other shots do not have a single subject, but continue to show new
imagery. For example,
 - A camera car, with the camera facing straight ahead
 - A forward-facing camera in a helicopter flying over terrain
 - A camera moving around the corner of a house to reveal the
   backyard behind it
In such shots, there is no single feature to stabilize. Select the Filter mode
for the stabilization of translation and maybe rotation. The result is similar to the
stabilization done in-camera, though in SynthEyes you can control it and have
keystone correction.
When the stabilizer is filtering, the Cut Frequency spinner is active. Any
vibratory motion below that frequency (in cycles per second) is preserved, and
vibratory motion above that frequency is greatly reduced or eliminated.
You should adjust the spinner based on the type of motion present, and
the degree of stabilization required. A camera mounted on a car with a rigid
mount, such as a StickyPod, will have only higher-frequency residual vibration,
and a larger value can be used. A hand-held shot will often need a frequency
around 0.5 Hz to be smooth.
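As a sketch of what filter-mode stabilization is doing conceptually (a generic
low-pass smoother standing in for whatever filter SynthEyes actually uses, so
treat the details as assumptions), each angle track is smoothed so that motion
slower than the cut frequency survives while faster jitter is suppressed:

    import numpy as np

    def low_pass(track, fps, cut_hz):
        # track: one angle per frame (e.g. pan, tilt, or roll).
        sigma = fps / (2.0 * np.pi * cut_hz)       # smoothing width, in frames
        radius = int(3 * sigma) + 1
        x = np.arange(-radius, radius + 1)
        kernel = np.exp(-0.5 * (x / sigma) ** 2)
        kernel /= kernel.sum()
        padded = np.pad(track, radius, mode='edge')
        return np.convolve(padded, kernel, mode='valid')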

Note: When using filter-mode stabilization, the length of the shot
matters. If the shot is too short, it is not possible to accurately control
the frequency and distinguish between vibration and the desired
motion, especially at the beginning and end of the shot. Using a longer
version of the take will allow more control, even if much of the
stabilized shot is cut after stabilization.

Start with a larger cut value, and decrease it (to a value that filters more)
only after assessing the impact on the shot. If a shot contains large bumps and
you try to filter them too severely, the entire source image will go offscreen: not
only will zoom be excessive, but the output will be totally black.

Minimizing Zoom
The more zoom required to stabilize a shot, the less image quality will
result, which is clearly bad. Can we minimize the zoom, and maximize image
quality? Of course, and SynthEyes provides the controllability to do so.
Stabilizing a shot has considerable flexibility: the shot can be stable in lots
of different ways, with different amounts of zoom required. We want a shot that
everyone agrees is stable, but minimizes the effect on quality. Fortunately, we
have the benefit of foresight, so we can correct a problem in the middle of a shot,
anticipating it long before it occurs, and provide an apparently stable result.
Animating POI
The basic technique is to animate the position of the point-of-interest
within the frame. If the shot bumps left suddenly, there are fewer pixels available
on the left side of the point of interest to be able to maintain its relative position in
the output image, and a higher zoom will be required. If we have already moved
the point of interest to the left, fewer pixels are required, and less zoom is
required.
Earlier, in the Stabilization Quick Start, we remarked that the 28% zoom
factor obtained by animating the rotation could be reduced further. We'll continue
that example here to show how. Re-do the quick start to completion, go to frame
178, with the Adjust tab open, in Padded display mode, with the make key button
turned on.
From the display, you can see that the red output-area rectangle is very
near the edge of the image. Grab the purple point-of-interest crosshair, and drag
the red rectangle up into the middle of the image. Now everything is a lot safer. If
you switch to the stabilize tab and hit Autoscale, the red rectangle enlarges:
there is less zoom, as the Adjust tab shows. Only 15% zoom is now required.
By dragging the POI/red rectangle, we reduced zoom. You can see that
what we did amounted to moving the POI. Hit Undo twice, and switch to the Final
view.
Drag the POI down to the left, until the Delta U/V values are approximately
0.045 and -0.035. Switch back to the Padded view, and you'll see you've done
the same thing as before. The advantage of the padded view is that you can
more easily see what you are doing, though you can get a similar effect in the
Final view by increasing the margin to about 0.25, where you can see the dashed
outline of the source image.
If you close the Image Prep dialog and play the shot, you will see the
effect of moving the POI: a very stable shot, though the apparent subject
changes over time. It can make for a more interesting shot and more creative
decisions.

Too Much of a Good Thing?


To get the most benefit, scrub through your shot and look for the worst
frame, the one where the largest part of the output rectangle is missing, and
adjust the POI position on that frame.
After you do that, there will be some other frame which is now the worst
frame. You can go and adjust that too, if you want. As you do this, the zoom
required will get less and less.
There is a downside: as you do this, you are creating more of the
shakiness you are trying to get rid of. If you keep going, you could get back to no
zoom required, but all the original shakiness, which is of course senseless.
Usually, you will only want to create two or three keys at most, unless the
shot is very long. But exactly where you stop is a creative decision based on the
allowable shakiness and quality impact.
Warning: SynthEyes uses spline interpolation between keys. If you have
keys close together, other frames may have surprisingly high values. If this is
likely or already happening, open the Graph Editor before the Image
Preprocessor.
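
To see why closely spaced keys can misbehave, here is a tiny stand-alone
illustration (generic Python/SciPy with a generic cubic spline, not necessarily
the exact spline type SynthEyes uses, and the key values are invented) of how a
smooth spline through two keys only one frame apart can swing well outside the
keyed values on neighboring frames:

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Delta-U keys, with two keys only one frame apart at frames 10 and 11
    key_frames = [0, 10, 11, 30]
    key_values = [0.0, 0.0, 0.1, 0.0]

    spline = CubicSpline(key_frames, key_values)
    frames = np.arange(0, 31)
    values = spline(frames)

    # The interpolated track overshoots the keyed 0.0 to 0.1 range between keys,
    # even though no key asks for that.
    print(values.min(), values.max())

Checking the Delta curves in the Graph Editor will show exactly where such
overshoots occur.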
Auto-Scale Capabilities
The auto-scale button can automate the adjustment process for you, as
controlled by the Animate listbox and Maximum auto-zoom settings.
With Animate set to Neither, Auto-scale will pick the smallest zoom
required to avoid missing pieces on the output image sequence, up to the
specified maximum value. If that maximum is reached, there will be missing
sections.
If you change the Animate setting to Translate, though, Auto-scale will
automatically add delta U/V keys, animating the POI position, any time the zoom
would have to exceed the maximum.
Rewind to the beginning of the shot, and control-right-click the Delta-U
spinner, clearing all the position keys.
Change the Animate setting to Translate, reduce the Maximum auto-zoom
to 1.1, then click Auto-Scale. SynthEyes adds several keys to achieve the
maximum 10% zoom. If you play back the sequence, you will see the shot
shifting around a bit; 10% is probably too low given the amount of jitter in the
shot to begin with.
The auto-scale button can also animate the zoom track, if enabled with the
Animate setting. The result is equivalent to a zooming camera lens, and you
must be sure to note that in the main lens panel setting if you will 3-D solve the
shot later. This is probably only useful when there is a lot of resolution available
to begin with, and the point of interest approaches the boundary of the image at
the end of the shot.

Keep in mind that the Auto-scale functionality is relatively simple. By
considering the purpose of the shot as well as the nature of any problems in it,
you should often be able to do better.

Tweaking the Point of Interest


This is different from moving it!
When the selected trackers are combined to form the single overall
stabilization track, SynthEyes examines the weight of each tracker, as controlled
from the main Tracker panel.
This allows you to shift the position of the point-of-interest (POI) within a
group of trackers, which can be handy.
Suppose you want to stabilize at the location of a single tracker, but you
want to stabilize the rotation as well. With a single tracker, rotation can not be
stabilized. If you select two trackers, you can stabilize the rotation, but without
further action, the point of interest will be sitting between the two trackers, not at
the location of the one you care about.
To fix this, select the desired POI tracker in the main viewport, and
increase its weight value to the maximum (currently 10). Then, select the other
tracker(s), and reduce the weight to the minimum (0.050). This will put the POI
very close to your main tracker.
If you play with the weights a bit, you can make the POI go anywhere
within a polygon formed by the trackers. But do not be surprised if the resulting
POI seems to be sliding on the image: the POI is really a 3-D location, and
usually the combination of the trackers will not be on the surface (unless they are
all in the same plane). If this is a problem for what you want to do, you should
create a supervised tracker at the desired POI location and use that instead.
If you have adjusted the weights, and later want to re-solve the scene, you
should set the weights back to 1.0 before solving. (Select them all then set the
weight to 1).
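
Conceptually, you can think of the stabilization POI as a weighted average of
the selected trackers' positions, so a quick back-of-the-envelope check shows why
the settings above park the POI almost exactly on the heavy tracker. This is an
illustrative Python sketch with invented 2-D positions; the real POI is a 3-D
combination, as noted above, and this is not SynthEyes code.

    # Hypothetical screen positions (u, v) of three selected trackers
    trackers = [(0.30, 0.40), (0.70, 0.60), (0.60, 0.20)]
    weights  = [10.0, 0.05, 0.05]    # main tracker at maximum, others at minimum

    total = sum(weights)
    poi_u = sum(w * u for w, (u, v) in zip(weights, trackers)) / total
    poi_v = sum(w * v for w, (u, v) in zip(weights, trackers)) / total

    print(poi_u, poi_v)   # lands essentially on top of the first tracker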

Resampling and Film to HDTV Pan/Scan Workflow


If you are working with filmed footage, often you will need to pull the actual
usable area from the footage: the scan is probably roughly 4:3, but the desired
final output is 16:9 or 1.85 or even 2.35, so only part of the filmed image will be
used. A director may select the desired portion to achieve a desired framing for
the shot. Part of the image may be vignetted and unusable. The image must be
cropped to pull out the usable portion of the image with the correct aspect ratio.
This cropping operation can be performed as the film is scanned, so that
only the desired framing is scanned; clearly this minimizes the scan time and disk
storage. But, there is an important reason to scan the entire frame instead.
The optic center must remain at the center of the image. If the scanning is
done without paying attention, it may be off center, and almost certainly will be if
the framing is driven by directorial considerations. If the entire frame is scanned,
or at least most of it, then you can use SynthEyes's stabilization software to
perform keystone correction and produce properly centered footage.
As a secondary benefit, you can do pan and scan operations to stabilize
the shots, or achieve moving framing that would be difficult to do during
scanning. With the more complete scan, the final decision can be deferred or
changed later in production.
The Output tab on the Image Preparation controls resampling, allowing
you to output a different image format than that coming in. The incoming
resolution should be at least as large as the output resolution, for example, a 3K
4:3 film scan for a 16:9 HDTV image at 1920x1080p. This will allow enough
latitude to pull out smaller subimages.
If you are resampling from a larger resolution to a smaller one, you should
use the Blur setting to minimize aliasing effects (moiré bands). You should
consider the effect of how much of the source image you are using before
blurring. If you have a zoom factor of 2 into a 3K shot, the effective pixel count
being used is only 1.5K, so you probably would not blur if you are producing
1920x1080p HD.
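
The arithmetic behind that rule of thumb fits in a few lines (illustrative Python
only; the pixel counts are the approximate ones from the example above):

    source_width = 3072      # roughly a "3K" scan
    output_width = 1920      # HD output
    zoom = 2.0               # stabilizer zoom factor

    effective_width = source_width / zoom        # pixels actually used: 1536
    needs_blur = effective_width > output_width  # blur only when downsampling
    print(effective_width, needs_blur)           # 1536.0 False -> no blur needed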
Due to the nature of SynthEyes's integrated image preparation system, the
re-sampling, keystone correction, and lens un-distortion all occur simultaneously
in the same pass. This presents a vastly improved situation compared to a typical
node-based compositor, where the image will be resampled and degraded at
each stage.

Changing Shots, and Creating Motion in Stills


You can use the stabilization system to adjust framing of shots in post-
production, or to create motion from still images (the Ken Burns effect).
To use the stabilizing engine you have to be stabilizing, so simply
animating the Delta controls will not let you pan and scan without the following
trick. Delete any trackers, click the Get Tracks button, and then turn on the
Translation channel of the stabilizer. This turns on the stabilizer, making the
Delta channels work, without doing any actual stabilization.
You must enter a reasonable estimate of the lens field of view. If it is a
moving-camera or tripod-mode shot, you can track it first to determine the field of
view. Remember to delete the trackers before beginning the mock stabilization.
If you are working from a still, you can use the single-frame alignment tool
to determine the field of view. You will need to use a text editor to create an IFL
file that contains the desired number of copies of your original file name.
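
Since an IFL file is just a plain text list of image file names, one per line, it
is also easy to generate with a few lines of script instead of a text editor. The
snippet below is an illustrative Python example; the file name and frame count are
placeholders for your own.

    frames = 100                           # desired length of the mock shot
    with open("myStill.ifl", "w") as ifl:
        for _ in range(frames):
            ifl.write("myStill.jpg\n")     # the same still repeated on every line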

Stabilization and Interlacing


Interlaced footage presents special problems for stabilization, because
jitter in the positioning between the two fields is equivalent to jitter in camera
position, which we're trying to remove. Because the two different fields are taken
at different points in time (1/30th or 1/25th of a second apart, regardless of shutter
time), it is impossible for man or machine to determine what exactly happened, in
general. Stabilizing interlaced footage will sacrifice a factor of two in vertical
resolution.
Best Approach: if at all possible, shoot progressive instead of interlace
footage. This is a good rule whenever you expect to add effects to a shot.
Fallback Approach: stabilize slow-moving interlaced shots as if they were
progressive. Stabilize rapidly-moving interlaced shots as interlaced.
To stabilize interlaced shots, SynthEyes stabilizes each sequence of fields
independently.
Note that within the image preparation subsystem, some animated tracks
are animated by the field, and some are animated by the frame.
Frame: levels, color/hue, distortion/scale, ROI
Field: FOV, cut frequency, Delta U/V, Delta Rot, Delta Zoom
When you are animating a frame-animated item on an interlaced shot, if
you set a key on one field (say 10), you will see the same key on the other field
(say 11). This simplifies the situation, at least on these items, if you change a
shot from interlaced to progressive or yes mode or back.

Avoid Slowdowns Due to Missing Keyframes


While you are working on stabilizing a shot, you will be re-fetching frames
from the source imagery fairly often, especially when you scrub through a shot to
check the stabilization. If the source imagery is a QuickTime or AVI that does not
have many (or any!) keyframes, random access into the shot will be slow, since
the codec will have to decompress all the frames from the last keyframe to get to
the one that is needed. This can require repeatedly decompressing the entire
shot. It is not a SynthEyes problem, or even specific to stabilizing, but is a
problem with the choice of codec settings.
If this happens (and it is not uncommon), you should save the movie as an
image sequence (with no stabilization), and Shot/Change Shot Images to that
version instead.
Alternatively, you may be able to assess the situation using the Padded
display, turning the update mode to Neither, then scrubbing through the shot.

After Stabilizing
Once you've finished stabilizing the shot, you should write it back out to
disk using the Save Sequence button on the Output tab. It is also possible to
save the sequence through the Perspective window's Preview Movie capability.
Each method has its advantages, but using the Save Sequence button will
be generally better for this purpose: it is faster; does less to the images; allows
you to write the 16-bit version; and allows you to write the alpha channel.
However, it does not overlay inserted test objects like the Preview Movie does.

Important: when writing stabilized 16-bit or floating-point images, be
sure that the Process bit depth and Store bit depth on the shot
parameters dialog are set appropriately. The default settings reduce
footage to 8 bits for tracking; you need to set it back to get the full bit
depth out.

You can use the stabilized footage you write for downstream applications
such as 3dsmax and Maya.
But before you export the camera path and trackers from SynthEyes, you
have a little more work to do. The tracker and camera paths in SynthEyes
correspond to the original footage, not the stabilized footage, and they are
substantially different. Once you close the Image Preparation dialog, you'll see
that the trackers are doing one thing, and the now-stable image doing something
else.
You should always save the stabilizing SynthEyes scene file at this point
for future use in the event of changes.
You can then do a File/New, open the stabilized footage, track it, then
export the 3-D scene matching the stabilized footage.
But if you have already done a full 3-D track on the original footage, you
can save time.
Click the Apply to Trkers button on the Output tab. This will apply the
stabilization data to the existing trackers. When you close the Image Prep, the 2-
D tracker locations will line up correctly, though the 3-D Xs will not yet. Go to the
solver panel, and re-solve the shot (Go!), and the 3-D positions and camera path
will line up correctly again. (If you really wanted to, you could probably use Seed
Points mode to speed up this re-solve.)

Important: if you later decide you want to change the stabilization
parameters without re-tracking, you must not have cleared the
stabilizer. Hit the Remove f/Trkers button BEFORE making any
changes, to get back to the original tracking data. Otherwise, if you
Apply twice, or Remove after changes, you will just create a mess.

Also, the Blip data is not changed by the Apply or Remove buttons, and it
is not possible to Peel any blip trails, which correspond to the original image
coordinates, after completing stabilization and hitting Apply. So you must either
do all peeling first; remove, peel, and reapply the stabilization; or retrack later if
necessary.

Flexible Workflows
Suppose you have written out a stabilized shot, and adjusted the tracker
positions to match the new shot. You can solve the shot, export it, and play
around with it in general. If you need to, you can pop the stabilization back off the
trackers, adjust the stabilization, fix the trackers back up, and re-solve, all without
going back to earlier scene files and thus losing later work. That's the kind of
flexibility we like.
There's only one slight drawback: each time you save and close the file,
then reopen it, you're going to have to wait while the image prep system
recomputes the stabilized image. That might be only a few seconds, or it might
be quite a while for a long film shot.
It's pretty stupid, when you consider that you've already written the
complete stabilized shot to disk!
Approach 1: do a Shot/Change Shot Images to the saved stabilized shot,
and reset the image prep system from the Preset Manager. This will let you work
quickly from the saved version, but you must be sure to save this scene file
separately, in case you need to change the stabilization later for some reason.
And of course, going back to that saved file would mean losing later work.
Approach 2: Create an image prep preset (stab) for the full stabilizer
settings. Create another image prep preset (quick), and reset it. Do the
Shot/Change Shot Images. Now you've got it both ways: fast loading, and if you
need to go back and change the stabilization, switch back to the first (stab)
preset, remove the stabilization from the trackers, change the shot imagery back
to the original footage, then make your stabilization changes. You'll then need to
re-write the new stabilized footage, re-apply it to the trackers, etc.
Approach 1 is clearly simpler and should suffice for most simple situations.
But if you need the flexibility, Approach 2 will give it to you.

Rotoscoping and Alpha Channel Mattes
You may choose to use SynthEyes's rotoscoping and alpha channel matte
capabilities when you are using automatic tracking or planar tracking.
Rotoscoping is helpful in the following situations:
A portion of the image contains significant image features that don't
correspond to physical objects, such as reflections, sparkling, lens
flares, camera moiré patterns, burned-in timecode, etc.,
There are pesky actors walking around creating moving features,
You want to track a moving object, but it doesn't cover the entire
frame,
You want to track both a moving object and the background
(separately).
In these situations, the automatic or planar tracker needs to be told, for
each frame, which parts of the image should be used to match-move the planar
tracker or camera and each object, and which portions of the image should be
ignored.

Hint: Often you can let the autotracker run, then manually delete the
unwanted trackers. This can be a lot quicker than setting up mattes. To
help find the undesirable trackers, turn on Tracker Trails on the Edit
menu.

SynthEyes provides two methods to control where the autotracker tracks:
animated splines and alpha channel mattes. Both can be used in one shot.
Planar trackers have four different methods: animated splines (as
described here), alpha channel mattes, in-plane masks, and tracker layering. For
more details specific to planar trackers, see the Planar Tracking Manual (on the
SynthEyes Help menu).
To create the alpha channel mattes, you need to use an external
compositing program to create the matte, typically by some variation of painting
it. If you've no idea what that last sentence said, you can skip the entire alpha
channel discussion and concentrate on animated splines, which do not require
any other programs.

Overall, and Rotoscope Panel


The Rotoscoping Panel controls the assignment of animated splines and
alpha-channel levels to cameras and objects. The next section will describe how
to set up splines and alpha channels, but for now, here are the rules for using
them.
The rotoscoping panel contains a list of splines. The top-most spline that
contains the blip wins. (The top of the list is the top of the stack on the image.) As
you add new splines (at the beginning of the list), they override the ones you
have previously added. Internally, SynthEyes searches the spline list from top to
bottom. You can think of the splines as being layered: the bottom of the list is the
back layer, the top of the list is at front and has priority.

Note: the order of the spline list has been changed from SynthEyes
2011 and earlier, to reflect typical industry conventions.

There are two buttons, Move Up and Move Down, that let you
change the order of the splines.
A drop-down listbox, underneath the main spline list, lets you change the
camera or object to which a spline is assigned.

This listbox always contains a Garbage item. If you assign Garbage to a
spline, that spline is a garbage matte and any blips within it are ignored.
If a blip isn't covered by any splines, then the alpha channel determines to
which object the blip is assigned.

Spline Workflow
When you create a shot, SynthEyes creates an initial static full-screen
rectangle spline that assigns all blips to the shot's camera. You might add
additional splines, for garbage matte areas or moving objects you want to track.
Or, you might delete the rectangle and add only a new animated spline, if you are
tracking a full-screen moving object.
Ideally, you should add splines before running the autotracker the first
time; that will be simplest. However, if you run the autotracker, then decide to
add or modify the splines (using the Roto panel), you can then use the Features
panel to create a new set of trackers:
Delete the existing trackers using control-A and delete, or the
Delete Leaden button on the features panel,
Click the Link Frames button, which updates the possible tracker
paths based on your modified splines. Don't worry, you will be
prompted for this if you forget in almost all cases.

Click the Peel All button to make new trackers.


The separate Link step is required to accommodate workflows with
manual Peeling using the Peel mode button. (You may also be prompted to Link
when entering Peel mode.)

Stereo Shots: You can use the Copy Splines script to copy a spline
from one eye to the other. You'll usually need to adjust the keyframes
somewhat for the other eye.

Animated Splines
Animated splines are created and manipulated in the camera viewport
only while the rotoscope control panel is open. At the top of the rotoscope
panel, a chart shows what the left and right mouse buttons do, depending on the
state of the Shift key.
Each spline has a center handle, a rotate/scale handle, and three or more
vertex control handles. Splines can be animated on and off over the duration of
the shot, using the stop-light enable button .
Vertex handles can be either corners or smooth. Double-click the vertex
handle to toggle the type.
Each handle can be animated over time, by adjusting the handle to the
desired location while SynthEyes is at the desired frame, setting a key at that
frame. The handle turns red whenever it is keyed on that frame. In between keys,
a control handle follows a linear path. The rotospline keys are shown on the
timebar, and the advance-to-key buttons apply to the spline keys.

To create an animated spline, turn on the magic wand tool, go to the
spline's first frame and left-click the spline's desired center point. Then click on a
series of points around the edge of the region to be rotoscoped. Too many points
will make later animation more time consuming. You can switch back and forth
between smooth and corner vertex points by double-clicking as you create. After
you create the last desired vertex, right click to exit the mode.

You can also turn on and use the create-rectangle and create-circle
spline creation modes, which allow you to drag out the respective shape.
After creating a spline, go to the last frame, and drag the control points to
reposition them on the edge. Where possible, adjust the spline center and
rotation/scale handle to avoid having to adjust each control point. Then go to the
middle of the shot, and readjust. Go one quarter of the way in, readjust. Go to the
three quarter mark, readjust. Continue in this fashion, subdividing each unkeyed
section until the spline is already in the correct location, which generally won't take
too long. This approach is much more effective than proceeding from beginning
to end.

You may find it helpful to create keys on all the control points whenever
you change any of them. This can make the spline animation more predictable in
some circumstances (or to suit your style). To do this, turn on the Key all CPs if
any checkbox on the roto panel.
Note that the splines don't have to be accurate. They are not being used
to matte the objects in and out of the shot, only to control blips which occur
relatively far apart.
Right-click a control point to remove a key for that frame. Shift-right-click
to remove the control point completely. Shift-left-click the curve to create a new
control point along the existing curve.
As you build up a collection of splines in the viewport, you may wish to
hide some or all of them using the Show this spline checkbox on the roto control
panel. The View menu contains an Only selected splines item; with it enabled,
only the spline selected in the roto panel's list will appear in the viewport.

From Tracker to Control Point


Suppose the shot is from a helicopter circling a highway interchange that you
need to track, and there is a large truck driving around. You want to put a
garbage matte around it before autotracking. If the helicopter is bouncing around
a bit and only loosely locked onto the interchange, you might have to add a fair
number of keys to the spline for the truck.
Alternatively, you could track the truck and import its path into the spline,
using the Import Tracker to CP mode of the rotoscoping panel.
To do this, begin by adding a supervised tracker for the truck. At the start
of the shot, create a rough spline around the truck, with its initial center point
located at the tracker. Turn on Import Tracker to CP, select the tracker, then click
on the center control point of the spline. The tracker's path will be imported to the
spline, and it will follow the truck through the shot. (You'll be asked a question
first; see below.) You can animate the outline of the spline as needed, and you're
done.
If the truck is a long 18-wheeler, and you've tracked the cab, say, the back
end of the truck may point in different directions in the shot, and the whole truck
may change in size as well.
You might simplify animating the truck's outline with the next wrinkle: track
something on the back end of the truck as well. Before animating the truck's
outline at all, import that second tracker's path onto the rotation/scale control
point. Now your spline will automatically swivel and expand to track the truck
outline.
You may still need to add some animation to the outline control points of
the truck for fine tuning. If there is an exact corner that can be tracked, you can
add a tracker for it, and import the tracker's path directly onto the spline's
individual control points.

When you import a tracker into a control point, you'll be asked whether
you want to import the relative motion, or the absolute position.
If you import the relative motion, the control point will stay in its exact
same position on the current frame. Use this mode when the control point is in its
desired location at present, and you want to use the motion of a nearby tracker to
guide the motion of the control point.
If you import the absolute position, the control point will jump to exactly the
location of the tracker, on the current frame and all others. Use this mode when
the tracker is already in the desired location for the control point.
The tracker import capability gives you a great deal of flexibility in setting up
your splines, with a little thought. Here are a few more details. The import takes
place when you click on the spline control point. Any subsequent changes to the
tracker are not live. If you need them, you should import the path again. The
importer creates spline keys only where the tracker is valid. So if the tracker is
occluded by an actor for a few frames, there will be no spline keys there, and the
spline's linear control-point interpolation will automatically fill the gap. Or, you can
add some more keys of your own. You'll also want to add some keys if your
object goes off the edge of the screen, to continue its motion.
Finally, the trackers you use to help animate the spline are not special.
You can use them to help solve the scene, if they will help (often they will not), or
you can delete them or change them into zero-weighted trackers (ZWTs) so that
they do not affect the camera solution. And you should turn off their Exportable
flag on the Coordinate System panel.

Writing Alpha Mattes from Roto Splines


If you have carefully constructed some roto splines, you can export them
to other compositing programs using the image preprocessor. Select an output
format that supports alpha channels, and turn on the alpha channel output. If the
source does not contain an alpha channel, the roto spline data will be rendered
as alpha instead. The green-screen key will be combined in as well, if one is
configured.
You can also output an RGB version of the roto information, even for
formats that don't support alpha channels, by turning off the RGB checkbox in
the save-sequence settings, then turning on the alpha channel output checkbox.
The data will automatically be converted from an alpha channel to RGB.
In a complex object-tracking setup, you can output a mask showing the
region for each object, by having that object active in the main user interface
when you render the output.

Using Alpha Mattes


SynthEyes can use an alpha channel painted into your shot to determine
which image areas correspond to which object or the camera, or whether or not
to include any given pixel in the planar tracking process (see the Planar Tracking
Manual for setup details).
The alpha channel is a fourth channel (in addition to Red, Green, and
Blue) for each pixel in your image. You will need an external program, typically a
compositor, to create such an alpha channel. Plus, you will need to store the shot
as sequenced DPX, OpenEXR, SGI, TARGA, or TIFF images, as these
formats accommodate an alpha channel.
Or, you can store the alpha channel in separate files, named
appropriately. See Separate Alpha Channels. In this case, the original files are
left with no alpha data, and the alpha is written in separate files, typically as gray-
scale PNG files.
Suppose you wish to have a camera track ignore a portion of the images
with a garbage matte. Create the matte with the alpha value of 255 (1.0, white)
for the areas to be tracked, and 0 (0.0, black) for the areas to be ignored. You'll
need to do this for every frame in the shot, which is why the features of a good
compositing program can be helpful. [Note: if a shot lacks an alpha channel,
SynthEyes creates a default channel that is black(0) for all hard black pixels
(R=G=B=0), and white(255) for all other pixels.]
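
That default rule is simple enough to write down; the following sketch (generic
Python/NumPy, shown only to make the rule concrete, not SynthEyes internals)
builds the same implied alpha from an 8-bit RGB frame:

    import numpy as np

    def default_alpha(rgb):
        # rgb is a height x width x 3 uint8 array; returns the implied alpha
        hard_black = np.all(rgb == 0, axis=-1)          # pixels with R = G = B = 0
        return np.where(hard_black, 0, 255).astype(np.uint8)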
You can make sure the alpha channel is correct in SynthEyes after you
open the shot by temporarily changing the Camera View Type on the Advanced
Feature Control dialog (launched from the Feature Panel) to Alpha, or using the
Alpha channel selection in the Image Preprocessing subsystem.
Next, on the Rotoscoping panel, delete the default full-size-rectangular
spline. This is very important, because otherwise this spline will assign all blips to
its designated object. The alpha channel is used only when a blip is not
contained in any spline!
Change the Shot Alpha Levels spinner to 2, because there are two
potential values: zero and one. This setting affects the shot (and consequently all
the objects and the camera attached to it).
Change the Object Alpha Value spinner to 255. Any blip in an area with
this alpha value will be assigned to the camera; other blips will be ignored. This
spinner sets the alpha value for the currently-active object only.
If you are tracking the camera and a moving object along with a garbage
matte simultaneously, you would create the alpha channel with three levels: 0,
garbage; 128, camera; 255, object. Note that this order isn't important, only
consistency.
After creating the matte, you would set the Shot Alpha Levels to 3. Then
switch to the Camera object on the Shot menu and set the Object Alpha Value to
128. Finally, switch to the moving object on the Shot menu, and set the Object
Alpha Value to 255.
Note that the Shot Alpha Levels setting controls only the tolerance
permitted in the alpha level when making an assignment, so that other nearby
alpha values that might be incidentally generated by your rotoscoping software
will still be assigned correctly. If you set Shot Alpha Levels to 17, the nominal
alpha values would be 0, 16, 32, ..., 255, and you could use just three of them if
that was all you needed.
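
One way to picture the tolerance (an illustrative Python sketch based on the
description above; SynthEyes's exact rounding may differ) is that the Shot Alpha
Levels setting defines a set of evenly spaced nominal values, and each blip's
measured alpha is snapped to the nearest one:

    levels = 17
    nominals = [round(255 * i / (levels - 1)) for i in range(levels)]
    # -> [0, 16, 32, ..., 239, 255]

    def nearest_level(alpha):
        # Snap a measured alpha value (0-255) to the closest nominal level
        return min(nominals, key=lambda nominal: abs(nominal - alpha))

    print(nearest_level(20))   # -> 16: a slightly-off matte value still assigns correctly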

Object Tracking
Here's how to do an object-tracking shot, using the example shot
lazysue.avi, which shows a revolving kitchen storage tray (called a
"Lazy Susan" in the U.S. for some reason). This shot provides a number of
educational opportunities. It can be tracked either automatically or under manual
supervision, so both will be described.
The basic point of object tracking is that the shot contains an object whose
motion is to be determined so that effects can be added. The camera might also
be moving; that motion might also be determined if possible, or the object's
motion can be determined with respect to the moving camera, without concern
for the camera's actual motion.
The object being tracked must exhibit perspective effects during the shot.
If the object occupies only a small portion of the image, this will be unlikely. A film
or HD source will help provide enough accuracy for perspective shifts to be
detected.
For object-tracking, all the features being tracked must remain rigidly
positioned with respect to one another. For example, if a head is to be tracked,
feature points must be selected that are away from the mouth or eyes, which
move with respect to one another. If the expression of a face is to be tracked for
character animation, see the section on Motion Capture.
Moving-object tracking is substantially simpler than motion capture, and
requires only a single shot and no special on-set preparation during shooting.

IMPORTANT: You need to have six or more different trackers
visible pretty much all of the time (technically it can get down to 3 for
short periods of time but with very low accuracy). Generally you should
plan for at least 8-10 to make allowance for short-term problems in
some of them. More trackers means more accuracy and less jitter.

Fine Print: no matter how many trackers you have, if they are all on
the same plane (the floor, a piece of paper, a tablet, etc), they only
count as four, and as the rule says, you must have six! If the object
being tracked is flat, you will have to use a known (flat) mesh as a
reference, entering known coordinates for each tracker.

Warning: if the object occupies only a portion of the image, it will not
supply enough perspective shift to permit the field of view of the lens to
be estimated accurately. You must either also do a camera track (even
a tripod solve will do), or you must determine a lens field of view by a
different method (a different shot, say), and enter it as a Known Field
of View.

Automatic Tracking
Open the lazysue.avi shot, using the default settings.
On the Solver panel, set the cameras solving mode to Disabled.
On the Shot menu, select Add Moving Object. You will see the object at the
origin as a diamond-shaped null object.
Switch to the Roto Masking panel, with the camera viewport selected.
Scrub through the shot to familiarize yourself with it, then rewind back to the
beginning.

Click the create-spline (magic wand) button on the Roto panel.


Click roughly in the center of the image to establish the center point.
Click counterclockwise about the moving region of the shot, inset somewhat
from the stationary portion of the cabinetry and inset from the bottom edge of
the tray. Right-click after the last point. [The shape is shown below.]

Click the create-spline (magic wand) button again to turn it off.


Double-click the vertices as necessary to change them to corners.
In the spline list on the Roto panel, select Spline1 and hit the delete key.
On the object setting underneath the spline list, change the object setting
from Garbage to Object01. Your screen should look something like this:

Go to the Feature Panel .


Change the Motion Profile to Gentle Motion.
Hit Blips all frames.
Hit Peel All.
Go to the end of the shot.
Verify that the five dots on the flat floor of the lazy susan have associated
trackers: a green diamond on them.
If you need to add a tracker to a tracking mark, turn on the Peel button on the
Feature panel. Scrub around to locate a long track on each untracked spot,
then click on the small blip to convert it to a tracker. Turn off Peel mode when
you are done.
Switch to the Coordinate System Panel.
Go to frame 65.
Change the tracker on the floor that is closest to the central axis to be the
origin.
Set the front center floor tracker to be a Lock Point, locked to 10,0,0.
Set the front right tracker to XY Plane (or XZ plane for a Y-Up axis mode).
Switch to the Solver Panel.
Make sure the Constrain checkbox is off.
Hit Go!.
Go to the After Tracking section, below.

Supervised Tracking
The shot is best tracked backwards: the trackers can start from the easiest
spots, and get tracked as long as possible into the more difficult portion at the
beginning of the shot. Tracking backwards is suggested for features that are
coming towards the camera, for example, shots from a vehicle.
Open the lazysue.avi shot, using the default settings.
On the Solver panel, set the cameras solving mode to Disabled.
On the Shot menu, select Add Moving Object. You will see the object at the
origin as a diamond-shaped null object.

On the Tracker panel, turn on Create. The trackers will be associated
with the moving object, not the camera.
Switch to the Camera viewport, to bring the image full frame.

Click the To End button on the play bar.
Click the Playback direction button, changing it from forward to backwards.
Create a tracker on one of the dots on the shelf. Decrease the tracker size to
approximately 0.015, and increase the horizontal search size to 0.03.
Create a tracker on each spot on the shelf. Track each as far as possible
back to the beginning of the shot. Use the tracker mini-view to scroll through
the frames and reposition as needed. As the spots go into the shadow, you
can continue to track them, using the tracker gain spinner. When a tracker
becomes untrackable, turn off Enable , and Lock the tracker . Right-click
the spinner to reset it for the next tracker.
Continue adding trackers from the end of the shot roughly as follows:

Begin tracking from the beginning, by rewinding, changing the playback
direction to forward, then adding additional trackers. You will need to add
these additional trackers to achieve coverage early in the shot, when the
primary region of interest is still blocked by the large storage container.

Switch to the graph editor in graph mode, sort by error mode. Use
the mouse to sweep through and select the different trackers. Or, select Sort
by error on the main View menu, and use the up and down arrows on the
keyboard to sequence through the trackers. Look for spikes in the tracker
velocity curves (solid red and green). Switch back to the camera view as
needed for remedial work.
Switch to the Coordinate System control panel and camera viewport, at the
end of the shot.
Select the tracker at center back on the surface of the shelf; change it to an
Origin lock.
Select the tracker at bottom left on the shelf; change it to a Lock Point with
coordinate X=10.
Select the tracker at front right; change it to an On XY Plane lock (or On XZ if
you use Y-axis up for Maya or Lightwave).
Switch to the Solver control Panel.
Switch to the Quad view; zoom back out on the Camera viewport.
Hit Go! After solving completes in a few seconds, hit OK.
Continue to the After Tracking section, below.

After Tracking
Switch to the 3-D Objects panel, with the Quad viewport layout selected.
Click the World button, changing it to Object.

Turn on the Magic Wand tool and select the Cone object.
In the top view, draw a cone in the top-right quadrant, just above and right of
the diamond-shaped object marker.
Hint: it can be easier to adjust the cone's position in the Perspective view,
locked to the camera, with View/Local coordinate handles turned on.
Scrub the timeline to see the inserted cone. In your animation package, a
small amount of camera-mapped stand-in geometry would be used to make
the large container occlude the inserted cone and reveal correctly as the shelf
spins.
Advanced techniques: use Coalesce Trackers and Clean Up Trackers.
Use the Hierarchy View to move trackers or objects back and forth from
camera to object if needed.

Difficult Situations
When an object occupies only a relatively small portion of the frame, there
are few trackers, and/or the object is moving so that trackers get out of view
often, object tracking can be difficult. You may wind up creating a situation where
the mathematically best solution does not correspond to reality, but to some
impossible tracker or camera configuration. It is an example of the old adage,
"Garbage In, Garbage Out" (please don't be offended, gentle reader).
Goosing the Solver
Small changes in the initial configuration may allow the solver to,
essentially randomly, pick a more favorable solution. Here are typical
countermeasures when you are not getting the right solve.
1. Turn on the Slow but sure checkbox on the Solver panel.
2. Enter a Rough camera motion selection on the solver panel. Select
the direction (Left/Right/Up/etc) that the camera would be moving
around the object if the object was fixed.
3. Try a variety of manually-selected Begin/End seed frames on the
solver panel.
4. Small changes in trackers, or adding additional trackers, especially
those at different depths, may also be helpful in obtaining the
desired solution.
Keep in mind that when you have problems getting the right solve,
typically the field of view value will not be accurate, because the shot
contains little, or contradictory, information. So you may want to supply
your best guess as a Known value.
Inverting Perspective
Sometimes, in a low-perspective object track, you may see a situation
where the object model and motion seem almost correct, except that some things
that should be far away appear too close, and the object rotates the wrong way. This is
called inverted perspective and is a result of low/no/conflicting perspective
information. If you cannot improve the trackers or convince the solver to
arbitrarily pick a different solution using the counter-measures in the previous
section, read on.
The Invert Perspective script on the Script menu will invert the object and
hopefully allow you to recover from this situation quickly. It flips the solved
trackers about their center of gravity, on the current frame, changes them to seed
trackers (this will mess up any coordinate system), and changes the solving
mode to From Seed Points. You can then re-solve the scene with this solution,
and hopefully get an updated, and better, path and object points. You should
then switch back to Refine mode for further tracking work!
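
The flip itself amounts to a point reflection of the solved tracker positions
through their centroid, something like this conceptual sketch (generic
Python/NumPy with placeholder coordinates; the script's real bookkeeping, such as
converting trackers to seeds and changing the solving mode, is not shown):

    import numpy as np

    # Solved 3-D tracker positions (one row per tracker); values are placeholders
    points = np.array([[1.0, 2.0, 0.5],
                       [3.0, 1.0, 0.2],
                       [2.0, 4.0, 1.0]])

    center = points.mean(axis=0)       # center of gravity of the trackers
    flipped = 2.0 * center - points    # reflect each point through that center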
Using a 3-D Model
You might also encounter situations where you have a 3-D model of the
object to be tracked. If SynthEyes knows the 3-D coordinates of each tracker, or
at least 6-10 of them, it will be much easier to get a successful 3-D track. You
can import the 3-D model into SynthEyes, then use the Perspective window's
Place mode to locate the seed point of each tracker on the mesh at the correct
location. Or, carefully position the mesh to match one of the frames in the shot,
select the group of trackers over top of it, and use the Track menu's Drop onto
mesh to place all the seed points onto the mesh at once.
Turn on the Seed checkbox on for each (if necessary, usually done
automatically), and switch to the From Seed Points solving mode on the solver
panel.
If you have determined the 3-D coordinates of your tracker externally
(such as from a survey or animation package), construct a small text file
containing the x, y, and z coordinates, followed by the tracker name. Use
File/Import/Tracker Locations to set these coordinates as the seed locations,
then use the From Seed Points solver option. If the tracker named doesn't exist,
it will be created (using the defaults from the Tracker Panel, if open), so you can
import your particular points first, and track them second, if desired, though
tracking first is usually easier.
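
For example, the import file might look like the lines below. The names and
numbers are placeholders, and the exact column separators (spaces versus commas)
may vary, so check the result against your own data after importing.

    10.0   0.0   0.0   Tracker1
     0.0   5.0   0.0   Tracker7
     2.5   3.0   1.2   DoorCorner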
The seed points will help SynthEyes select the desired (though
suboptimal) starting configuration. In extreme situations, you may want to lock
the trackers to these coordinates, which can be achieved easily by setting all the
imported trackers to Lock Points on the Coordinate System panel. To make this
easy, all the affected trackers are selected after an Import/Tracker Locations
operation.
Overall Distance Constraints
When an object track has little perspective, it can jitter large amounts in
depth, i.e., the distance from the camera to the object. SynthEyes allows you to
create an overall distance constraint to control and de-jitter that distance using
the solver locking panel. You should track and solve the shot, then set up a
distance constraint using the computed distance as a guide: the graph editor and
Get 1f buttons on the solver locking panel make this easy to see and control.
Then when you Refine the solution, the distance will match what you have set
up, i.e., a nice smooth curve. Since such shots contain little perspective to start
with, extreme precision in depth is not required.
When you use an overall distance constraint, the distance is measured
from the camera to the origin of the object (or for plain camera solves, from the
camera to the overall coordinate system origin). For moving objects tracks with
distance constraints, you should pay some attention to the location of the origin.
You must be certain to set up a coordinate system for the object!
You should set up the origin either at the natural center of rotation of the
object (for example, at the neck of a head), or at a point located within the point
cloud of the object. If the origin is well outside the object, any small jitter in the
orientation of the object will have unwarranted effects on its position as well.

Joint Camera and Object Tracking
If both the camera and an object are moving in a shot, you can track each
of them, solve them simultaneously, and produce a scene with the camera and
object moving around in the 3-D scene. With high-quality source, several objects
might be tracked simultaneously with the camera. First, you must set up
rotoscoping or an alpha channel to distinguish the object from the background.
Or, perform supervised tracking on both. Either way, youll wind up with one set
of trackers for the object, and a different set for the background (camera).

Note: If the camera is not moving, that's fine, it's a straight object track
that will track the object relative to the camera. If the camera is on a
tripod, panning and tilting, that's also fine, set the camera to tripod
mode. This discussion here about scaling both camera and object
applies when both camera and object translate.

You must set up a complete set of constraints (position locks, orientation,
and distance/scale) for both the camera and object (a set for each object, if
there are several). Frequently, users ask why a second set of constraints for the
object is required, when it seems that the camera (background) constraints
should be enough.
However, recall a common film-making technique: shooting an actor, who
is close to the camera, in front of a set that is much further away. Presto, a giant
among mere mortals! Or, in reverse, a sequel featuring yet another group of
shrunken relatives, name the variety. The reason this works is that it is
impossible to visually tell the difference between a close-up small object moving
around slightly, and a larger object moving around dramatically, a greater
distance away. This is true for a person or a machine, or by any mathematical
means.
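
A little pinhole-camera arithmetic (generic Python, not SynthEyes code) makes the
ambiguity concrete: scaling an object's size, distance, and motion by the same
factor produces exactly the same image coordinates, so the images alone cannot
reveal the overall scale.

    def project(x, y, z, focal=1.0):
        # Ideal pinhole projection of a camera-space point to image coordinates
        return focal * x / z, focal * y / z

    near_small = project(0.1, 0.2, 2.0)     # small object, close to the camera
    far_large  = project(1.0, 2.0, 20.0)    # 10x larger object, 10x farther away

    print(near_small, far_large)            # identical projected positions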
This applies independently to the background of a set, and to each object
moving around in the set. Each might be large and far, or close and small. Each
one requires its own distance constraint, one way or another.
The object's position and orientation constraints are necessary for a
different reason: they define the object's local coordinate system. When you
construct a mesh in your favorite animation package, you can move it around
with respect to a local center point, about which the model will rotate when you
later begin to animate it. In SynthEyes, the object's coordinate constraints define
this local coordinate system.
Despite the veracity of the above, there are ways that the relative
positioning of objects moving around in a scene can be discerned: shadows of an
object, improper actor sightlines, occasions where a moving object comes in
contact with the background set, or when the moving object temporarily stops.
These are assumptions that can be intellectually deduced by the audience,
though the images do not require it. Indeed, these assumptions are
systematically violated by savvy filmmakers for cinematic effect.
However, SynthEyes is neither smart/stupid enough to make assumptions,
nor to know when they have been violated. Consequently, it must be instructed
how to align and size the scenes in the most useful fashion.
The alignment of the camera and object coordinate systems can be
determined independently, using the usual kinds of setups for each.
The relative sizing for camera and object must be considered more
carefully when the two must interact, for example, to cast shadows from the
object onto a stationary object.
When both camera and object move and must be tracked, it is a good idea
to take on-set measurements between trackable points on the object and
background. These measurements can be used as distance constraints to obtain
the correct relative scaling.
If you do not have both scales, you will need to fix either the camera or
object scale, then systematically vary the other scale until the relationship
between the two looks correct.
SynthEyes 12.08 or later: If the shot is suitable, you can use the Moving-
Obj Path or Moving-Obj Position Phases to set the relative scale of the camera
and object. In both cases, the camera must be moving, not locked or on a tripod.
Use the Moving-Obj Path phase when the object comes to a stop for some
portion of the shot, or less desirably, has a section where it travels in an exactly
straight line. Use the Moving-Obj Position phase when trackers on the moving
object come very close to, or become aligned with, trackers on the camera.
For further details, see the phase overview and the Phase Reference
manual.

Survey Shots
We define a survey shot to be one that consists of a number of fairly
unordered images, typically a series of images taken by a photographer
wandering around a set with a digital still camera, instead of a film/movie camera.
These can also be referred to as witness cameras. Such shots can provide a
variety of views not available through the principal photography, and high-
resolution still cameras with excellent prime lenses can be used.
Unlike a normal shot, which has a lot of continuity from one image to the
next, the stills are often from substantially different vantage points and might not
be organized in a specific linear sequence.

Note: what we're describing here is different than having a survey for a
set, which may consist of anything from a few tape measurements to a
Lidar point cloud. SynthEyes can work with those, but that's not the
subject of this section!

Once you've tracked a survey shot, you'll usually use it as part of a multi-
shot setup, or in conjunction with measured 3-D data.

How to Shoot
Here are our recommendations for survey photography. These will help
you produce the best results in the least time, though they are not requirements.
They can all be violated.
Shoot all images from a single camera
Use a low-distortion prime lens
If the camera has a zoom lens, do not use it at its widest setting.
Reduce the field of view by 20% or so to reduce distortion
Do not change the zoom setting from one image to the next
All images should be the same resolution
Shoot the images in a defined order to save time tracking (ex: take
an image starting at far left, then 2 steps to the right for the next
image, 2 more steps for the next, etc)

Survey Shot Setup


To create a survey shot, click Shot/Add Survey Shot. You will be prompted
to set up a filename to save the IFL (image file list) file for the shot. Do not
overwrite one of your images with the IFL! You should usually place it in the
same folder as the images, unless that folder is read-only.
The Survey Shot IFL Editor will open. Click the Add button, and select
some or all of the images to be placed in this survey. The images do not have to
be the same size or type (although it is a good idea!).

All images will be padded up to the width of the widest and height of the
highest. In the future we may re-scale them instead. Either way, it is necessary!

Important: All images in a survey must have square pixels.

The files from any Add are inserted after the currently-selected image in
the image list. They are inserted in a sorted order determined by
Windows/Linux/OSX.
Since it will be useful to have the images in at least a rough order that
makes sense, you can use Add files as many times as needed to explicitly set up
the order. Or, you can use multiple Adds if the images are coming from different
folders.
You can also use the Move Up and Move Down buttons to change the
order of the images into something that makes sense.
After you click OK on the IFL editor, you will be presented with the
standard Shot Settings panel. You should not have to change anything.
Specifically: do not change the image or pixel aspect. They are set automatically,
based on the required padding. Pixel aspect must always be 1.0 for survey
images. Do not turn on rolling shutter compensation, that will make a terrible
mess!
Once the survey has been created, you can edit the list again via
Shot/Edit Survey Shot. While editing, you'll see the images update in the main
camera view as you move images around, and clicking an image in the list will
show it in the camera view.
You can add additional images to a survey shot at a later time, but....

Warning: Once you have started tracking, always add new images at
the end of the shot, otherwise you will get the tracking data out of sync
with the images.

Warning 2: The image file list is NOT restored if you later use Undo or
Redo in the main SynthEyes interface. It is not part of the SynthEyes
file. You can cancel out of the Survey Shot IFL editor and the initial IFL
will be restored, but using Undo later will not restore the IFL.

Tracking
Survey shots must always be supervised-tracked, since the images
generally jump around quite a lot. You will place each tracker in the right location
on each relevant frame. SynthEyes provides the workflow to do that efficiently.
To set up for tracking a survey shot:
Go to the Track room
Turn on Track/Lock z-drop on
Turn on Track/Hand-held: Sticky

Do not turn on the Create tool on the Tracker panel!


You will add trackers one at a time, locating that tracker on each frame in
the survey before going on to the next tracker.
For each image feature to be tracked:
Go to a frame that contains it
Hold down the C key and click and drag in the camera view to
position the tracker
Use the 's', 'd', ',', or '.' keys to move to another frame containing
that image feature
Click and drag again in the camera view to position the tracker
Repeat for each frame containing the feature (tracker) being
created
To remove the tracker from a specific frame, click (off) the Enable
stoplight on the tracker control panel
Notice that you do not have to add frames in any particular order, and you
do not have to toggle the enable track on and off repeatedly. Each time you place
the tracker, it is enabled for that frame only. Similarly, turning the tracker enable
off affects only that particular frame. This behavior is specific to Survey shots,
because the frames are largely unordered.

Important: Once you have added trackers, any additional images must
be added at the end of the shot.

Solving
In most cases, you must solve survey shots as a Zooming lens. This is a
consequence of the different image sizes, lenses, etc. SynthEyes configures
survey shots as Zooms automatically.
If all images are exactly the same size, and were shot on the same
physical camera with the same prime lens, or you are 100% positive that the
zoom lens was never zoomed because you shot it, only then can you solve the
shot as a Fixed, Unknown lens.
Survey shots are processed slightly differently during solving, because the
images are largely unordered.
If the solver cannot locate initial seed frames, you can set them manually
on the Solver panel. In this case, or if the solver is using an incorrect solution,
you can also set the direction hint as well (second drop-down on the solve
panel). In any case, you are looking for two frames with many trackers in
common, that look at the same trackers from about 30 degrees apart. The
direction hint is then the direction of motion from the first camera position to the
second.
The solver decides what additional frames to solve, once it gets started,
without regard to their order.


You may need to think a bit more about your trackers if the solver does
not solve all the frames: that means that you do not have sufficient overlap
between any solved trackers and the trackers on unsolved frames. Whereas
normally the tracker count channel in the Graph Editor helps to identify trouble
spots, the unordered nature of surveys makes that information uninformative.
In that situation, you may find it helpful to use Script/Select by Type to
select all the solved trackers, examine the bar graph in the graph editor, and see
if you can locate the solved trackers on some of the unsolved frames as well.

Oops! Not a survey.


Sometimes you may need to change a shot from a survey to a normal
shot, or vice versa. You can only change a survey to a normal shot if all the
images are the same size (if not, but you still need a regular shot, resize the
images in your image editor app or write them out from the SynthEyes image
editor).
To change a shot from a normal shot to a survey, add a survey shot with
some dummy images (do not overwrite the IFL!), then do a Shot/Change Shot
Images and select the original IFL. Select all the trackers, and re-parent them
using the Coordinate System panel to the new survey shot.
To change a survey shot to a normal shot, create a new shot, selecting the IFL of the survey shot. The images must all be the same size. Select all the trackers, and re-parent them using the Coordinate System panel to the new (normal) shot.

Multi-Shot Tracking
SynthEyes includes the powerful capability of allowing multiple shots to be
loaded simultaneously, tracked, linked together, and solved jointly to find the best
tracker, camera, and (if present) object positions. With this capability, you can
use an easily-trackable overview shot to nail down basic locations for trackable
features, then track a real shot with a narrow field of view, few trackable features,
or other complications, using the first shot as a guide. Or, you might use a left
and right camera shot to track a shot-in-3-D feature. If you don't mind some large
scene files, you can load all the shots from a given set into a single scene file,
and track them together to a common set of points, so that each shot can share
the same common 3-D geometry for the set.

Note: although stereo is a form of multi-shot tracking, multi-shot tracking is not necessarily stereo. Stereo shots require that both eyes
be synchronized, have the same image size, aspect, and length (and
of course there must be only two of them). When a shot is stereo,
special additional features and tools are available that are not available
to general multi-shot setups.

In this section, we'll demonstrate how to use a collection of digital stills as a road-map for a difficult-to-track shot: in this case, a tripod shot for which no 3-D
recovery would otherwise be possible. A scenario such as this requires
supervised tracking, because of the scatter-shot nature of the stills. The tripod
shot could be automatically tracked, but there's not much point to that because you must already perform supervised tracking to match the stills, and there's not much gained by adding a lot more trackers to a tripod shot. It will take around 2 hours to perform this example, which is intentionally complex in order to illustrate a more involved scenario.
The required files for this example can be found at
https://www.ssontech.com/download.html: both land2dv.avi and DCP_103x.zip
are required. The zip file contains a series of digital stills, and should be
unpacked into the same working folder as the AVI. You can also download
multix.zip, which contains the .sni scene files for reference.

Prerequisites: You need to be able to do supervised tracking, create survey shots, and handle coordinate system setup for this description; it does not contain a beginner-level description.

Start with the digital stills, which are 9 pictures taken with a digital still
camera, each 2160 by 1440. Start SynthEyes and do a File/Add Survey Shot.
Add the DCP_####.JPG images.


Create trackers for each of the balls: six at the top of the poles, six near
ground level on top of the cones. Create each tracker, and track it through the
entire (nine-frame) shot using the survey-shot workflow. You can use control-
drag to make final positioning easier on the high-resolution still. Create the
trackers in a consistent order, for example, from back left to front left, then back
right to front right. After completing each track, Lock the tracker.
The manual tracking stage will take less than an hour. The resulting file is
available as multi1.sni.
Set up a coordinate system using the ground-level (cone) trackers. Set the
front-left tracker as the Origin, the back-left tracker as a Lock Point at
X=0,Y=50,Z=0, and the front-right tracker as an XY Plane tracker.
You can solve for this shot now: switch to the Solver panel and hit Go!
You should obtain a satisfactory solution for the ball locations, and a rather
erratic and spaced out camera path, since the camera was walked from place to
place. (multi2.sni)
It is time for the second shot. On the Shot menu, select Add Shot (or
File/Import/Shot). Select the land2dv.avi shot. Set Interlacing to No; the shot
was taken with a Canon Optura Pi in progressive scan mode.
Bring the camera view full-screen, go to the tracker panel, and begin
tracking the same ball positions in this shot with bright-spot trackers. Set the Key
spinner to 8, as the exposure ramps substantially during the shot. The balls
provide low contrast, so some trackers are easiest to control from within the
tracker view window on the tracker panel. The back-right ground-level ball is
occluded by the front-left above-ground ball, so you do not have to track the
back-right ball. It will be easiest to create the trackers in the same order as in the
first shot. (multi3.sni)
Next, create links between the two sets of trackers, to tell SynthEyes what
trackers were tracking the same feature. You will need a bare minimum of six (6)
links between the shots. Switch to the coordinate system panel, and the Quad
view. Move far enough into the shot that all trackers are in-frame.


Camera/Camera or Camera/Viewport Matching


To assign links, select a tracker from the AVI in the camera view. Go to
the top view and zoom in to find the matching 3-D point from the first shot, and
ALT-click it (Mac: Command-click). Select the next tracker in the camera view,
and ALT-click the corresponding point in the Top view; repeat until all are
assigned. If you created the trackers consistently, you can sequence through
them in order.
You can do this with a 3D view and camera view, or two camera views.
The disadvantage of the two camera views is that both are linked to the same
timebar. The perspective view affords a way around that...

Camera/Perspective View Matching


You can display both shots simultaneously using a perspective view and
the camera view. Use the Camera & Perspective viewport layout, or modify a
Quad layout to replace one of the viewports with a perspective window. Make the
reference shot active. On the Perspective views right-click menu, select Lock to
Current Camera. In the right-click View menu, select Freeze on this frame.
(You can adjust which frame it is frozen on using the A, s, d, F, period, or comma
keys within the perspective view.)
Change the user interface to make the main shot active using the toolbar
button or shot menu.
You can now select trackers in the camera view, then, with the Coordinate
System panel open, ALT-click the corresponding tracker in the reference view.
See the section on Stereo Supervised Tracking for information on color coding of
linked trackers using both the Camera and Perspective views.

Match By Name
Another approach is to give each tracker a meaningful name. In this case,
clicking the Target Point button will be helpful: it brings up a list of trackers to
choose from.
A more subtle approach is to have matching names, then use the
Track/Cross Link By Name menu item. Having truly identical names makes
things confusing, so the cross link command ignores the first character of each
name. You can then name the trackers lWindowBL and rWindowBL and have
them automatically linked. After setting up a number of matching trackers, select
the trackers on the video clip, and select the Cross Link By Name menu item.
Links will be created from the selected trackers to the matching trackers on the
reference shot.
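
To make the naming rule concrete, here is a minimal sketch (ordinary Python, not SynthEyes' internal code or its scripting API) of pairing trackers whose names match once the first character is ignored; the tracker names are hypothetical.

    # Sketch of the "ignore the first character" matching rule described above.
    # Hypothetical tracker names; this is not how SynthEyes itself is implemented.
    def pair_by_name(selected_trackers, reference_trackers):
        """Return (selected, reference) name pairs whose names match after
        dropping the first character, e.g. lWindowBL <-> rWindowBL."""
        by_suffix = {name[1:]: name for name in reference_trackers}
        return [(name, by_suffix[name[1:]])
                for name in selected_trackers if name[1:] in by_suffix]

    print(pair_by_name(["lWindowBL", "lDoorTR"], ["rWindowBL", "rDoorTR", "rPostA"]))
    # [('lWindowBL', 'rWindowBL'), ('lDoorTR', 'rDoorTR')]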

Note! Cross Link By Name will not link to a camera/object that has its
solve mode set to Disabled, to allow you to control possible link targets
if needed. You must temporarily change the solve mode to link to a
disabled object.


Details on links: a shot with links should have links to only a single other
shot, which should not have any links to other shots. You can have several shots
link to a single reference.

Ready to Solve
After completing the links, switch to the Solver panel. Change the solver
mode to Indirect, because this camera's solution will be based on the solution initially obtained from the first shot (multi4.sni). Make sure Constrain is off at this
time.
Hit Go! SynthEyes will solve the two shots jointly, that is, find the point
positions that match both shots best. Each tracker will still have its own position;
trackers linked together will be very close to one another.
In the example, you should be able to see that the second (tripod) shot
was taken from roughly the location of the second still. Even if the positions were
identical, differences between cameras and the exact features being tracked will
result in imperfect matches. However, the pixel positions will match satisfactorily
for effect insertion. The final result is multi5.sni.

Stereoscopic Movies
3-D movies have had a long and intermittent history, but recently they
have made a comeback as the technology improves, with polarized or frame-
sequential projectors, and better understanding and control over convergence to
reduce eyestrain. 3-D movies may be a major selling point to bring larger and
more lucrative audiences back to theaters.
Filmmakers would like to use the entire arsenal of digital techniques to
help produce more compelling and contemporary films. SynthEyes can and has
been used to handle stereo shots using a variety of techniques based on its
single-camera workflow, but there are now extensive specialized features to
support stereo filmmaking.
SynthEyes is designed to help you make stereo movies (Pro version, not
Intro). The stereo capabilities range from tracking and cross-camera linking to
solving, plus a variety of user-interface tweaks to simplify handling stereo shots.
A special Stereo Geometry panel allows constraints to be applied between the
cameras to achieve specific relative positions and produce smoother results.
Additional stereo capabilities will be added as the stereo market develops.

STOP! Stereo filmmaking requires a wide variety of techniques from throughout SynthEyes: the image preprocessor, tracking, solving,
coordinate system setup, etc. The material here builds upon that
earlier material; it is not repeated here because it would be exactly
that, a repetition. If this is the first part of the manual you are reading,
expect to need to read the rest of it.

You will need to know a fair amount about 3-D movie-making to be able to
produce watchable 3-D movies. 3-D is a bleeding edge field and you should
allow lots of time for experimentation. SynthEyes technical support is necessarily
limited to SynthEyes; please consult other training resources for general
stereoscopic movie theory and workflow issues in other applications.
SynthEyes's perspective view can display anaglyph (red/cyan is a good
combination) or interlaced views from both cameras simultaneously, controlled
by the right-click menu's View/Stereo Display item and the Scene Settings panel.
You can select either normal color-preserving or gray-scale versions of the
anaglyph display. When using an interlaced display, you should probably have
more than one display and float the interlaced perspective to that monitor. Only
the actual perspective view will be interlaced, not the entire user interface.

What's Different About Stereo Match-Moving?


Match-moving relies on having different views of the same scene in order to
determine the 3-D location of everything. With a single camera, that means that
the camera must physically move (translate) to produce different views.


24/7 Perspective
With stereo, there are two different views all the time, and even a single
frame from each camera is enough to produce a 3-D solve. At least in theory,
you never have to worry about tripod shots that do not produce 3-D. Every shot
can produce 3-D. Every stereo shot can also be used in a motion-capture setup
to produce a separate path for even a single moving feature. That's clearly good
news.
But before you get too excited about that, recall that in a stereo camera
rig, the cameras are usually under 10 cm apart. Compare that to a dolly shot or
crane shot with several meters of motion to produce perspective. And each of the
hundreds of frames in a typical moving-camera shot produces additional data to
help produce a more accurate solution.
So, even though you can produce 3-D from a very short stereo shot, the
information will not be very accurate (that's the math, not a software issue), and
longer shots with a moving camera will always help produce better-quality 3-D
data.
On a short shot with no camera rig translation (with the rig on a tripod),
you can get 3-D solves for features near to the camera(s). Features that are far
from the cameras must still be configured as Far to SynthEyes, meaning that no
3-D depth can be determined. Similarly, for motion-capture points, accuracy in
depth will degrade as the points move away from the camera. The exact
definition of "far" depends on the resolution and field of view of the cameras; you might consider something "far" if it is several hundred times the inter-ocular
distance from the camera.
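
As a rough feel for the numbers, here is a back-of-the-envelope sketch (the multiplier is just the "several hundred times" rule of thumb above, not a SynthEyes setting):

    # Rough "far" threshold from the rule of thumb above; values are illustrative.
    iod_mm = 65.0        # a typical inter-ocular distance
    multiple = 300.0     # "several hundred times" the inter-ocular distance
    far_threshold_m = iod_mm * multiple / 1000.0
    print(f"Features beyond roughly {far_threshold_m:.0f} m behave as Far")   # ~20 m
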
Easier Sizing
If we know the inter-ocular distance (and we always should have a
measurement for the beginning or end of the shot), then we know the coordinate
system sizing immediately. There is no need for distance measurements from the
set, and no problem with consistency between shots.
That makes coordinate system setup much simpler. On a stereo shot,
when an inter-ocular distance is set up, the *3 coordinate system tool generates
a somewhat different set of constraints, one that aligns the axes, but does not
impose its own size, allowing the inter-ocular distance to have effect.
Keep in mind that the sizing is only as good as the measurement. If the
measurement is 68 +/- 1 mm, that is over 1% uncertainty. If you have some other
measurement that you expect to come out at 6000 mm and it comes out at 6055,
you shouldn't be at all surprised. Some scenes with little perspective will not vary
much depending on inter-ocular distance, so the inter-ocular distance may not size the scene accurately.
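
The arithmetic behind that caution can be sketched directly (illustrative numbers only, echoing the 68 +/- 1 mm example above; this is not a SynthEyes calculation):

    # How IOD measurement uncertainty propagates to overall scene scale.
    iod_mm, iod_err_mm = 68.0, 1.0
    rel_err = iod_err_mm / iod_mm              # about 1.5% relative uncertainty
    expected_mm = 6000.0                       # some other on-set distance
    spread_mm = expected_mm * rel_err
    print(f"relative uncertainty: {rel_err:.1%}")
    print(f"{expected_mm:.0f} mm could plausibly come out anywhere in "
          f"{expected_mm - spread_mm:.0f}-{expected_mm + spread_mm:.0f} mm")
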
If you have a crucial sizing requirement, you should use a direct scene
measurement; it will be more accurate. (In that case, switch to a Fixed inter-
ocular distance, instead of Known.)


Basics of 3-D Camera Rigs


Different hardware and software manufacturers have their own
terminologies, technologies, and viewpoints; here's ours.
To make stereo movies, we need two cameras. They get mounted on
some sort of a rig, which holds the two cameras in place at some specific
relationship to one another. You can then mount that rig on a tripod, a dolly, a
crane, whatever, that carries the two cameras around as a unit.
The two cameras must be matched to one another in several ways in
order to be usable:
Same overall image aspect ratio
Same field of view
Same frame rate and synchronization
Same lens distortion (typically none)
Same overall orientation (geometric alignment)
Matching color and brightness grading
Most of these should be fairly obvious. Many can be manipulated in post,
and SynthEyes is designed to help you achieve the required matching, even from
very low-tech rigs.
Even the simplest rig will require matching work in post-production. It is
not possible to bolt two cameras together, even with any kind of mechanical
alignment feature, and have the cameras be optically aligned. Cameras are not
manufactured to be repeatable in this way; the circuit board and chip-in-socket
alignment within the camera is not sufficiently accurate or repeatable between
cameras to be directly useful.
Synchronization
The cameras should be synchronized so that they take pictures at exactly
the same instant. Otherwise, when you do the tracking and solving, you will by
definition have some very subtle geometric distortions and errors: basically you
can't triangulate because the subject is at two different locations, corresponding
to each different time.
To make life interesting, if the film will be projected using a frame-
sequential projector (or LCD glasses), then the two cameras should be
synchronized but 180 degrees out of phase. But that will mean you cannot track exactly; it is the worst possible synchronization error. Instead, for provable
accuracy you should film and track at twice the final rate (eg 48 or 60 fps
progressive), then have the projectors show only every other frame from each
final image stream.
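
A small sketch of that double-rate idea, purely illustrative (the capture rate and frame numbering here are assumptions, not SynthEyes settings): both eyes are shot synchronized at twice the delivery rate, and each projected stream keeps only every other captured frame, offset by one, so each projected image was exposed at the instant the projector actually shows it.

    # Sketch: 48 fps synchronized capture delivered to a frame-sequential
    # projector alternating left/right (24 fps per eye on screen).
    capture_frames = list(range(12))        # capture frame indices, both eyes
    left_delivery  = capture_frames[0::2]   # left eye keeps even capture frames
    right_delivery = capture_frames[1::2]   # right eye keeps odd capture frames
    print("left: ", left_delivery)          # [0, 2, 4, 6, 8, 10]
    print("right:", right_delivery)         # [1, 3, 5, 7, 9, 11]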


If circumstances warrant that you shoot unsynchronized or anti-synchronized footage, you must be aware that you (and the audience) will be
subject to motion-related problems.
CMOS cameras are also subject to the Rolling Shutter problem, which
affects monocular projects as well as stereoscopic ones. The rolling shutter
problem will also result in geometric errors, depending on the amount of motion
in the imagery. To cover a common misconception, this problem is not reduced
by a short shutter time. If at all possible, use synchronized CCD or film cameras.
Note that when CMOS cameras are used in common mirror-based stereo rigs,
the rolling shutter effect will go in the opposite direction (bottom to top) for the
mirrored image: if you have to mirror the image vertically, you have to correct the
rolling shutter with a negative value as well.
One-Toe vs. Two-Toe Camera Rigs
Ideally, a camera rig has two cameras next to each other, perfectly
aligned. If both camera viewing axes are perfectly parallel, they are said to be
converged at infinity, and this is a particularly simple case for manipulation.
Usually, one or both cameras toe in slightly to converge at some point closer to
the camera, just as our eyes converge to follow an approaching object.
Mechanically, this may be accomplished directly, or by moving a mirror. We refer
to the total inwards angle of the cameras as the vergence angle.
It might seem that there is no difference between one camera toeing in or
two, but there is. Consider the line between the cameras. With both cameras
properly aligned and converged at infinity, the viewing direction is precisely
perpendicular to the line between the cameras. If one camera toes in, the other
remains at right angles to the line between them. If both cameras toe in, they
both toe in an equal amount, with respect to the line between them.
If you consider an object approaching the rig along the centerline from
infinity, the two-toe rig remains stationary with both cameras toeing in. The one-
toe rig moves backwards and rotates slightly, in order to keep one camera at
right angles to the line between the camera centers.
SynthEyes works with either kind of rig. Though the one-toe rigs seem a
little unnatural (effectively they make the audience turn their heads), the motions
are very small and not really an issue for people, except for those who are trying
to do their tracking to sub-pixel accuracy! The one-toe rigs are mechanically
simpler and seem more likely to actually produce the motion they are supposed
to (are the two-toe rigs really moving at exactly matching angles? Are the axes
parallel? Maybe, maybe not!).
From Where to Where?
The inter-ocular distance is a very important number in stereo movie-
making: it is the distance between the eyes, or the cameras, with a typical value
around 65 mm. It is frequently manipulated by filmmakers, however; more on that
in a minute.


Although you can measure the distance between your buddy's eyes within
a few millimeters pretty easily, when we start talking about cameras it is a little
less obvious where to measure.
It turns out that this question is much more significant than you might think
as soon as you allow the camera vergence to change: if the cameras are tilted
inwards towards each other, the point at which you measure will have a dramatic
effect. Depending on where you measure, the distance will change more or less
or not at all.
The proper point to consider is what we call the nodal point, as used for
tripod mode shots and panoramic photos. It's not technically a nodal point for
opticians. It is the center of the camera aperture, as seen from the outside of the
camera. See this article on the pivot point for panoramic photography for more
details.
The inter-ocular distance (IOD) is the distance between the nodal points of
the cameras.
Dynamic Rigs
Though the simplest rigs bolt the two cameras together at a fixed location,
more sophisticated rigs allow the cameras to move during a shot.
The simplest and most useful motion may not be what you think: it is to
change the inter-ocular distance on the fly. This preserves the proper 3-D
sensation, while avoiding extreme vergence angles that make it difficult to keep
everything on-screen in the movie theater.
The more complex effect is to change the vergence angle on the fly. This
must be done with extreme caution: unless the rig is very carefully built, changing
the vergence angle may also change the inter-ocular distance, or even change the direction between them as well. If a rig is to change the vergence angle, it
must be constructed to locate the camera nodal point exactly at the center of the
vergence angles rotation.
A rig that changes only the inter-ocular distance does not have to be
calibrated as carefully. A changing IOD should always be exactly parallel to the
line between the camera nodal points, which in turn means that on a one-toe
camera, the non-moving camera must be perpendicular to the translation axis, or
a two-toe camera must have equal toe-in angles relative to the translation axis.
The penalty for a rig that does not maintain a well-defined relationship
between the cameras is simple: it must be treated as two separate cameras. The
most dangerous shots and rigs are those with changing vergence, either with
mirrors or directly, where the center of rotation does not exactly match the nodal
point. Unless you have calibrated, it will be wrong. You will be in the same boat
as people who shoot green-screen with no tracking markers, and that boat has a hole...


Camera/Camera Stereo Geometry Parameters


SynthEyes permits you to create constraints that limit the relative position
between the two cameras in sophisticated ways, so that you can ask for specific
relationships between the cameras, and eliminate unnecessary noise-induced
chatter in the position.
If you work in an animation package and have a child object attached to a
parent, you will have six numbers to adjust: 3 position numbers (X, Y, and Z),
and 3 angles (for example Pan, Tilt, and Roll, or Roll, Pitch, and Yaw). The same
six numbers are used for the basic position and orientation of any object.
Those particular six numbers are not convenient for describing the
relationship between the two cameras in a stereo pair, however! In the real world,
there is only one real position measurement that can be made accurately, the
inter-ocular distance, and it controls the scaling of everything.
Accordingly, SynthEyes uses spherical coordinates, which have only a single distance measurement, to describe the relationship between the cameras.
Of the two cameras, we'll refer to one as the dominant camera (the one
we want to think about the most, typically the right), and the other as the
secondary camera. The camera parameters describe the relationship of the
secondary (child) camera to the dominant (parent) camera. Which camera is
dominant is controlled on the Stereo Geometry panel. In each case, when we talk
about the position of a camera, we are talking about the position of its nodal point
(inside the front of the lens), not of the base of the camera, which doesn't matter.
You can think about the stereo parameters in the coordinate space of the
dominant camera. The dominant camera has a ground plane consisting of its
side vector, which flies out the right side from the nodal point, and its look
vector, which flies forward from the nodal point towards what it is looking at. The
camera also has an up vector, which points in the direction of the top of the
camera image. All of these are relative to the camera body, so if you turn the
camera upside down, the camera's up vector is now pointing down!
Here are the camera parameters. They have been chosen to be as
human-friendly as possible. Most of the time, you should be concerned mainly
with the Distance and Vergence; SynthEyes will tell you what the other values
are and they shouldn't be messed with much.
Distance. The inter-ocular distance between the cameras. Note that this
value is measured in the same units as the main 3-D workspace units. So if you
want an overall scene to be measured in feet, the inter-ocular distance should be
measured in feet as well. Centimeters is a reasonable overall choice.
Direction. This is the direction (angle) towards the nodal point of the
secondary camera from the dominant camera, in the ground plane. If the
secondary camera is directly next to the dominant camera, in the most usual
configuration, the direction value is zero. The Direction angle increases if the secondary camera moves forward, so that at 90 degrees, the secondary camera
is in front of the primary camera (ignoring relative elevation). See additional
considerations in Two-Toe Revisited, below.
Elevation. This is the elevation angle (above the dominant camera's ground plane). At zero, the secondary camera is on the dominant camera's ground plane. At 90 degrees, the secondary camera is above the dominant camera, on its up axis.
Vergence. This is the total toe-in angle by which the two cameras point in
towards each other. At zero, the look directions of the cameras are parallel; they
are converged at infinity. At 90 degrees, the look directions are at right angles.
See Two-Toe Revisited below.
Tilt. At a tilt of zero, the secondary camera is looking in the same ground-
plane as the dominant camera. At positive angles, the secondary camera is
looking increasingly upwards, relative to the dominant camera. At a tilt of 90
degrees, the secondary camera is looking along the dominant camera's Up axis, perpendicular to the dominant camera viewing direction (they aren't looking at the same things at all!).
Roll. The roll of the secondary camera relative to the dominant. At a roll angle of zero, the cameras aren't twisted with respect to one another at all; both camera look vectors point in the same direction. But as the roll angle
increases, the secondary camera rolls counter-clockwise with respect to the
dominant camera, as seen from the back.
You can experiment with the stereo parameters by opening a stereo shot,
opening the Stereo Geometry panel, clicking More and then one of the Live
buttons. Adjusting the spinners will then cause the selected camera to update
appropriately with respect to the other camera.
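
As a rough geometric picture of how Distance, Direction, and Elevation place the secondary camera's nodal point in the dominant camera's own frame, here is a small sketch (ordinary Python; the axis and sign conventions are assumptions for illustration, not taken from SynthEyes' implementation):

    import math

    def secondary_nodal_offset(distance, direction_deg, elevation_deg):
        """Offset of the secondary nodal point in a dominant-camera frame:
        'side' is along the dominant camera's side vector, 'forward' along its
        look vector, 'up' along its up vector. Direction 0 / Elevation 0 puts
        the secondary camera directly to the side; Direction 90 swings it in
        front; Elevation 90 puts it on the up axis."""
        d = math.radians(direction_deg)
        e = math.radians(elevation_deg)
        return (distance * math.cos(e) * math.cos(d),   # side
                distance * math.cos(e) * math.sin(d),   # forward
                distance * math.sin(e))                 # up

    # A typical rig: 6.5 cm directly to the side, level with the dominant camera.
    print(secondary_nodal_offset(6.5, 0.0, 0.0))    # (6.5, 0.0, 0.0)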

Two-Toe, Revisited
The camera parameters, as described above, describe the situation for
single-toed camera rigs, where only one camera (the secondary) rotates for
vergence. The situation is a little more complex for two-toe rigs, where both
cameras toe inwards for vergence. These modes are Center-Left and Center-
Right in the Stereo Geometry panel's dominance selection.
The dominant camera never moves during two-toed vergence, yet we still achieve the effect of both cameras tilting in evenly. How is that possible?
Consider a vergence angle of 90 degrees. With a one-toe rig, the
secondary camera has turned 90 degrees in place without moving, and is now
looking directly at the primary camera.
With a two-toe rig at a vergence of 90 degrees, the secondary has turned
90 degrees so it is looking at right angles to the look direction of the dominant
camera.


But, and this is the key thing, at the same time the secondary camera has
swung forward to what would otherwise be Direction=45 degrees, even though
the Direction is still at zero. As a result, the secondary camera has tilted in 45
degrees from the nominal look direction, and the dominant camera is also 45
degrees from the nominal look direction, which is perpendicular to the line between the two cameras.
The thing to keep in mind is that the line between the two cameras (nodal
points) forms the baseline; the nominal overall rig look direction is 90 degrees
from that. SynthEyes changes the baseline in centered mode to maintain the
proper matching vergence for the two cameras; it does that by changing the
definition of where the zero Direction is. The Direction value is offset by one-half
the vergence in centered mode.
If you put the stereo pair into one of the Centered modes and use Live
mode, you'll see the camera swinging forward and backward in response to changes in the vergence. Once you understand it, it should make sense. If it seems a bit more complex and demanding than single-toe rigs, you're right!
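
The offset itself is simple enough to write down; this sketch just restates the rule above and is not SynthEyes code:

    # In the Centered (two-toe) modes, the zero point of Direction is shifted
    # by half the vergence, so a reported Direction of 0 at 90 degrees of
    # vergence corresponds to a 45-degree forward swing of the secondary camera.
    def effective_direction(direction_deg, vergence_deg, centered=True):
        return direction_deg + (vergence_deg / 2.0 if centered else 0.0)

    print(effective_direction(0.0, 90.0))   # 45.0, matching the example above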

3-D Rig Calibration Overview


If you assemble two cameras onto a rig (at its simplest, a piece of metal with cameras bolted to it), you'll rapidly discover that the two cameras are
looking in different directions, with different image sizes, and usually with quite
different roll angles.
Using a wide field of view is important to achieving a good stereoscopic
effect (the sense of depth), especially since some of the view will be sacrificed in
calibration. The wide field of view frequently means substantial lens distortion,
and removing the distortion will eliminate some of the field of view also.
So an important initial goal of stereo image processing is to make the two
images conform to one another (match geometrically). (Color and brightness
should also be equalized.)
There are three basic methods:
1) Mechanical calibration before shooting, using physical adjustments on
the rig to align the cameras, in conjunction with a monitor that can
superimpose the two images
2) Electronic calibration, by shooting images of reference grids, then
analyzing that footage to determine corrections to apply to the real
footage to cause it to match up.
3) Take as-shot footage with no calibration, track and analyze it to
determine the stereo parameters, use them to correct the footage to
match properly.
Of these choices, the first is the best, because the as-shot images will
already be correct and will not need resampling to correct them. The downside is
that a suitably adjustable rig and monitor are more complex and expensive.


The second choice is reasonable for most home-built rigs, where two
cameras are lashed together. We recommend that you set up the shoot of the
reference grid, mechanically adjust the cameras as best you are able, then lock
them down and use electronic correction (applied by the image preprocessor) to
correct the remaining mismatch. With a little care, the remaining mismatch
should only require a zoom of a few percent, with minimal impact on image
quality.
The third case is riskiest, and is subject to the details of each shot: it may
not always be possible to determine the camera parameters accurately. We
recommend this approach only for rescuing un-calibrated shots at present.

Electronic Calibration
To electronically calibrate, print out the calibration grid from the web site
using a large-format black and white (drafting) printer, which can be done at
printing shops such as FedEx Kinko's. Attach the grid to a convenient wall, perhaps with something like Joe's Sticky Stuff from Cinetools.
a tripod in front of the wall, as close as it can get with the entire outer frame
visible in both cameras (zoom lenses wide open). Adjust the height of the rig so
that the nodal point of the cameras is at the same height as the center point of
the grid.
Re-aim the cameras as necessary to center them on the grid. This will
converge them at that distance to the wall; you may want to offset them slightly
outwards or inwards to achieve a different convergence distance, depending on
what you want.
Shoot a little footage of this static setup. Record the distance from the
cameras to the wall, and the width of the visible grid pattern (48 inches on our standard grid at 100% size).
For camcorders with zoom lenses, you should shoot a sequence, zooming
in a bit at a time in each camera. You can use one remote control to control both
cameras simultaneously. This sequence will allow the optic center of the lens to
be determined; camcorder lenses are often far off-center.
Once you open the shots in SynthEyes, create a full-width checkline and
use the Camera Field of View Calculator script to determine the overall field of
view. Use the Adjust tools on the image preprocessor to adjust each shot to have
the same size and rotation angle. Use the lens distortion controls to remove any
distortion. Correct any mirroring with this pass as well, see the mirror settings on
the image preprocessors Rez tab. Use the Cropping and re-sampling controls to
remove lens off-centering. A small Delta Zoom value will equalize the zoom. See
the tutorial for an overview of this process.
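
The geometry behind that field-of-view measurement is ordinary pinhole trigonometry; a sketch follows (this is not the Field of View Calculator script itself, and the 36-inch distance is a made-up example value):

    import math

    def horizontal_fov_deg(target_width, camera_to_target):
        """Field of view subtended by a flat target of known width that just
        fills the frame, at a known perpendicular distance (same units)."""
        return math.degrees(2.0 * math.atan((target_width / 2.0) / camera_to_target))

    # e.g. a 48-inch-wide grid filling the frame from 36 inches away:
    print(f"{horizontal_fov_deg(48.0, 36.0):.1f} degrees")   # about 67.4 degrees
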
Your objective is to produce a set of settings that take the two different
images and make them look exactly the same, as if your camera rig was perfect.
Once youve done that, you can record all of the relevant settings (see the
Export/Stereo/Export Stereo Settings script), and re-use them on each of your


shots (see Import/Stereo/Import Stereo Settings) to make the actual images match up properly. Then, you should save a modified version of each sequence
out to disk for subsequent tracking, compositing, and delivery to the audience.
Obviously this process requires that your stereo rig stay rigid from shot to
shot (or periodic calibrations performed). The better the shots match, the less
image quality and field of view will be lost in making the shots match.

Opening Shots
SynthEyes uses a control on the shot parameters panel to identify shots
that need stereo processing. Open the left shot, and on the shot settings panel,
click Stereo off until it says Left. After you adjust any other parameters and click
OK, SynthEyes will immediately prompt you to open the right shot. Any settings,
including image preprocessor settings, will be copied over to the right shot to
save time.
If you do not configure the stereo setting when you initially open the shot,
you can do so later using the shot settings dialog. You can turn it on or off as
your needs warrant. To get stereo processing, you must open the left shot first
and the right shot second, and set the first shot to left and the second to right.
Both shots must have the same shot-start and -end frame values.
Stereo rigs that include mirrors will produce reversed images. If the
camera was mechanically calibrated, use the Mirror Left/Right or Mirror
Top/Bottom checkboxes on the Rez tab of the image preprocessor to remove the
mirroring. (If the cameras are electronically calibrated using the image
preprocessor, you should remove mirroring then as described above.)
Once the shot is open, you can select a regular or stereo-friendly viewport
configuration in which to work, depending on what you are doing at the moment.
For supervised tracking, a stereo layout with two camera views is highly
recommended.
Note that you can use the Stereo view in the Perspective view to show
both images simultaneously as an anaglyph, see View/Stereo Display and
View/Perspective View Settings. You can select left-over-right mode here, but it
is intended for preview-movie output, not for interactive use. Use multiple
perspective or camera views to display both views simultaneously for interactive
use.

Stereoscopic Tracking
In a stereo setup, there are links from trackers on the secondary camera
to the corresponding trackers on the dominant camera; they tell SynthEyes which
features are tracked in both cameras. These links are always treated as peg-type
locks, regardless of the state of the Constrain checkbox.


Automatic
If you use automatic tracking, SynthEyes will track both shots
simultaneously and automatically link trackers between the two shots. For this to
work well, your two shots need to be properly matched, both in overall geometric
alignment, and in color and brightness grading. A quick fix, if needed, is to turn
on high-pass filtering in the image preprocessor. Be sure to set the proper
dominant camera before auto-tracking, as SynthEyes will examine that to
determine in which direction to place the links (from secondary trackers to
primary camera trackers, which can be left to right or right to left).

Tip: you can use the Copy Splines script to copy splines from one eye
to the other, if you want to mask some things out in both eyes.

Supervised
If you use supervised tracking, you will need to create both trackers and
the link between. There are a number of special features to make handling stereo
tracker pairs easier (for automatic trackers also). After you have created stereo
pairs, you will track them both, sequentially or simultaneously (the SimulTrack
view can help there).
It is easiest to do supervised stereo tracking with one of the viewport
configurations with two camera views, one for each eye, ie Stereo, Stereo SbS
(Side by Side), or Stereo SimulTrack.
To create stereo pairs from one of these configurations, turn on the Create
Tracker button on the Tracker Control panel, then click alternately in the camera
view of each eye. You can create 1 in the left, 2 in the right, then 2 in the left,
and back to 1 on the right. A link is created when a new tracker is created, and
there is one selected tracker on the other eye, which is not already linked to any
other tracker. If you forget to alternate, you can just click one of the existing
trackers to select it, then click to create the matching tracker in the other eye.

Tip: If you only need to create a single tracker pair, you can do that
quickly, without having to adjust the mode of either the camera or
perspective views: hold down the C key and click in the camera view or
other camera or perspective view.

Supervised Setup using Camera+Perspective Views


Though it is an older technique, it is also possible to use a perspective
window containing one eye, and a camera view containing the other eye,
typically the Camera+Perspective viewport setup or a custom configuration. This
older technique is also used with multi-camera shots that are not stereo. It has
the advantage that you make the perspective view display a specific frame,
independent of the main frame number.
With the right camera active, click Lock to Current Cam on the perspective
view's right-click menu. Then make the left camera active. Use control-home to


vertically center the camera view to match the perspective view. Use the shot
menu's Activate Other Eye (default accelerator key: minus sign) as needed to
flip-flop the eyes shown in the camera and perspective views. In the perspective
view, the right-click View/Show only locked setting can be helpful: it causes the
perspective view to show only the trackers on the camera that perspective view
is locked to.
Set the tracker control panel to Create Tracker mode. Set the perspective
view to Add Stereo 2nd mode (on the Other Modes submenu). Click and drag in
the camera view to create a tracker on one camera, and adjust whatever settings
you need to. Then click and drag in the perspective view to create the matching
tracker as a clone of the first. The two trackers will be linked together, and the
names adjusted to match. If you entered a name for the first tracker, that name
will be used, with L and R appended.
Creating Stereo Pairs by Linking
If you have already created many trackers on each eye, independently,
you can link them together to form stereo pairs (or link to a reference shot). To
link each pair, you should click the tracker in a camera view to select it, then
ALT-click the matching tracker in the other eye's view, which can be another
camera view or a perspective view. SynthEyes will automatically link the trackers
in the correct direction, depending on the camera dominance setting, and adjust
the tracker names.

Note: linking trackers in the camera view requires that the Coordinate
System Panel be open (so that you can see the results). The panel
does not have to be open to link trackers in a stereo shot from the
camera view to the opposing camera or perspective view; you can link
them while keeping the Tracker panel open continuously, to save time.

Stereo Pair Selection


Trackers can only be selected on the active tracker host (active object).
The matching trackers in the other eye (cross-selected trackers) are shown in the camera and perspective view in a different color (orange by default, "CamVu
Cross Trackers" and "Persp, Opposite sel" in the preferences). This makes it
easy to check the matching.
If you click a tracker on a camera view that is not the active one, what
happens? If you have clicked into a second camera view, then the active tracker
host is changed to match that camera view, and the tracker becomes selected.

Fine point: if more than one tracker is selected, and you click on one
of the matching, cross-selected, trackers in the other view, then all of
the cross-selected trackers become selected, not just the one you
clicked, so you don't have to re-select that particular set.


The situation is a bit different if you are using the camera+perspective view setup:

ACHTUNG! Stay awake, this next one is tricky: if you click on a tracker
in the perspective view, that tracker will not be selected, because it is
not on the currently-active camera, but instead the matching tracker on
the other camera (in the camera view) will be selected. That will in turn
make the tracker you just clicked (in the perspective window) turn
orange, because its matching tracker is now selected. The active
tracker host will not change. This will be more clear when you try it. If
you want to select and edit a tracker displayed in the perspective view
(on the opposite eye), you should switch the views with the minus sign; the camera view is the place to do that.

Doing the Stereo Tracking


Once you have created stereo pairs, you need to track both left and right
trackers. If both camera views are open, tracking will proceed in both views
simultaneously (on multiple cores, if available). To track only on the left or right
eye, switch to a single-camera viewport configuration.
When tracking stereo pairs, you may find it helpful to use the Stereo
SimulTrack viewport configuration (two camera views, two SimulTracks). It will
allow you to adjust multiple trackers on each camera simultaneously. If you are
tracking a few tracker pairs at a time, "Force row mode" in the SimulTracks may
be helpful.
You can also use Pan To Follow; it will affect both camera views based on
the selected or cross-selected trackers.
Note that the settings of the left and right trackers in a pair are
independent once the pair has been created. You will need to adjust each
separately, and that may well be necessary depending on the shot.
Stereo ZWTs
You can use zero-weighted trackers to help you track: once you set up a
pair and if the shot is already solved, any two frames of valid data, on the same
camera or on the opposing cameras, is enough to determine the tracker's 3D
location. If you have it enabled, Track/Search from solved will predict where a
tracker will show up in both images, even if it goes off- and back on- screen. If
your solve is not good, it may predict an unusable location, in which case you
may need to turn off Search from solved temporarily.
Checking Stereo Correspondence
In a stereo tracking setup, the trackers on each eye may be tracked
perfectly, but the track can be a disaster if the trackers on each eye do not track
the same feature on each side.


The SimulTrack view can help identify mismatches. Select the Stereo
SimulTrack layout, which has two camera views and two SimulTracks. You can
select trackers in one camera, and not only will you see the matching trackers in
the other camera view (in orange), you will see the active side's tracker on one
SimulTrack, and the other side's tracker on the other SimulTrack. The ordering
of trackers within each SimulTrack will be the same (driven by the active side), so
you can look from one to the other view to identify mismatches.
After you have a reasonable solve, you can also take a look at the error on
the trackers to identify mismatches, using the Sort by Error option and the
SimulTrack or Graph Editor.

Configuring a Stereo Solve


When solving a stereo shot, you can use any of three different setups:
the Stereo solving mode, Automatic/Indirectly, or the Tripod/Indirectly setup. The
Stereo mode works for stationary tripod-like shots or even stills when there are
nearby features, while the Automatic/Indirectly approach requires a moving
camera in order to produce (generally more reliable) startup results. The
Tripod/Indirectly setup is required when all trackable features are distant from the
camera pair.
You will need to pay more attention to the solving setup for stereo shots
than for normal monocular shots, so please read on in the following subsections.
If you hit the big green Auto button, or the Run Auto-tracker button,
SynthEyes will prompt you to determine the desired mode of these three. If you
are working manually or later need to change the modes, you can do so
individually for each camera. Be sure to keep Indirectly, if present, on the
secondary camera.
Note that far trackers can be an issue with stereo shots that are on a
tripod: once a tracker is many times the interocular distance from the cameras,
its distance can no longer be determined, and the stereo solve goes from a nice
stereo tripod situation to a combination of two tripod shots, see the section on
Stereo Tripod Shots.
Stereo Solving Mode
The stereo mode uses the trackers that are linked between cameras to get
things going. It does not rely on any camera motion at all: the camera can be
stationary, and even a single still for each camera can be used, as long as there
are enough nearby features (compared to the inter-ocular distance).

Important: The Begin and End frames on the Solver panel should be
configured directly for Stereo Solving mode, somewhat differently
than for a usual automatic solve, so please keep reading. To begin
with, the checkboxes should be checked so that the values can be set
manually.


The Stereo solve will literally start with the Begin frame from both
cameras; it should be chosen at a frame with many trackers in common between
the two cameras.
However, this offers a limited pool of data with which to get started. A
much larger, and thus more reliable, pool of data is considered when the End
frame is set as well. The Stereo solver startup considers all the frames between
the Begin and End frames as source data.
The one caveat: none of the camera parameters must change between
the begin and end frames, including the distance or vergence, even if they were
marked as changing (see the next section).
If any of the camera parameters are constantly changing throughout the
shot, or it can not be determined that they do not, then you must set the End
frame and Begin frame to the same frame, and forego having any additional data
for startup. Such a frame should have as many trackers as possible, and they
should be carefully examined to reduce errors.
If you do not select the Begin/End frames manually (leave them in
automatic mode), then SynthEyes will select a single starting/ending frame that
has as many trackers in common as possible. But as described, supplying a
range is a better idea.
Note that you might be able to use the entire shot as a range, though
probably this will increase run time and a shorter period may produce equivalent
results.
Automatic/Indirectly Solving Mode
The stereo mode effectively uses the inter-ocular distance as the baseline
for triangulating to initially find the tracking points. If the camera is moving, a
larger portion of the motion can be used to get solving started, producing a more
accurate starting configuration.
To do that, use the normal Automatic solving mode for the dominant
camera, and the Indirectly mode for the secondary camera. Assuming the
moving camera path is reasonable, SynthEyes will solve for the dominant
camera path, then for the secondary path, applying the selected camera/camera
constraints at that time.
This approach will probably work better on shots where most of the
trackers are fairly far from the cameras, and the camera moves a substantial
distance, thus establishing a baseline for triangulation. If the camera moves
(translates) little, you should use the Stereo solving mode.
Tripod/Indirectly Solving Mode
With two cameras, nodal tripod shots are less an issue because distances
and 3-D coordinates can be determined if there are enough nearby features.
However, you may encounter shots that are nodal by virtue of not having
anything nearby; call them "all-far" shots. For example, consider a camera on the


top of a mountain, which must be attacked by CG birds. With no nearby features,
the shot will be nodal, and there will be no way to determine the inter-ocular
distance. Any inter-ocular distance can be used, with no way to tell if it is right or
wrong.
Like a (monocular) tripod shot, no 3-D solve is possible, only what
amounts to two linked tripod solves.
Use the Tripod/Indirectly setup (tripod mode on dominant camera,
indirectly on secondary). When refining, use Refine Tripod mode for both
cameras.
On the stereo geometry panel (see below), you should set up your best
estimate of the inter-ocular distance, either from on-set measurements or from
other shots. You can animate it if you have the information to do so. Set the
Direction and Elevation numbers to zero, or known values from other shots.
SynthEyes will solve the shot to produce two synchronized tripod solves.
Then, it will compute adjusted camera paths, based on the interocular
distance and the pointing direction of the camera, as if the camera had been on a
tripod. These will typically be small arc-like paths. If you need to later adjust the
inter-ocular distance, Refine (Tripod) the shot to have the paths recalculated.
As a result, you will have two matching camera paths so that you can add
CG effects that come close to the camera. Since SynthEyes has regenerated
the camera paths at a correct inter-ocular distance, even though all the tracked
features are far, you will still be able to add effects nearby and have them come
out OK.

Setting Up Constraints
The Stereo Geometry panel can be used to set up constraints between
the two cameras. If you will be using the inter-ocular distance to set the overall
scale of the scene, then you should do that initially, before setting up a
coordinate system using the *3 tool. The *3 tool will recognize the inter-ocular
distance constraint, and generate a modified set of tracker constraints to avoid
creating a conflict with the inter-ocular distance constraint.
The left-most column on the Stereo Geometry panel sets the solving mode
for each of the six stereo parameters; they can be configured individually and
often will be. The default As-Is setting causes no constraint to be generated for
that parameter. To constrain the Distance, change its mode to Known, and set
the Lock-To Value to the desired value.
The Lock-To value can be animated, under control of the Make Key button
at top left of the panel. With Make Key off, the lock value shown and animated is
that at the beginning of the shot. Beware, this can hide any additional keys you
have already created.
Usually it will be best to solve a shot once first, with at most a Distance
constraint, and examine the resulting camera parameters. The stereo parameters


can be viewed in the graph editor under the node Stereo Pairs. The colors of
the parameters are shown on the stereo panel for convenience.
Sudden jumps in a parameter will usually indicate a tracking problem,
which should be addressed directly. The error is like an air bubble under
plastic: you can move it around, but not eliminate it. The stereo locks are all
soft and can not necessarily overcome an arbitrarily large error. If you do not fix
the underlying errors in the tracking data, even if you force the stereo parameters
to the values you wish, the error will appear in other channels or in the tracker
locations.
Usually, the other four stereo parameters (other than distance and
vergence) are constant at an unknown value. Use the Fixed mode to tell
SynthEyes to determine the best unknown value (like the Fixed Unknown lens-
solving mode).
If you are very confident of your calibration, or wish to have the best solve
for a specific set of parameters, you can use the Known mode for them also.
In the Varying solving mode, you can create constraints for specific
desired ranges of frames, by animating the respective Lock button on or off. The
parameter will be locked to the Lock-To value for those specific frames. The Hold
button may also be activated (for vergence and distance); see the following
section on Handling Shots with Changing IOD or Vergence.
Note that usually you should keep solving from scratch after changing
the stereo constraint parameters, rather than switching to Refine mode. Usually
after a change the desired solution will be too far away from the current one to be
determined without re-running the solve.

Weights
Each constraint has an animated weight track. Weights range from 0 to
120, with 60 being nominal. The value is decibels, meaning 20 units changes the
weight by a factor of 10. Thus, the total weight range is 0.001 to 1000.0.
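
That decibel-style scale can be written out directly (a sketch of the mapping described above, not of SynthEyes internals):

    # 60 is nominal (a multiplier of 1.0); every 20 units is a factor of 10.
    def weight_multiplier(weight):
        return 10.0 ** ((weight - 60.0) / 20.0)

    for w in (0, 40, 60, 80, 120):
        print(w, weight_multiplier(w))   # 0.001, 0.1, 1.0, 10.0, 1000.0
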
Excessively large weight values can de-stabilize the equations, producing
a less-accurate result. We advise sticking with the default values to begin with,
and only increasing a weight if needed to reduce difference values after a solve
has been obtained. On a difficult solve where there is much contradictory
information, it may be more helpful to reduce the weight, to make the equations
more flexible and better able to find a decent solution.

Identical Lens FOV Weight


You might wish to keep the two camera fields of view close to one
another. The Identical Lens Weight spinner on the Lens Panel allows you to do
that. As with the constraint weights, 60 is the nominal value, with a range from 0
to 120.


Note that truly identical camera lenses are very unlikely, even if you have
done some pre-processing. It is probably best to stay with a moderate weight.
If you want truly identical fields of view, you should solve with an identical-
FOV weight first, then average the two values and set that value as the known
FOV for each camera.
SynthEyes notices even a very small identical-FOV weight and uses it to
help produce a more reliable initial solution, so this may sometimes be a good
idea even if the values are not necessarily identical.

Handling Shots with Changing IOD and/or Vergence


When the inter-ocular distance or vergence changes during the shot, it
may be desirable to clean up the curve to reduce chatter. Since inter-ocular
distances are typically small, there will be some chatter most of the time,
especially if the shot is difficult. It is especially desirable to reduce the chatter if
new CG objects are being added close to the camera, as is often the case.
When the vergence or distance is changing rapidly, any chatter will be
difficult to see. It is most useful to squash any jitter while the vergence or
distance is stationary, and allow it to change freely only while it is actively
changing.
Conveniently, the Distance and Verge Hold controls allow you to do
exactly that. The parameter must be set to Varying mode to enable the Hold (and
Lock) controls. You can animate the respective Hold button to be on during the
frames when the respective parameter is fixed, and keep the Hold off when the
parameter is changing. (It is better to Hold on too few frames than too many.)
After solving a shot, you may want to create a specific IOD or vergence
curve. You can animate it directly, or create it with Approximate Keys in the
graph editor. Use Varying mode and animate a Lock to lock only some frames,
or use Known mode to lock the entire shot if the whole curve is known.

Post-Stereo-Solve Checkup
After a stereo solve with constraints, you should verify that they have been
satisfied correctly, using the graph editor. If the tracking data is wrong or calls for
a stereo relationship too different than the constraints, they may not be satisfied
and adjusting the constraints or tracking data may be necessary.

IMPORTANT! On some marginal shots, the constraints may not be
satisfied on the very first frame as a consequence of the left/right
constraint weights being too high, resulting in a one-frame glitch in the
camera/camera relationship. You can reduce the weights, or, once you
have the tracking fairly complete, and have set up a coordinate
system on the dominant camera, switch to Refine mode, turn on the
Constrain checkbox, and hit Go!


You can render stereo preview movies in left-over-right stereo mode
using the right-click/Preview Movie capability. This will let you verify tracking on
other stereo displays in real time.

Object Tracking
SynthEyes can perform stereo object tracking, where the same rigid object
is tracked from both cameras. (It can also do motion capture from stereo
imagery, where the objects do not have to be rigid, and each feature is tracked
separately.) Or you can do single-eye object tracking on a stereo shot, if the
object moves enough for that to work well.
To set up the stereo moving-object setup, do a Shot/Add Moving Object
when a stereo shot is active. You will be asked whether to create a regular or
stereo moving object. The latter is similar to adding a moving object twice, once
when each camera (left and right) is selected as the main active object (the
currently-selected camera is used as the parent when a new object is created),
except that SynthEyes records that the two are linked together. Each object will
have the Stereo solving mode.
An object can only be in one place at a time: there should only be a single
path for the object in world space, the same path as seen in the left and right
cameras, just like each 3-D tracker pair only has a single location. The world
path depends on the camera path, though! This creates challenging "what comes
first" issues. Object solves always start with an initial camera-type estimate,
before being converted to an object-type solve, and that is what happens with
stereo object solves as well.
Simple object solves start out with two separate motion paths, one for the
left object and one for the right object. As the camera path becomes available,
additional constraints are applied that force the left and right paths, in world
space, to become identical by adjusting the camera and object positioning.
Because of this inherent interaction, it is wise to work on the camera solve
first, before proceeding to the objects.

Helpful Hint: in addition to using the "Activate Other Eye" menu item
or its minus-key accelerator, it can also be helpful to right-click the
active camera/object button on the toolbar to rotate backwards through
the collection of objects and cameras when there are multiple cameras
and objects present.

To perform motion-capture style tracking in a stereo shot, after completing
the camera tracking, do an Add Moving Object, then set the solver mode for the
object(s) to Individual Mocap. Add stereo tracking pairs on the moving object,
and each pair will be solved to create a separate independent path. See the
online tutorial for an example, and see the Motion Capture section of the manual.


Interactions and Limitations


SynthEyes has many different features and capabilities; not all make
sense for stereo shots, or have been enhanced to work with stereo shots.
Following is a list of some of these items:
- User interface: on a stereo shot, the important settings are generally those configured on the left camera, for example the solver's Begin and End frames. You might find yourself changing the right-camera controls and be surprised they seem to have no effect, because they do not! Generally we will continue to add code to prevent this kind of mishap when a stereo shot is active.
- Lens distortion: should be addressed before solving as part of camera alignment.
- Zoom vs Fixed vs Known: both lenses must be fixed, or both must be zooming. It is far preferable if both lenses are Known, from calibration, to produce a more stable and reliable solve.
- Tracker Cleanup and Coalesce Trackers: these are not stereo-aware.
- Hold mode for camera paths: in a hold region, the camera is effectively switched to tripod mode, which doesn't make sense for stereo shots. With an inter-ocular distance, any change in orientation always produces a change in position. In the future, this might be modified to make the dominant camera (only) stationary, as a way to reduce chatter.
- Object tracking with no camera trackers: with a single camera, it is sometimes useful to do an object track relative to the camera, with the camera disabled. The same approach is not possible with stereo: if the cameras are disabled, they cannot be solved or moved to achieve the proper inter-ocular distance. You can solve the shot as a moving-camera, store the stereo parameters on the Known tracks, reset the cameras, transfer the stereo parameters to the secondary camera via Set 1f or Set All, then change the setup to moving-object (see the tutorial) and solve as a moving object.
- Moving Object locking: you cannot lock the coordinates of a moving object to world coordinates.
- Exporting: exports will contain the separate cameras and objects. Some exports may export only one camera at a time. In the future, we will probably have modified stereo exporters that parent the cameras to assemble a small rig. In any case, if you want to use some particular rig in your animation package, you will need to use some tool to convert the path information to drive your particular rig.

360 Degree Virtual Reality Tracking
SynthEyes can help stabilize and add 3D effects to 360 degree spherical
virtual reality (VR) footage. SynthEyes helps you work with 360 VR shots even if
your 3D app has no 360 VR capabilities at all.
Some other buzzwords that describe this type of footage: equirectangular,
360x180, or even plate carrée. The defining feature is that a single frame shows
the view in all directions according to a latitude/longitude scheme; viewing
software/devices typically extract a portion of the image as a conventional
camera image in the desired direction to show the user.
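As a rough illustration of that latitude/longitude scheme, the sketch below maps a 3D viewing direction to a pixel in an equirectangular frame. The axis conventions and the function name are illustrative assumptions, not SynthEyes internals; different tools place longitude zero differently.

    import math

    # Map a 3D viewing direction to a pixel in an equirectangular frame.
    # Conventions here (y up, z forward, longitude 0 at image center) are
    # illustrative assumptions only.
    def direction_to_pixel(x, y, z, width, height):
        lon = math.atan2(x, z)                            # -pi..pi, 0 = straight ahead
        lat = math.asin(y / math.sqrt(x*x + y*y + z*z))   # -pi/2..pi/2, 0 = horizon
        u = (lon / (2.0 * math.pi) + 0.5) * width         # column
        v = (0.5 - lat / math.pi) * height                # row; top row = straight up
        return u, v

    print(direction_to_pixel(0, 0, 1, 4096, 2048))   # looking ahead -> image center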
When you work with 360 VR images, you don't have to be concerned
about determining distortion, field of view, or plate/sensor sizes, which is
definitely convenient!
Note that SynthEyes does not perform stitching, ie removing distortion and
combining multiple images to initially create a 360 VR image. That is handled by
specialized applications.
You tell SynthEyes that an incoming shot is 360 VR using the 360 VR setting
on the SynthEyes Shot Setup panel: None or Present, plus the further option of
Remove!
We'll start the discussion by describing how to do a simple stabilization to
360 VR shots, if you will not be doing 3D effects.

ATTENTION! If this is the first part of the SynthEyes manual that
you are reading, you'll need to eventually read quite a bit more of it!
Working with 360 VR footage requires a wide variety of techniques
from throughout SynthEyes: the image preprocessor, automatic and
supervised tracking, solving, coordinate system setup, etc. The
material here builds upon that earlier material; it is not repeated here
because it would be exactly that, a repetition.

Simple Stabilization
If you need to only stabilize a 360 VR shot, you can do that relatively
simply in the image preprocessor using a 2.5D-type technique, rather than a full
3D solve. The drawback is that longer shots can drift, though you can easily
hand-animate corrections or art direction.
- Open the shot, selecting the 360 VR mode of "Present."
- On the Summary panel, click Run Auto-tracker.
- Use shift-lasso while scrubbing through the shot to select all the distant trackers, ie by the horizon line. Don't select any independently-moving trackers, such as those on vehicles.
- (If there aren't many trackers to select, start over but go to the Advanced tab on the Features panel and increase the maximum tracker count to 300, 500, or more depending on the length of the shot; then rerun the Auto-tracker.)
- Open the Image pre-processor (hit the P key).
- On the Stabilize tab, click Get Tracks.
- Change the two Stabilize Axes dropdowns to either Peg or Filter. Peg stabilization causes the initial orientation to be maintained; Filter smooths the orientation over time, controlled by the Cut Frequency spinner (see the sketch after this list).
- To examine the stabilization, scrub through the shot using the spinner at the bottom of the image preprocessor.
- You can adjust the aim direction of the stabilized image using the spinners on the Adjust tab. (Note that these spinners and the Filter frequency can be animated by first turning on the add-key button at lower right of the image preprocessor window.)
- To have the trackers match up with the modified footage, click Apply to Trkers on the Output tab before closing the image preprocessor. If you modify the stabilizer settings later, you must hit Remove f/Trkers before doing so.
- You can close the image preprocessor and experiment and play back the shot in the main camera view.
- Once the shot is ready, reopen the image preprocessor, go to the Output tab, and click Save Sequence.
- Or, use the exports to AfterEffects or the generic Save Sequence approach.
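The sketch below illustrates what a cut frequency means in the Filter mode mentioned in the list above: orientation channels are low-pass filtered so that only motion slower than the cut frequency remains. The one-pole filter and the numbers are purely illustrative assumptions; this is not SynthEyes' actual filter.

    import math

    # Low-pass filter one orientation channel (e.g. pan angles in degrees).
    # Purely illustrative; not SynthEyes' actual filter implementation.
    def low_pass(samples, cut_hz, frame_rate=24.0):
        alpha = 1.0 - math.exp(-2.0 * math.pi * cut_hz / frame_rate)
        out, state = [], samples[0]
        for s in samples:
            state += alpha * (s - state)   # follow slow changes, suppress jitter
            out.append(state)
        return out

    # Jittery pan angles smoothed with a 0.5 Hz cut frequency:
    print(low_pass([0.0, 1.2, -0.8, 1.5, -1.1, 0.9], 0.5))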

3D Tracking, Stabilization, and Effects Workflow


The handling of any 360 VR shot is of course determined by what you
want to do with it. Scripts in the "360 VR" submenu will assist you. Here are the
general steps involved:
- Open the shot, marking it with a 360 VR mode of "Present."
- Do automatic and/or supervised tracking of the shot.
- Solve the shot to produce a 3D camera path and orientation and 3D locations for all the tracked features.
- Set up a coordinate system using the 3D locations to orient the overall scene: "Which way is up?!"
- Run the Stabilize from Camera Path script to reorient and stabilize the original footage, with additional options for path-aligned cameras.
- If desired, create secondary linear shots to facilitate adding 3D objects to the shot using non-360VR-capable rendering applications.
- Export to other applications, either so the stabilization can be applied in After Effects, for example, or for adding 3D objects into the footage.
It's important to note that 360 VR solves, whether done natively or via
linearization, typically will have much higher final errors than a typical
conventional shot, due to 1) uncorrected residual distortion in the stitching; 2)
synchronization errors because the cameras typically aren't genlocked; and 3)
the rolling shutter effect in the small CMOS cameras typically used in VR rigs.
While SynthEyes will let you do an amazing job stabilizing the footage by
reorienting it on each frame, it cannot repair the image damage done by
problems with stitching, unsynchronized cameras, or the rolling shutter effect.

Native 360VR Solving


You can track and solve 360VR shots directly as-is within SynthEyes,
using its native 360VR solving skills. The workflow for this is as follows.
- Open the shot, marking it with a 360 VR mode of "Present."
- Do automatic and/or supervised tracking of the 360VR shot.
- Solve the shot, producing a 3D camera path and orientation and 3D locations for the tracked features.
- Set up a coordinate system using the 3D locations, orienting the overall scene: "Which way is up?!"
If you wish to perform automatic tracking, you will almost always have to
perform preliminary roto-tracking work, to identify the areas that should be
tracked and not tracked. The following areas must be excluded:
- The vehicle, drone, rig, etc holding the camera.
- The sky or clouds.
- Any moving objects or actors in the scene.
- Anything else that is not stationary and rigidly connected together.
The roto-tracking step can be very rough, as it is intended only to delimit
areas for tracking, unlike the super-precise rotoscoping required to matte objects
in and out of a scene.
Note that you can do rigid-body tracking of 3D objects in 360VR shots just
like you can in SynthEyes for regular shots. Similarly, the solver handles 360VR
tripod shots, where the camera rotates but does not translate.

3D Solves from a Linearized Shot


You can use SynthEyes to convert a 360VR shot to a conventional linear
perspective camera, then track and solve that camera. This can be easier than
doing a lot of roto work on some shots, if there is a nicely trackable "clear" area
of the shot to use. This workflow also provides a segue to the 3D insertion
workflow for downstream applications that don't have a 360VR camera themselves.
- Open the shot, marking it with a 360 VR mode of "Present."
- Use the 3D Tracking Setup script to create a synthetic view from a (virtual) conventional linear camera with a large known field of view. (This view is generated and stays internal to SynthEyes by the image preprocessor, using the "Remove" setting for 360 VR mode.)
- Do automatic and/or supervised tracking of the synthetic linear shot.
- Solve the linearized shot, producing a 3D camera path and orientation and 3D locations for the tracked features.
- Set up a coordinate system using the 3D locations, orienting the overall scene: "Which way is up?!"
- Convert back to a 360VR shot.
- Add additional trackers as desired, to locate additional elements in the overall scene.
SynthEyes will have determined the 3D location of a variety of features in
the scene, but only in the direction that the linearized camera was looking. We'd
often like to have more trackers, in all directions. There are two workflows for
doing that.
Using Add Many Trackers
This method is a little ad hoc, but straightforward.
- Run Stabilize from Camera Path to go back to the full 360 VR view and accurately stabilize the imagery to the world coordinate system. The camera still follows its 3D path, but its orientation is now locked to the Front view, corresponding to the stabilized imagery. The trackers are updated to match the 360 VR footage.
- On the Features panel, Clear all blips and Blips all frames. In the camera view, lasso an area of the image where you'd like more trackers, then use the main menu's Track/Add Many Trackers and select the option for Only within the last lasso. Keep "Regular, not ZWT" off. You can lasso and add additional trackers multiple times throughout the shot. Since the new trackers are ZWTs, they will immediately have a 3D location.
- Use the Sort by error option and the hierarchy or graph editors to identify and delete any problematic ZWTs.
Using a Roto'd Auto-Track
This is a more hard-core approach. Since doing the roto step is time-
consuming (even though the roto can be very rough), we recommend it mainly in
situations where the roto is very simple, where the 360 VR camera was rigidly
mounted on a vehicle or the camera has a blind spot, and therefore the mask
doesn't move.
- Run the 3D Tracking Wrapup script to convert back to the full 360 VR view, including updating the trackers.
- Go to the Roto panel, set up animated roto mask(s) to mark as Garbage any camera platform visible in the imagery, the sky, and any other unproductive areas, so that no trackers are generated in the next step. (It's easiest to do this on un-stabilized footage, as described, which is why the Wrapup script is used instead of the Stabilize from Camera Path.)
- On the Features panel, Clear all blips and Blips all frames, then Peel All. This will give you many more tracked 3D features throughout the scene, not just in the area of the original linear track.
- Enter "make all unsolved trackers zwt" into Synthia, to obtain 3D positions for all the tracked features.
- Use the Sort by error option and the hierarchy or graph editors to identify and delete any problematic ZWTs.
- Run the Stabilize from Camera Path script.

Manipulating the Viewing Direction


We expect that typically the camera platform will not be particularly stable,
so that you want to control the nominal viewing direction presented to the user. (If
the user stands still and looks in one direction, do they keep seeing the same
thing, or does that viewing direction change?) Regardless of the stability of the
original footage, the nominal viewing direction is an artistic decision.
If you haven't already (in the Using Add Many Trackers process), you
should run Stabilize from Camera Path to accurately stabilize the imagery to the
world coordinate system. The camera follows its 3D path, but its orientation is
locked to the Front view. A viewer who remains physically looking in one
direction will continue to see the same viewing direction in the VR environment.
In most cases, this is probably the easiest approach for viewers, especially
where the scenery is of most interest.
The other optional artistic choice is to run Align Camera to Path, if you'd
like the final 360 VR product to maintain its orientation based on the camera path
(heading), rather than world coordinate system. The typical use here is to provide
a view out of the front of a plane, for example. The stationary viewer sees where
the plane is heading, even as the plane veers about. This is better when a
dynamic stomach-churning experience is desired. The camera viewing direction
can be determined from heading and tilt, or just heading.
For final tweaks to the horizon line, use the Re-aim stabilized shot script. It
can make an overall tweak to the scene/horizon alignment, but after you run the
script, the Adjust tracks are available so that you can hand-animate the horizon
line over the course of the shot to make any desired fine artistic adjustments, or
to compensate for any long-term drift in the solve.
When you've finalized the path, use the Image Preprocessor's Save
Sequence to write the stabilized version of your 360 VR shot.

Inserting Rendered 3D Objects


With the 3D camera path and position information, you can insert 3D
rendered objects into your footage. SynthEyes provides the workflow to be able
to do that using ordinary 3D applications, without requiring them to support 360
VR directly. The overall process is similar to that for handling lens distortion. (See
the next section if your app has an equirectangular panoramic camera.)
Repeat the following process to render each mesh you want to insert in
your 3D application:
- Place an imported, created, or bounding-box proxy 3D mesh into the (SynthEyes) scene.
- Position the mesh accurately in the 3D environment, using the 3D trackers as references. (Parent it to an animated moving object if needed.)
- Make a copy of the scene file. Or, use Shot/Add shot to add another copy of the original shot, then with the new camera active, select the original camera, and run Copy Stabilization to make the additional shot the same as the original.
- Use the Follow Mesh script to change the shot to a conventional linear version that tightly watches the mesh.
- (Optional) Run the Create Spherical Screen script if you want the background imagery available in your 3D app.
- Export the scene to your 3D app using an appropriate SynthEyes exporter.
- Import the exported scene in your 3D app.
- Use the desired full 3D mesh in your 3D app, shade it, animate it, etc, as needed.
- Render the shot in your 3D app over black with an alpha channel. Do NOT include a flat or spherical screen in the render! Notice that you can use any conventional 3D app; it does not have to have any 360 VR capabilities.
- Back in SynthEyes, use Shot/Change Shot Images with the "Re-distort CGI: switch to Apply mode" setting. Select your rendered imagery. The resulting images could be composited over the original unstabilized plates.
- To obtain a world-stabilized version, run the Unfollow Mesh script, which restores the camera path and stabilizer to produce world-stabilized images.
- If you earlier ran the Align Camera to Path script on the main footage, run it again now with the same settings so that the insert matches.
- Use Save Sequence to store the 360 VR version of the rendered shot.
- Use your favorite compositing application to composite the stabilized 360 VR shot with the 360 VR renders of the meshes to be inserted. These are just 2D images, so no 360 VR capabilities are required in the compositing application.
Overall, working with 360 VR footage involves quite a few steps! They'll
make more sense as you see them in tutorials and work through them yourself.
Using External 360 VR Cameras
You can use a simpler object insertion strategy if your 3D app contains a
360 VR (equirectangular, spherical, panoramic, ...) camera.
In this case, you can export the scene without running the Follow Mesh
script: export the globally-stabilized scene. Then, in the 3D application, carefully
change the imported camera from a perspective camera to the 360VR camera.
Be sure to not to affect the position or orientation of the camera on any frame. If
you can't change the camera's type, consider creating an additional camera and
parenting it exactly to the existing one.
You can then render the scene through the 360VR camera, and the
results can be composited directly with the stabilized footage. Note that you can
do this once for all inserted meshes; you don't have to repeat it for each.
If you want to verify that images rendered this way are correct, do a
Shot/Change Shot Images with the "Other: don't do anything special" option.
Then, on the image preprocessor's Adjust tab, go to frame zero, then control-
right-click the Delta U and Delta Rotation spinners to remove the stabilization.
The rendered footage will then match the meshes in the camera viewport.
The SynthEyes to Blender exporter has some limited support for using
Blender's Cycles panorama camera when exporting shots as above (ie with no
Follow Mesh). It will configure for Cycles and the proper camera. You will have to
set up any mesh texturing and other materials within the Cycles environment
yourself.

Areas of SynthEyes Supporting 360 VR Footage


SynthEyes contains many different areas of code. Some of it doesn't
depend on whether the footage is 360 VR or not, for example simply displaying
an image. Other portions depend critically on where elements in the 3D
environment appear in the image, for example to display a mesh. Some code has
been specially adapted for 360 VR images, while other code cannot or has not.
This is a quick introduction to what has been adapted, and some mention of what
has not. This can only be an approximation, not a complete listing, and items that
aren't affected aren't listed (ie things that are all 2D or all 3D). Note that we've
tried to adapt important elements, or provide ways to accomplish them; less
common or more complex functions are less likely to be adapted.
Adapted for 360 VR: image preprocessor, camera view display of 3D
meshes (including mesh overlay for Save Sequence), zero-weighted-tracker
solving, Add Many processing, tracker cleanup, drop onto mesh, texture
extraction (areas near poles may be problematic), error view.
The perspective view is always a linear (perspective!) image, but when a
360VR image is present, the 360VR image is automatically displayed on a
spherical backdrop.

Tip: Use the normal "Lock" to lock the perspective view to your camera
and show the image on the spherical backdrop. On the right-click/View
menu, turn on "Lock position only". You can then use "Look" mode to
re-aim the camera to explore in all directions. And when you scrub, you
will continue looking in that direction. Do not leave Lock position only
on for normal operations as it may have indeterminate effects. You can
use right-click/Other modes/Field of view to change the perspective
view's field of view.


Settings for the built-in projection screen are accessed via the Perspective
Projection Screen Adjust script. The Screen Distance is the main parameter of
interest. Horizontal and vertical grid sizes are multiplied by 6 and 4 respectively
for the spherical screen. Don't forget to click the Apply Settings button!
NOT adapted for 360 VR: Motion capture processing (mocap from 360VR
cameras?), planar and GeoH tracking, coalesce nearby trackers. While the
supervised and automatic tracking code works fine on 360VR images, it
processes them as regular images, so they do not compensate for the distortion
at the top and bottom of a 360VR image, or the wrapping from left edge to right.
Irrelevant for 360 VR: just a reminder that items such as distortion,
cropping/padding, region of interest, and "scale" in the image preprocessor are
irrelevant for 360 VR footage, and should not be changed!
If there are items that seem crucial and well-suited for 360 VR adaptation,
you can bring them to our attention.

360 VR Scripts Reference


There are a number of scripts provided for 360 VR processing, either to
perform or simplify it. The scripts may be found in "360 VR" submenus of the
Scripts, File/Import, and File/Export menus. They are listed below.
3D Tracking Setup
Use this script to prepare a 360 VR shot for linearized 3-D tracking. You'll
configure the desired linear field of view for the conventional camera, the
direction it will point within the image, and the resolution and aspect ratio of the
linearized image. Note that all these controls are available elsewhere in
SynthEyes; this script simply consolidates them to save time. You can access
them directly if you need to animate the camera direction or field of view during
the shot.
3D Tracking Wrap Unstabilized
You run this script after linearizing a shot and solving it without assigning a
coordinate system. This script converts back to the 360 VR mode such that the
3-D solve corresponds to the original unstabilized shot.
3D Tracking Wrapup
This script changes a shot, linearized for 3D tracking, back to 360 VR
mode after the completion of 3D tracking. It updates the trackers to correspond to
the 360 VR view.
After running this script, you can do roto-masking to block out the camera
mount, then re-run autotracking on the entire shot to generate zero-weighted
trackers, and finally run Stabilize from Camera Path. Alternatively you can run
Stabilize from Camera Path immediately.


Align Camera to Path


After a Stabilize from Camera Path operation (see below), the orientation
of the VR sphere is fixed in space: the viewer will always perceive "north" (and
every other direction) to correspond to a particular fixed direction in their viewing
environment. That world-oriented view makes it easy for the viewer to
understand and follow particular items in their environment.
Alternatively, if the camera is mounted on a moving vehicle, you may wish
to portray a "front looking" (cockpit), "side looking" (bus), or downwards (satellite)
view. You can achieve that using the Align Camera to Path script.
You run this script late in your processing and effects process, upon a
world-oriented scene. (The camera path should be filtered already, if it will be.)
The script re-keys the camera orientation relative to the camera path. You can
set up a fixed pan/tilt offset to the path, for example 0,0 is straight ahead; 90,0 is
off to the left; 180,0 is backwards; 0,-45 is forwards but down a bit.
You have the option to have the path create only the heading (pan), or
both heading and tilt. By default only the heading is created, and the horizon
stays level. But if you wish to better portray a plane diving towards the ground,
say, you can have the tilt follow the path as well.
To minimize disruptive jitter, path keys are generated only so often, with
interpolation between them. And the path's direction is determined over the
course of several frames (typically +/-3), rather than just the adjacent frame.
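As a conceptual illustration of deriving a heading from the path over several frames, consider the sketch below. The axis convention (Z up), the frame span, and the function name are assumptions for illustration; the script's actual computation is not reproduced here.

    import math

    # Estimate a heading (pan) angle from camera positions several frames
    # apart, rather than adjacent frames, to reduce jitter. Illustrative only.
    def path_heading(positions, frame, span=3):
        """positions: list of (x, y, z) camera positions per frame; Z up assumed."""
        a = positions[max(frame - span, 0)]
        b = positions[min(frame + span, len(positions) - 1)]
        dx, dy = b[0] - a[0], b[1] - a[1]
        return math.degrees(math.atan2(dy, dx))   # heading in degrees

    # A made-up, gently curving path:
    path = [(t * 0.1, math.sin(t * 0.2), 1.5) for t in range(48)]
    print(path_heading(path, 24))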
This script can also be run on renders of meshes that are to be inserted,
once they have been world-stabilized, so that they can match the path-alignment
of the main footage.
AfterEffects 360VR Stabilization
Exports a javascript that creates a comp to stabilize the footage within
AfterEffects, using the data from SynthEyes. When using this, you don't have to
render the footage from SynthEyes. You can do all your (2D) compositing inside
of AfterEffects instead. This export relies on an AfterEffects plugin included with
SynthEyes that can perform the required image manipulation. For installation
instructions, see the section on Installing for AfterEffects CC/CS6 in the main
Exporting section.
Copy ONLY Adjustment Tracks
This script copies ONLY the selected camera's shot's image preprocessor
adjustment tracks and camera keys to the current active object's shot. It does not
copy the (effect of) the stabilization data. This is useful for copying a setup that
has not been stabilized, since it doesn't produce a key for each frame.
Copy Stabilization
This script copies the selected camera's shot's stabilization (including
adjustments) and camera keys to the current active object's shot. This is used to
create additional copies of a stabilized and 3D tracked shot in order to create a
linearized subwindow that tracks with a specific mesh. The resulting subwindow
can be exported to another 3D package, a render produced, and the resulting
render warped back into the 360 VR image.
Create Spherical Screen
This script creates a physical spherical placeholder mesh that represents
the spherical shot's "image plane." The generated sphere (more specifically a
moving object that holds it) is animated to track along with the camera and is
textured with the shot imagery.
You have the option to have the sphere display the as-stabilized footage
(in which the sphere does not rotate), or the un-stabilized footage, where the
sphere rotates to show the effect of the 360 VR image stabilizer. The first option
is useful when you are still working inside of SynthEyes, since the stabilizer
remains active for use in the camera view. The second option is useful when you
will export to other applications, and you want to have those applications use the
unstabilized original footage. (Note that the only way to have unstabilized footage
available is to disable the stabilizer.)
If you write the stabilized footage from SynthEyes for use in downstream
applications, then you can use the first option for export.
A few details: spherical screens are not shown in the camera view, to
avoid mucking it up. Also, in the first case (for use in SynthEyes), the sphere and the
moving object that holds it are configured to not be exported by default. So you
must make both Exportable on the 3-D panel.
If you're really tricky, you can duplicate a shot and copy the stabilization to
it, then you can have one shot/sphere with stabilized footage, and one without.
But it's up to you to keep track of what's what!
Export Stabilization
Exports stabilization data in a simple text format that can be reread by the
corresponding importer, or potentially by other applications or your own scripts.
The data contains the net adjusted stabilization data (as 3 rotations), the
corresponding field of view, three camera positions, and three camera
orientations. This is enough to reproduce stabilization and 3D tracks for
linearization, ie exporting and importing is equivalent to the "Copy Stabilization"
script. For details of the format, see the script itself.
Fine Tune Rotation
This script is an experiment and cross-check, an approximation of a full
360VR optimization. It can be run after a shot has been 3D solved, converted
back to a full 360 VR shot, and ZWTs added. The script computes an updated
rotation for each frame of the shot which reduces the overall errors. It can be run
multiple times; each time it reports what the initial error was and what the
maximum change was. Those reduce as you iterate. Empirically, with a decent
initial solve, this script has little to do, so there's little point running it.


Follow Mesh
This script re-aims and re-configures the current shot so that the camera
follows (exactly watches) a specific mesh or meshes that you pre-select before running
this script. This is part of the workflow that allows renders from other 3D
applications to be smoothly integrated with 360VR shots, even if those apps do
not have 360 VR capabilities.
The script offers control over the aspect ratio of the image produced: it can
either be chosen automatically (recommended) or you can choose a specific
value. You can choose whether to use a worst-case (widest) camera field of
view, or to have the generated camera field of view exactly track the size of the
mesh as it approaches and/or recedes. You also control the amount of margin
around the mesh. It's good to have some safe area, and also a more substantial
margin is convenient when evaluating your tracking.
While you can select multiple meshes, they should be close together. If
they are spread apart, separate renders are likely a better idea. (And it's likely to
not work at all if meshes on opposite sides of the camera are involved!)
Note that the time required for Follow Mesh depends on the complexity of
the mesh(es) being followed. So don't select meshes that don't matter to
determining the area to be rendered, such as interior meshes. If you need to, you
can replace the mesh with a lower-resolution proxy for this operation, even a
simple bounding box.
Import Stabilization
Imports the stabilization data written by the Export Stabilization script,
which includes stabilization data (3 rotations), and camera field of view, position,
and rotation angles. You can select which of this data you want to actually read
from the file (vs being ignored). For convenience, it also gives you the option to
set the VR mode to Remove once the file has been read.
Re-aim stabilized shot
After you've stabilized a shot, you may want to "re-aim" it so that a
somewhat different direction is "true north" (centered) in the stabilized image.
Nominally you could achieve that by picking a somewhat different coordinate
system originally, but this script allows you to tweak it later. You can pan
(usually) and also tilt and roll the stabilized shot (the latter two bend the horizon).
The trackers' 2D locations are updated to maintain consistency with the imagery,
and optionally the 3D positions and camera path can be updated to maintain the
true north orientation (try it both ways to see the difference).
The Re-aim script also consolidates the adjustments into the stabilizer
portion of the image preprocessor, leaving the Adjust tracks subsequently free,
so that you can hand-animate the horizon line throughout the shot to compensate
for any drift. Don't do that if you are planning to add 3D effects, though, because
that added hand-animation will break the match!


Stabilize from Camera Path


This script is used after a linearized shot has been tracked, solved, and a
coordinate system applied to it. Either immediately, or after running 3D Tracking
Wrapup script, this script rigidly stabilizes the shot to the overall world coordinate
system (it changes to 360VR Present mode if needed). With global stabilization,
the sun will always be precisely fixed at a certain location in the resulting
stabilized images (determined by your coordinate system setup), no matter what
the original camera platform did. After the script has been run, the camera will
always be in the default Front-facing orientation, staring straight ahead, so that
its local coordinate system orientation matches the world coordinate system.
Note that the new world-coordinate stabilization bakes in any existing
stabilization or camera animation.
Unfollow Mesh
This script is run during the workflow for rendering a mesh externally. After
the mesh has been rendered, the images are loaded back into SynthEyes for
conversion to 360VR format using Shot/Change Shot Images with the Re-distort
option. The resulting images then match up with the original unstabilized images.
The Unfollow Mesh script is then run; it reloads the stabilizer from the
camera path so that world-stabilized 360 VR images (and matching camera path)
result. The Align Camera to Path script can be re-run at this time, if that was
done for the original footage.

Motion Capture and Face Tracking
SynthEyes offers the exciting capability to do full body and facial motion
capture using conventional video or film cameras.

STOP! Unless you know how to do supervised tracking and
understand moving-object tracking, you will not be able to do motion
tracking. The material here builds upon that earlier material; it is not
repeated here because it would be exactly that, a repetition.

First, why and when is motion capture necessary? The moving-object
tracking discussed previously is very effective for tracking a head, when the face
is not doing all that much, or when trackable points have been added in places
that dont move with respect to one another (forehead, jaws, nose). The moving-
object mode is good for making animals talk, for example. By contrast, motion
capture is used when the motion of the moving features is to be determined, and
will then be applied to an animated character. For example, use motion capture
of an actor reading a script to apply the same expressions to an animated
character. Moving-object tracking requires only one camera, while motion
capture requires several calibrated cameras.

Note: The Geometric Hierarchy (GeoH) Tracking capability (described
in the separate manual of that name, see the Help menu), which
requires only one camera, can be used in some cases instead of
motion capture, and in other cases after motion capture tracking, to
produce "joint angles" for export, instead of point clouds. There are
references to that here, plus see the GeoH tracking manual.

Second, we need to establish a few very important points: this is not the
kind of capability that you can learn on the fly as you do that important shoot,
with the client breathing down your neck. This is not the kind of thing for which
you can expect to glance at this manual for a few minutes, and be a pro. Your
head will explode. This is not the sort of thing you can expect to apply to some
musty old archival footage, or using that old VHS camera at night in front of a
flickering fireplace. This is not something where you can set up a shoot for a
couple of days, leave it around with small children or animals climbing on it, and
get anything usable whatsoever. This is not the sort of thing where you can take
a SynthEyes export into your animation software, and expect all your work to be
done, with just a quick render to come. And this is not the sort of thing that is
going to produce the results of a $250,000 custom full body motion capture
studio with 25 cameras.
With all those dire warnings out of the way, what is the good news? If you
do your homework, do your experimentation ahead of time, set up technically
solid cameras and lighting, read the SynthEyes manual so you have a fair
understanding what the SynthEyes software is doing, and understand your 3-D
package well enough to set up your character or face rigging, you should be able
to get excellent results.
In this manual, we'll work through a sample facial capture session. The
techniques and issues are the same for full body capture, though of course the
tracking marks and overall camera setup for body capture must be larger and
more complex.

Introduction
To perform motion capture of faces or bodies, you will need at least two
cameras trained on the performer from different angles. Since the performer's
head or limbs are rotating, the tracking features may rotate out of view of the first
two cameras, so you may need additional cameras to shoot more views from
behind the actor.

Tip: if you can get the field of view and accuracy you need with only
two cameras, that will make the job simpler, as you can use stereo
features, which are simpler and faster because only two cameras are
involved.

The fields of view of the cameras must be large enough to encompass the
entire motion that the actor will perform, without the cameras tracking the
performer (OK, experts can use SynthEyes for motion capture even when the
cameras move, but only with care).
You will need to perform a calibration process ahead of time, to determine
the exact position and orientation of the cameras with respect to one another
(assuming they are not moving). We'll show you one way to achieve this, using
some specialized but inexpensive gear.

Very Important: You'll have to ensure that nobody knocks the cameras
out of calibration while you shoot calibration or live action
footage, or between takes.

You'll need to be able to resynchronize the footage of all the cameras in
post. We'll tell you one way to do that.
Generally the performer will have tracker markers attached, to ensure the
best possible and most reliable data capture. The exception to this would be if
one of the camera views must also be used as part of the final shot, for example,
a talking head that will have an extreme helmet added. In this case, markers can
be used where they will be hidden by the added effect, and in locations not
permitting trackers, either natural facial features can be used (HD or film
source!), or markers can be used and removed as an additional effect.
After you solve the calibration and tracking in SynthEyes, you will wind up
with a collection of trajectories showing the path through space of each individual
feature. When you do moving-object tracking, the trackers are all rigidly
connected to one another, but in motion capture, each tracker follows its own
individual path.
You will bring all these individual paths into your animation package, and
will need to set up a rigging system that makes your character move in response
to the tracker paths. That rigging might consist of expressions, Look At
controllers, etc.; it's up to you and your animation package.
Alternatively, you can set up a rig in SynthEyes using the GeoH tracking
facilities. By attaching your motion-capture trackers to it, the rig will be animated
to match up with the trackers in 3D. You can then export the rig in BVH format,
and import it into character animation software.

Comparison of Motion Capture and GeoH Tracking


As just described, (hybrid) GeoH tracking can be used to convert motion
capture data to joint angles. But GeoH tracking can also be used as a direct
alternative to motion capture, to track secondary motions (deformations) upon
the primary mesh, as long as those secondary motions have limited degrees of
freedom.
Here's a quick comparison of different tracking types.
Motion Capture. Multiple pre-calibrated cameras with known FOVs.
Individual features being tracked can move arbitrarily and completely
independently, producing an animated point cloud (to which object locations can
be fitted).
GeoH Tracking. Single camera with known FOV. There must be an
excellent existing 3D mesh for the object(s) being tracked. Secondary tracking for
sub-objects such as jaws, arms, hands, car doors, etc maintains exact defined
relationships between the child and parent objects. The limited degrees of motion
between the child and parents makes extracting more information possible from
the single camera view.
Regular Object Tracking. Single camera with potentially unknown FOV.
Does not require a pre-existing mesh. If there are multiple objects or subobjects,
they must be tracked separately, and there is no specific relationship maintained
between them. (It's crucial to get the scaling right for each one.)

Camera Types
Since each camera's field of view must encompass the entire
performance (unless there are many overlapping cameras), at any time the actor
is usually a small portion of the frame. This makes progressive DV, HD, or film
source material strongly suggested.
Progressive-scan cameras are strongly recommended, to avoid the factor
of two loss of vertical resolution due to interlacing. This is especially important
since the tracking markers are typically small and can slip between scan lines.


While it may make operations simpler, the cameras do not have to be the
same kind, have the same aspect ratio, or have the same frame rate.
Lens distortion will substantially complicate calibration and processing. To
minimize distortion, use high-quality lenses, and do not operate them near their
maximum field of view, where distortion is largest. Do not try to squeeze into the
smallest possible studio space.

Camera Placement
The camera placements must address two opposing factors: on one hand, the
cameras should be far apart, to produce a large parallax disparity with good
depth perception; on the other, the cameras should be close together, so that
they can simultaneously observe as many trackers as possible.
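The usual two-camera (stereo) depth-accuracy rule of thumb illustrates why the spacing matters; the sketch below uses a simple pinhole approximation with made-up numbers, not anything SynthEyes computes.

    # Approximate depth uncertainty for a triangulated point: it grows with
    # the square of the distance and shrinks as the baseline (camera
    # separation) or focal length grows. Pinhole approximation, made-up numbers.
    def depth_error(depth, baseline, focal_px, match_error_px=0.25):
        return (depth ** 2) * match_error_px / (focal_px * baseline)

    # Doubling the baseline roughly halves the depth uncertainty:
    print(depth_error(2.0, 0.5, 2000))   # ~0.001 (same units as depth/baseline)
    print(depth_error(2.0, 1.0, 2000))   # ~0.0005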
You'll probably need to experiment with placement to gain experience,
keeping in mind the performance to be delivered.
Cameras do not have to be placed in any special pattern. If the
performance warrants it, you might want coverage from up above, or down
below.
If any cameras will move during the performance, they will need a visible
set of stationary tracking markers, to recover their trajectory in the usual fashion.
This will reduce accuracy compared to a carefully calibrated stationary camera.

Lighting
Lighting should be sufficient to keep the markers well illuminated, avoiding
shadowing. The lighting should be enough to be able to keep the shutter time of
the cameras as low as possible, consistent with good image quality.

Calibration Requirements and Fixturing


In order for motion tracking footage to be solved, the camera positions,
orientations, and fields of view must be determined, independent of the live
footage, as accurately as possible.
To do this, we will use a process based on moving-object tracking. A
calibration object is moved in the field of view of all the cameras, and tracked
simultaneously.
To get the most data fastest and easiest, we constructed a prop we call a
"porcupine" out of a 4-inch Styrofoam ball, 20-gauge plant stem wires, and small 7
mm colored pom-pom balls, all obtained from a local craft shop for under $5.
Lengths of wire were cut to varying lengths, stuck into the ball, and a pom-pom
glued to the end using a hot glue gun. Retrospectively, it would have been
cleverer to space two balls along the support wire as well, to help set up a
coordinate system.


The porcupine is hung by a support wire in the location of the performer's
head, then rotated as it is recorded simultaneously from each camera. The
porcupine's colored pom-poms can be viewed virtually all the time, even as they
spin around to the back, except for the occasional occlusion.
Similar fixtures can be built for larger motion capture scenarios, perhaps
using dolly track to carry a wire frame. It is important that the individual trackable
features on the fixture not move with respect to one another: their rigidity is
required for the standard object tracking.
The path of the calibration fixture does not particularly matter.

Camera Synchronization
The timing relationship between the different cameras must be
established. Ideally, all the cameras would all be gen-locked together, snapping
each image at exactly the same time. Instead, there are a variety of possibilities
which can be arranged and communicated to SynthEyes during the setup
process.

Motion capture has a special solver mode on the Solver Panel:
individual mocap. In this mode, the second dropdown list changes from a
directional hint to control camera synchronization.


If the cameras are all video cameras, they can be gen-locked together to
all take pictures identically. This situation is called Sync Locked.

If you have a collection of video cameras, they will all take pictures at
exactly the same (crystal-controlled) rate. However, one camera may always be
taking pictures a bit before the other, and a third camera may always be taking
pictures at yet a different time than the other two. The option is Crystal Sync.
If you have a film camera, it might run a little more or a little less than 24
fps, not particularly synchronized to anything. This will be referred to as Loose
Sync.
In a capture setup with multiple cameras, one can always be considered
to be Sync Locked, and serve as a reference. If it is a video camera, other video
cameras are in Crystal Sync, and any film camera would be Loose Sync.
If you have a film camera that will be used in the final shot, it should be
considered to be the sync reference, with Sync Locked, and any other cameras
are in Loose Sync.
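The timing relationships can be pictured as below: a crystal-sync camera runs at exactly the same rate as the reference but with a constant sub-frame offset, while a loose-sync camera runs at its own slightly different rate. All the numbers are made up for illustration.

    # Frame capture times (seconds) for the three synchronization situations
    # described above; rates and offsets are made-up illustrative values.
    def frame_time(frame, rate, offset=0.0):
        return frame / rate + offset

    ref   = [frame_time(f, 24.0)        for f in range(5)]  # sync-locked reference
    video = [frame_time(f, 24.0, 0.013) for f in range(5)]  # crystal sync: same rate, fixed offset
    film  = [frame_time(f, 23.976)      for f in range(5)]  # loose sync: slightly different rate
    print(ref, video, film, sep="\n")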
The beginning and end of each camera's view of the calibration sequence
and the performance sequence must be identified to the nearest frame. This can
be achieved with a clapper board or electronic slate. The low-budget approach is
to use a flashlight or laser pointer flash to mark the beginning and end of the
shot.

Camera Calibration Process


We're ready to start the camera calibration process, using the two shot
sequences LeftCalibSeq and RightCalibSeq. You can start SynthEyes and do a
File/New for the left shot, and then Add Shot to bring in the second. Open both
with Interlace=Yes, as unfortunately both shots are interlaced. Even though these
are moving-object shots, for calibration they will be solved as moving-camera
shots.
You can see from these shots how the timing calibration was carried out.
The shots were cropped right before the beginning of the starting flash, and right
after the ending flash, to make it obvious what had been done. Normally, you
should crop after the starting flash, and before the ending flash.
On your own shots, you can use the Image Preprocessing panel's Region-
of-interest capability to reduce memory consumption to help handle long shots
from multiple cameras.


You should supervise-track a substantial fraction of the pom-poms in each
camera view; you can then solve each camera to obtain a path of the camera
appearing to orbit the stationary pom-pom.
Next, we will need to set up a set of links between corresponding trackers
in the two shots. The links must always be on the Camera02 trackers, to a
Camera01 tracker. This can be achieved at least four different ways.
Matching Plan A: Temporary Alignment
This is probably easiest, and we may offer a script to do the grunt work in
the future.
Begin by assigning a temporary coordinate system for each camera, using
the same pom-poms and ordering for each camera. It is most useful to keep the
porcupine axis upright (which is where pom-poms along the support wire would
come in useful, if available); in this shot three at the very bottom of the porcupine
were suitable.
With matching constraints for each camera, when you re-solve, you will
obtain matching pairs of tracker points, one from each camera, located very
close to one another.

Now, with the Coordinate System panel open, Camera02 active,
and the Top view selected, you can click on each of Camera02's tracker points,
and then alt-click (or command-click) on the corresponding Camera01 point,
setting up all the links.
As you complete the linking, you should remove the initial temporary
constraints from Camera02.
Matching Plan B: Side by Side
In this plan, you can use the Camera & Perspective viewport
configuration. Make Camera01 active, and in the perspective window, right-click
and Lock to current camera with Camera01s imagery, then make Camera02
active for the camera view. Now camera and perspective views show the two
shots simultaneously. (Experts: you can open multiple perspective windows and
configure each for a different shot. You can also freeze a perspective window on
a particular frame, then use the key accelerators to switch frame as needed.)

You can now click the trackers in the camera(02) view, and alt-click the
matching (01) tracker in the perspective window, establishing the links.

Reminder: The coordinate system control panel must be open
for linking. This will take a little mental rotation to establish the right
correspondences; the colors of the various pom-poms will help.


Matching Plan C: Multiple Camera Views


This is basically the same as Plan B, except that you can use multiple
camera views instead, one for each camera. Note that for more than two
cameras, if there isn't any stereo pair, you can still use LCamera and RCamera;
just use the right-click menu to select the view to be displayed.
This approach is a bit easier in that you only have one kind of view to
worry about, but a bit less flexible because the perspective view has some
additional tricks to help you find the right match, such as changing the frame
number or unlocking to examine the 3D view.
Matching Plan D: Cross Link by Name
This plan is probably more trouble than it is worth for calibration, but can be
an excellent choice for the actual shots. You assign names to each of the pom-
poms, so that the names differ only by the first character, then use the
Track/Cross-Link by Name menu item to establish links.
It is a bit of a pain to come up with different names for the pom-poms, and
do it identically for the two views, but this might be more reasonable for other
calibration scenarios where it is more obvious which point is which.
Completing the Calibration
We're now ready to complete the calibration process. Change Camera02
to Indirectly solving mode on the Solver panel.


Note: the initial position of Camera01 is going to stay fixed, controlling the
overall positions of all the cameras. If you want it in some particular location, you
can remove the constraints from it, reset its path from the 3-D panel, then move it
around to a desired location.
Solve the shot, and you have two orbiting cameras remaining at a fixed
relative orientation as they orbit.
Run the Motion Capture Camera Calibration script from the Script
menu, and the orbits will be squished down to single locations. Camera01 will be
stationary at its initial location, and Camera02 will be jittering around another
location, showing the stability of the offset between the two. The first frame of
Camera02s position is actually an average relative position over the entire shot;
it is this location we will later use.
You should save this calibration scene file (porcupine.sni); it will be the
starting point for tracking the real footage. The calibration script also produces a
script_output.txt file in a user-specific folder that lists the calibration data.

Body and Facial Tracking Marks


Markers will make tracking faster, easier, and more accurate. On the
face, markers might be little Avery dots from an office supply store, magic
marker spots, pom-poms with rubber cement(?), mascara, or grease paint. Note
that small colored dots tend to lose their coloration in video images, especially
with motion blur. Make sure there is a luminance difference. Single-pixel-sized
spots are less accurate than those that are several pixels across.
Markers should be placed on the face in locations that reflect the
underlying musculature and the facial rigging they must drive. Be sure to include
markers on comparatively stationary parts of the head.
For body tracking, a typical approach is to put the performer in a black
outfit (such as UnderArmour), and attach table-tennis balls as tracking features
onto the joints. To achieve enough visibility, placing balls on both the top and
bottom of the elbow may be necessary. Because the markers must be placed on
the outside of the body, away from the true joint locations, character rigging will
have to take this into account.

Preparation for Two-Dimensional Tracking


We're ready to begin tracking the actual performance footage. Open the
final calibration scene file. Open the 3-D panel. For each camera, select the
camera in the select-by-name dropdown list. Then hit Blast and answer yes to
store the field of view data as well. Then, hit Reset twice, answering yes to
remove keys from the field of view track also. The result of this little dance is to
take the solved camera paths (as modified by the script), and make them the
initial position and orientation for each camera, with no animation (since they
aren't actually moving).
Next, replace the shot for each camera with LeftFaceSeq and
RightFaceSeq. Again, these shots have been cropped based on the light flashes,
which would normally be removed completely. Set the End Frame for each shot
to its maximum possible. If necessary, use an animated ROI on the Imaging
Preprocessing panel so that you can keep both shots in RAM simultaneously. Hit
Control-A and delete to delete all the old trackers. Set each Lens to Known to
lock the field of view, and set the solving mode of each camera to Disabled, since
the cameras are fixed at their calibrated locations.
We need a placeholder object to hold all the individual trackers. Create a
moving object, Object01, for Camera01, then a moving object, Object02, for
Camera02. On the Solving Panel, set Object01 and Object02 to the Individual
mocap solving mode, and set the synchronization mode right below that.

Two-Dimensional Tracking
You can now track both shots, creating the trackers into Object01 and
Object02 for the respective shots. If you don't track all the markers, at least be
sure to track a given marker either in both shots, or none, as a half-tracked
marker will not help. The Hand-Held: Use Others mode may be helpful here for
the rapid facial motions. Frequent keying will be necessary when the motion
causes motion blur to appear and disappear (a lot of uniform light and short
shutter time will minimize this).

Linking the Shots


After completing the tracking, you must set up links. The easiest approach
will probably be to set up side-by-side camera and perspective views. Again, you
should link the Object02 trackers to the Object01 trackers, not the other way
around.
Doing the linking by name can also be helpful, since the trackers should
have fairly obvious names such as Nose or Left Inner Eyebrow, etc.

Solving

You're ready to solve, and the Solve step should be very routine,
producing paths for each of the linked trackers. The final file is facetrk.sni.

Fine Point! Normally SynthEyes will produce a position for each
tracker that has an equal amount of error as seen from each camera.
That's the best choice for general motion capture. However, if you
have one primary camera where you want an exact match, and several
secondary cameras to produce 3-D, you can adjust the Overall Weight
of the reference camera, on the Solver panel, to 10.0 or a similar
large value. Adjust the Overall Weight of the secondary cameras down
to 0.1, for example. The resulting solution will be much more accurate
for Camera01, and less so for the secondary cameras.

Afterwards, you can start checking on the trackers. You can scrub through
the shot in the perspective window, orbiting around the face. You can check the
error curves and XYZ paths in the graph editor. By switching to Sort by Error
mode, you can sequence through the trackers starting from those with the
highest error.

Exports & Rigging


When you solve a scene with motion capture trackers, each of them will
have a 3D key frame on every frame of the shot, animating the tracker's path.
The result is an animated point cloud.
Generally you will want to be able to apply the motion capture data to a rig
in your 3D animation package, to make the matching rig move the same way.
The rig's dimensions may match the original character being tracked, or be
different. You can do that using SynthEyes, or your animation package.
Using SynthEyes, you create a rig and cause it to match the motion of
your motion capture trackers. Those are done using the Geometric Hierarchy
features of SynthEyes. Then, you export using the "Biovision" BVH format, which
is a good way to communicate rig animation between packages, as it contains
only the joint angles of the character. Finally, in your animation package, you can
import the BVH data to your character. For more information, see the section
"GeoH Track with 3D Motion Capture Trackers" in the Geometric Hierarchy
Tracking manual.
Using your animation package, it is up to you to determine a method of
rigging your character to take advantage of the animated tracker paths. The
method chosen will depend on your character and animation software package. It
is likely you will need some expressions (formulas) and some Look-At controls.
For full-body motion capture, you will need to take into account the offsets from
the tracking markers (i.e., balls) to the actual joint locations.

Modeling
You can use the calculated point locations to build models. However, the
animation of the vertices will not be carried forward into the meshes you build.
Instead, when you do a Convert to Mesh or Assemble Mesh operation in the
perspective window, the current tracker locations are frozen on that frame.
If desired, you can repeat the object-building process on different frames
to build up a collection of morph-target meshes.
Since the tracker/vertex linkage information is stored, you can use the
MDD Motion Designer Mesh Animation export script to export animated meshes.
You must export the mesh itself, typically in obj format. Then in your target
application, you will read the mesh and apply the MDD data as an animated
deformer. Note that it is crucial that the exact same OBJ mesh model be used as
was used to generate the MDD file.

Single-Camera Motion Capture


In carefully controlled shoots, you can do a type of facial motion capture
using only a single camera.
You need to have an accurate and complete head model, but one
that is purposefully devoid of expression.
You must be able to find enough trackers that are rigidly connected
(not animated independently) to get a rigid-body moving object
solve for the motion of the overall pre-existing head model.
Add a second disabled moving object and add additional trackers
on the independently-moving facial features that you want to track
to get expressions for.
Run the Animate Trackers by Mesh Projection script with the head
model selected, while the disabled object (and its trackers) are
active in the viewport, creating motion-capture 3D tracks for those
trackers on the surface of the mesh.


Method #1: Create a new triangulated mesh using the motion-capture
trackers, for example using Assemble mode or Convert to
Mesh and Triangulate.
Method #2: Link the trackers to specific vertices that you want
animated on the reference mesh. Any vertex that is not linked will
not move!
Export the mesh as an OBJ file.
Export the mesh animation using the MDD Mesh Animation script.
In your target animation package, import the OBJ file, and apply the
MDD file as an animated deformation.
Note that it is very important to re-export the mesh if you use method #2:
you must always add the MDD deformation to exactly the mesh that was used in
SynthEyes, since the vertex numbering will not match exactly. SynthEyes does
renumber and adjust vertices as needed when it reads OBJs, to match its
internal vertex processing pipeline, especially if there is normal or texture
coordinate data.
You may also find it helpful to export the 3D path of a tracker by itself,
using the Tracker 3D on Mesh exporter. You can use that data to drive bones or
other rigging if desired.

Light Tracking
After you have solved the scene, you can optionally use SynthEyes to
calculate the position of, or at least direction to, principal lights affecting the
scene. You might determine the location of a spotlight on the set, or the direction
to the sun outdoors. You can also use SynthEyes to track fluctuations in on-set
lighting due especially to explosions or flashing lights, which can otherwise wreak
havoc for matching. In these cases, knowing the lighting will help you match your
computer-graphic scene to the live footage.
SynthEyes can use either shadows or highlights to locate the lights. For
shadow tracking, you must track both the object casting the shadow, and the
shadow itself, determining a 3-D location for each. For highlight tracking, you will
track a moving highlight (mainly in 2-D), and you must create a 3-D mesh
(generally from an external modeling application, or a SynthEyes 3-D primitive)
that exactly matches the geometry on which the highlight is reflected.

Light Illumination
The basic approach in light illumination tracking is to put one or more
trackers into the scene, then have the average intensity level calculated frame by
frame and stored. This process is performed by the Set Illumination from
Trackers script.
The calculated illumination data can be stored on a light, so it can
illuminate all the objects in a downstream scene; on a mesh, so the mesh's color
can directly match the measured color; or on the tracker itself, which is most
useful as a way to transfer a number of measured colors to downstream
applications.
Trackers can be placed anywhere in the scene, but a flat white wall is a
good choice and corresponds to typical white-balancing techniques. If a
substantially-colored location is desired, you can have a monochrome light
intensity calculated. Since a flat wall provides nothing to track, you can use offset
tracking, or alternatively if there is a mesh there, use Drop on Mesh to put a 3D
seed point there, and that point can be used to determine the location to be
examined. The tracker's size is the area measured (and can be animated).
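As a conceptual sketch only (this is not the Set Illumination from Trackers script, and all the names below are made up for illustration), the per-frame measurement amounts to averaging the pixels inside the tracker's area:

    import numpy as np

    def measure_illumination(frame_rgb, center_xy, half_size):
        # frame_rgb: (height, width, 3) image array for one frame
        # center_xy: the tracker's pixel location on this frame
        # half_size: half the width of the tracker's measured area, in pixels
        x, y = int(round(center_xy[0])), int(round(center_xy[1]))
        patch = frame_rgb[y - half_size:y + half_size + 1,
                          x - half_size:x + half_size + 1]
        return patch.reshape(-1, 3).mean(axis=0)   # average R, G, B over the area

    # Averaging the values from all selected trackers on a frame gives the
    # illumination key for that frame.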
If you are calculating color for a mesh, you should put the tracker(s) on a
relatively flat area of the mesh. SynthEyes can calculate the average interior
color of a planar tracker as affected by in-plane masks, so to get fine control over
the measured area, you can track a larger planar tracker, then before running the
script, add a specific in-plane mask for that area. (Lock the tracker and don't
retrack without deleting that mask.)
The selected trackers are averaged to create the illumination track. When
the 2D tracker location is examined, the tracker can be used only when it has a
valid 2D location. When the 3D location is used, the 3D location can be used
throughout the shot (except when it goes offscreen), which might be undesirable
if the tracker location is occluded by something else. Accordingly, a "3D masked
by enable" option allows the tracker's Enable track to control whether or not the
tracker's position is examined, even if the tracker has not been tracked, because
its 3D location is being used to determine where to analyze the lighting.
Note that while trackers do not have to last for the duration of the shot,
there will be consistency issues when trackers appear or disappear. If there
aren't any usable trackers on a given frame, no illumination key will be generated
on that frame.
The script provides a variety of options to offset and re-scale the data, to
remove black levels and to increase the illumination level contrast.
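The exact parameter names in the script aren't given here, but an offset-and-rescale of this kind generally amounts to the following, with black_level and gain standing in as hypothetical names:

    adjusted = (measured - black_level) * gain   # larger gain = more contrast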
You can control whether illumination keys are generated for the entire
shot, the playback range, or a single frame, and can have keys generated on
every frame, every other frame, every fifth frame, etc, depending on how rapidly
the light levels change.
You can use the graph editor or the color swatch on the 3-D or lighting
panels to edit the generated animated illumination level curves, including to
change from linear to spline interpolation. Note that there is a separate static
color for a light, mesh, or tracker that is shown on swatches elsewhere in
SynthEyes, such as the graph editor and hierarchy view.
Once the illumination curve has been created, it can be exported via
selected exports, most notably Filmbox (FBX), which is generally the most
capable and modern export, and Export/Illumination as text to obtain a plain text
file.

Lights from Shadows


Consider the two supervised trackers in the image below from the BigBall
example scene:


One tracks the spout of a teacup, the other tracks the spout's shadow on
the table. After solving the scene, we have the 3-D position of both. The
procedure to locate the light in this situation is as follows.

Switch to the Lighting Control Panel. Click the New Light button,
then the New Ray button. In the camera view, click on the spout tracker, then on
the tracker for the spout's shadow.
We could turn on the Far-away light checkbox, if the light were the sun, so
that the direction of the light is the same everywhere in the scene. Instead, we'll
leave the checkbox off and set the distance spinner to 100, moving the
light away that distance from the target.
The light will now be positioned so that it would cast a shadow from the
one tracker to the next; you can see it in the 3-D views. The lighting on any mesh
objects in the scene changes to reflect this light position, and you see the
shadows in the perspective view. You can repeat this process for the second
light, since the spout casts two shadows. This scene is Teacup.sni.
If the scene contained two different teapot-type setups due to the same
single light, you can place two rays on one light, and the 3-D position of the light
will be triangulated, without any need for a distance.
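SynthEyes's internal computation isn't spelled out here, but the underlying geometry of the single-ray case is simple: the light must lie on the ray running from the shadow through the tracker casting it, and the Distance spinner fixes how far along that ray it sits. A rough sketch with hypothetical names (not SynthEyes API calls):

    import numpy as np

    def light_from_shadow_ray(caster_pos, shadow_pos, distance):
        # caster_pos: solved 3-D position of the tracker casting the shadow (the spout)
        # shadow_pos: solved 3-D position of the tracker on the shadow itself
        # distance:   the Distance spinner value, e.g. 100 in the teacup example
        direction = caster_pos - shadow_pos
        direction = direction / np.linalg.norm(direction)
        # place the light along the shadow-to-caster ray, the given distance out
        return shadow_pos + distance * direction

    # With two rays on one light (two shadows cast by the same light), the
    # position can instead be triangulated, with no distance needed.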
SynthEyes handles another important case, where you have walls, fences,
or other linear features casting shadows, but you cannot say that a single point
casts a shadow at another single point. Instead, you may know that a point casts
a shadow somewhere on a line, or a line casts a shadow onto a point. This is
tantamount to knowing that the light falls somewhere in a particular 3-D plane.
With two such planes, you can identify the light's direction; with four you may be
able to locate it in 3-D.
To tell SynthEyes about a planar constraint, you must set up two different
rays, one with the common tracker and one point on the wall/fence/etc., and the
other ray containing the common tracker and the other point on the
wall/fence/etc.

Lights from Highlights


If you place a mesh into the scene that exactly matches that portion of the
scene's geometry (for example using the Perspective View's Pinning mode), and
if there is a specular highlight reflected from that geometry, you can determine
the direction to the light, and potentially its position as well.
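The way SynthEyes performs this internally isn't detailed here, but the principle is the standard mirror-reflection relationship: the light lies along the camera-to-highlight view direction reflected about the mesh's surface normal at the highlight. A sketch with hypothetical names:

    import numpy as np

    def light_direction_from_highlight(camera_pos, highlight_pos, surface_normal):
        # camera_pos:     camera position on this frame
        # highlight_pos:  point on the mesh where the highlight appears
        # surface_normal: unit normal of the mesh at that point
        view = camera_pos - highlight_pos
        view = view / np.linalg.norm(view)
        # mirror-reflect the view direction about the normal
        return 2.0 * np.dot(surface_normal, view) * surface_normal - view

    # The light lies somewhere along this direction from the highlight point;
    # a distance estimate (such as the 48 used below) pins down where.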
To illustrate, we'll overview an example shot, BigBall. After opening the
shot, it can be tracked automatically or with supervised trackers (symmetric
trackers will work well). If you auto-track, kill all the trackers in the interior of the
ball, and the reflections on the teapot as well.


Set up a coordinate system as shown above: the tracker at lower left is
the origin, the one at lower right is on the left/right axis at 11.75, and the tracker
at center left is on the floor plane. Solve the shot. [Note: no need to convert units;
the 11.75 could be cm, meters, etc.]
Create symmetric supervised trackers for the two primary light reflections
at center top of the ball and track them through the shot. Change them both to
zero-weighted trackers (ZWT) on the tracker panel; we don't want them to affect
the 3-D solution.
To calculate the reflection from the ball, SynthEyes requires matching
geometry. Create a sphere. Set its height coordinate to be 3 and its size to be
12.25. Slide it around in the top view until the mesh matches up with the image
of the ball. You can zoom in on the top view for finer positioning, and into the
camera view for more accurate comparison.
The lighting calculations can be more accurate when vertex normals are
available. In your own shots, you may want to import a known mesh, for
example, from a scan. In this case, be sure to supply a mesh that has vertex
normals, or at least, use the Create Smooth Normals command of the
Perspective window.

On the lighting control panel, add a new light, click the New Ray
button, then click one of the two highlight trackers twice in succession, setting
that tracker as both the Source and Target. The target button will change to read
(highlight). Raise the Distance spinner to 48, which is an estimated value (not
needed for Far-away lights). From the quad view, you'll see the light hanging in
the air above the ball, as in reality. Add a second light for the second highlight
tracker.
If you scrub through the shot, you'll see the lights moving slightly as the
camera moves. This reflects the small errors in tracking and mesh positioning.
You can get a single average position for the light as follows: select the light,
select the first ray, if it isn't already, by clicking >, then click the All button.
This will load up your CPU a bit as the light position is being repeatedly averaged
over all the frames. This can be helpful if you want to adjust the mesh or tracker,
but you can avoid further calculations by hitting the Lock button. If you later
change some things, you can hit the Lock button to cause a recalculation.

In favorable circumstances, you will not need an approximate light
height or distance. The calculation SynthEyes is making with All or Lock
selected is more than just an average; it is able to triangulate to find an exact
light position. As it turns out, often, as in this example shot, the geometry of the
lights, mesh, and camera does not make that accurately possible, because the
shift in highlight position as the camera moves is generally quite small. (You can
test this by turning the distance constraint down to zero and hitting Lock again.)
But it may be possible if the camera is moving extensively, for example, dollying
along the side of a car, when a good mesh for the car is available.

Curve Tracking and Analysis in 3-D
While the bulk of SynthEyes is concerned with determining the location of
points in 3-D, sometimes it can be essential to determine the shape of a curve in
3-D, even if that curve has no trackable points on it, and every point along the
curve appears the same as every other. For example, it might be the curve of a
highway overpass to which a car chase must be added, the shape of a window
opening on a car, or the shape of a sidewalk on a hilly road, which must be used
as a 3-D masking edge for an architectural insert.
In such situations, acquiring the 3-D shape can be a tremendous
advantage, and SynthEyes can now bring it to you using its novel curve tracking
and flex solving capability, as operated with the Flex/Curve Control Panel.

Note: The flex room is not part of the normal default set. To use the
flex panel, use the room bar's Add Room to create a Flex room that
uses the Flex panel.

Terminology
There's a bit of new terminology to define here, since there are both 2-D
and 3-D curves being considered.
Curve. This refers to a spline-like 2-D curve. It will always live on one
particular shot's images, and is animated with a different location on each frame.
Flex. A spline-like 3-D curve. A flex resides in 3-D, though it may be
attached to a moving object. One or more curves will be attached to the flex;
those curves will be analyzed to determine the 3-D shape of the flex.
Rough-in. Placing control-point keys periodically and approximately.
Tuning a curve. Adjusting a curve so it matches edges exactly.

Overview
Here's the overall process for using the curve and flex system to
determine a 3-D curve. The quick synopsis is that we will get the 2-D curves
positioned exactly on each frame throughout the shot, then run a 3-D solving
stage. Note that the ordering of the steps can be changed around a bit, and
additional wrinkles added, once you know what you are doing; this is the
simplest version and the easiest to explain.
1. Open the shot in SynthEyes
2. Obtain a 3-D camera solution, using automatic or supervised tracking
3. At the beginning of the shot, create a (2-D) curve corresponding to the
flex-to-be.


4. Rough-in the path of the curve, with control-point animation keys
throughout the shot. There is a tool that can help do this, using the
existing point trackers.
5. Tune the curve to precisely match the underlying edges (manual or
automatic).
6. Draw a new flex in an approximate location. Assign the curve to it.
7. Configure the handling of the ends of the flex.
8. Solve the flex
9. Export the flex or convert it to a series of trackers.

Shot Planning and Limitations


Determining the 3-D position of a curve is at the mercy of underlying
mathematics, just as is the 3-D camera analysis performed by the rest of
SynthEyes. Because every point along a curve/flex is equivalent, there is
necessarily less information in the curve data than in a collection of trackers.
As a result, first, flex analysis can only be performed after a successful
normal 3-D solve that has determined camera path and field of view. The curve
data can not help obtain that solve; it does not replace and is not equivalent to
the data of several trackers.
Additionally, the camera motion must be richer and more complex than for
a collection of trackers. Consider a flex consisting of a horizontal line, perhaps a
clothesline or the top of a fence. If the camera moves left to right so that its path
is parallel to the flex, no 3-D information (depth) can be produced for the flex. If
the camera moves vertically, then the depth information can be obtained. The
situation is reversed for a vertical line: a vertical camera motion will not produce
any depth information.
Generally, both the shape of the flex and camera path will be more
complex, and you will need to ensure that the camera path is sufficiently complex
to produce adequate depth information for all of the flex. If the flex is circular,
and the camera motion horizontal, then the top and bottom of the circle will not
have well-defined depth. The flex will prefer a flat configuration, which is often,
but not necessarily, correct.
Note that a simple diagonal motion will not solve this problem: it will not
explore the depth in the portion of the circle that is parallel to the motion path.
The camera path must itself curve to more completely identify the depth all the
way around the circle; hence the comment that the camera motion must itself
be more complex than for point tracking.
Similarly, tripod (nodal pan) shots are not suitable for use with the curve &
flex solving system. As with point tracking, tripod shots do not produce any depth
information.


Flexes and curves are not closed like the letter O; they are open like the
letter U or C. Also, they do not contain corners, like a V. Nor do they contain
tangency handles, since the curvature is controlled by SynthEyes.
Generally, the curve will be set up to track a fairly visible edge in the
image. Very marginal edges can still be used and solved to produce a flex, if you
are willing to do the tracking by hand.

Initial Curve Setup


Once you have identified the section of curve to be tracked and made into
a 3-D flex, you should open the Flex Control Panel, which contains both
flex and curve controls, and select the camera view.
Click the New Curve button, then, in the Camera View, click along the
section of curve to be tracked, creating control points as you go. Place additional
control points in areas of rapid curvature, and at extremal points of the curve.
Avoid areas where there is no trackable edge, if possible.
When you have finished with the last control point, right-click to exit the
curve creation mode.

Roughing in the Curve Keys


Next, we will approximately position the curve to track the underlying
edge. This can be done manually or automatically, if the situation permits.
Manual Roughing
For manual roughing, you move through the shot and periodically re-set
the position of the curve. By starting at the ends, and then successively
correcting the position at the most extremely-wrong positions within the shot,
usually this isn't too time-consuming (unless the shot is a jumpy hand-held one).
SynthEyes splines the control point positions over time.
To re-set the curve, you can drag the entire curve into an approximate
position, then adjust the control points as necessary. If you find you need
additional control points, you can shift-click within the curve to create them.
You should monitor the control point density so that you don't bunch many
of them in the same place. But you do not have to worry about control points
chattering in position along the curve. This will not affect SynthEyes or the
resulting flex.
Automatic Roughing
SynthEyes can automatically rough the curve into place with a special tool
as long as there is a collection of trackers around the curve (not just one end),
such that the trackers and curve are all roughly on the same plane.


When this is the case, shift-select all the trackers you want to use, click
the Rough button on the Flex control panel, then click the curve to be roughed
into place.
The Rough Curve Import panel will appear, a simple affair.

The first field asks how many trackers must be valid for the roughing
process to continue. In this case, 5 trackers were selected to start. As shown, it
will continue even if only one is valid. If the value is raised to 5, the process will
stop once any tracker becomes invalid. If only a few trackers are valid (especially
fewer than 4), less useful predictions of the curve shape can be made.
The Key every N frames setting controls how often the curve is keyed. At
the default setting of 1, a key will be placed at every frame, which is suitable for a
hand-held shot, but less convenient to subsequently refine. For a smooth shot, a
value of 10-20 might be more appropriate.
The Rough Curve Importer will start at the current frame, and begin
creating keys every so often as specified. It will stop if it reaches the end of the
shot, if there are too few trackers still valid, or if it passes by any existing key on
the curve. You can take advantage of this last point to fill in keys selectively as
needed, using different sets of trackers at different times, for example.
After you've used the Rough Curve Import tool, you should scrub through
the shot to look for any places where additional manual tweaking is required.
The curve may go offscreen or be obscured. If this happens, you can use
the curve Enable checkbox to disable the curve. Note that it is OK if the curve
goes partly offscreen, as long as there is enough information to locate it while it is
onscreen.

Curve Tuning
Once the curve has been roughed into place, you're ready to tune it to
place it more accurately along the edge. Of course, you can do this all by hand,
and in adverse conditions, that may be necessary. But it is much better to use
the automated Tune tool.


You can tune either a single frame, with the Tune button, or all of the
frames using, of course, the All button. When a curve is tuned on a frame, the
curve control points will latch onto the nearby edge.
For this reason, before you begin tuning, you may wish to create
additional control points along the curve, by shift-clicking it.
The All button will bring up a control panel that controls both the single-
and multi-frame tuning. If you want to adjust the parameters without tuning all the
frames, simply close the dialog instead of hitting its Go button.

You can adjust to edges of different widths, control the distance within
which the edge is searched, and alter the trade-off between a large distant edge,
and a smaller nearby one. Clearly, it is going to be easier to track edges with no
nearby edges of similar magnitude.
The control panel allows you to tune all frames (potentially just those
within the animation playback range), only the frames that already have keys (to
tune your roughed-in frames), or only the frames that do not have keys (to
preserve your previously-keyed frames).
You can also tell the tracking dialog to use the tuned locations as it
estimates (using splining) where the curve is in subsequent frames, by turning on
the Continuous Update checkbox. If you have a simple curve well-separated
from confounding factors, you can use this feature to track a curve through a shot
without roughing it in first. The drawback of doing this is that if the curve does get
off course, you can wind up with many bad keys that must be repaired or
replaced. [You can remove erroneous keys using Truncate.] With the Continuous
Update box off, the tuning process is more predictable, relying solely on your
roughed-in animation.


Flex Creation
With your curve(s) complete, you can now create a flex, which is the 3-D
splined curve that will be made to match the curve animation. The flex will be
created in 3-D in a position that approximately matches its actual position and
shape. It is usually most convenient to open the Quad view, so that you can see
the camera view at the same time you create the flex in one of the 3-D views
(such as the Top view).
Click the New Flex button, then begin clicking in the chosen 3-D view to
lay out a succession of control points. Right-click to end the mode. You can now
adjust the flex control points as needed to better match the curve. You should
keep the flex somewhat shorter than the curve.
To attach the curve to the flex, select the curve in the camera view, then,
on the flex control panel, change the parent-flex list box for the curve to be your
flex. (Note: if you create a flex, then a curve while the flex is still selected, the
curve is automatically connected to the flex.)

Flex Endpoints
The flex's endpoints must be nailed down so that the flex cannot just
shrivel up along the length of the curve, or pour off the end. The ends are
controlled by one of several different means:
1. the end of the flex can stay even with its initial position,
2. the end of the flex can stay even with a specific tracker, or
3. the end of the flex can exactly match the position of a tracker.
The first method is the default. The last method is possible only if there is
a tracker at the desired location; this arises most often when several lines
intersect. You can track the intersection, then force all of the flexes to meet at the
same 3-D location.
To set the starting or ending tracker location for a flex, click the Start Pt or
End Pt button, then click on the desired tracker. Note that the current 3-D
location of the tracker will be saved, so if you re-track or re-solve, you will need to
reset the endpoint.
The flex will end even with the specified point, meaning that the point
is perpendicular to the end of the flex. To match the position exactly, turn on the
Exact button.

Flex Solving
Now that you've got the curve and flex set up, you are ready to solve. This
is very easy: click the Solve button (or Solve All if you have several flexes
ready to be solved).
After you solve a flex, the control points will no longer be visible; they are
replaced by a more densely sampled sequence of non-editable points. If you
want to get back to the original control points to adjust the initial configuration,
you can click Clear.

Flex Exports
Once you have solved the flex, you can export it. At present, there are two
principal export paths. The flexes are not currently exported as part of regular
tracker exports.
First, you can convert the flex into a sequence of trackers with the Convert
Flex to Trackers script on the Script menu. The trackers can be exported directly,
or, more usefully, you can use them in the Perspective window to create a mesh
containing those trackers. For example, on a building project where the flex is the
edge of the road, you can create a ground mesh to be landscaped, and still have
it connect smoothly with the road, even if the road is not planar.
Second, you can export the coordinates of the points along the flex into a
text file using the Flex Vertex Coordinates exporter. Using that file is up to you,
though it should be possible to use it to create paths in most packages.

Merging Files and Tracks
When you are working on scenarios with multiple shots or objects, you
may wish to combine different SynthEyes .sni files together. For example, you
may track a wide reference shot, and want to use those trackers as indirect links
for several other shots. You can save the tracked reference shot, then use the
File/Merge option to combine it with each of several other files.
Alternatively, you can transfer 2-D or 3-D data from one file to another, in
the process making a variety of adjustments to it as discussed in the second
subsection. You can track a file in several different auto-track sections, and
recombine them using the scripts.

File/Merge
After you start File/Merge and select a file to merge, you will be asked
whether or not to rename the trackers as necessary, to make them unique. If the
current scene has Camera01 with trackers Tracker01 to Tracker05, and the
scene being merged also has Camera01 with trackers Tracker01 to Tracker05,
then answering yes will result in Camera01 with Tracker01 to Tracker05 and
Camera02 with Tracker06 to Tracker10. If you answer no, Camera01 will have
Tracker01 to Tracker05 and Camera02 will also have (different) Tracker01 to
Tracker05, which is more confusing to people than machines.
As that example shows indirectly, cameras, objects, meshes, and lights
are always renamed to be unique. Renaming is always done by appending a
number: if the incoming and current scenes both have a TrashCan, the incoming
one will be renamed to TrashCan1.
If you are combining a shot with a previously-tracked reference, you will
probably want to keep the existing tracker names, to make it easiest to find
matching ones. Otherwise, renaming them with yes is probably the least
confusing unless you have a particular knowledge of the TrackerNN assignments
(in which case, giving them actual names such as Scuff1 is probably best).
You might occasionally track one portion of a shot in one scene file, and
track a different portion of the same shot in a separate file. You can combine the
scene files onto a single camera as follows:
1. Open the first shot
2. File/Merge the second shot.
3. Answer yes to make tracker names unique (important!)
4. Select Camera02 from the Shot menu.
5. Hit control-A to select all its trackers.

6. Go to the Coordinate System Panel .


7. Change the trackers' host object from Camera02 to *Camera01.
(The * before the camera name indicates that you are moving the
trackers to a different, but compatible, shot.)
8. Delete any moving objects, lights, or meshes attached to
Camera02.
9. Select Remove Object on the Shot menu to delete Camera02.
All the trackers will now be on the single Camera01. Notice how Remove
Object can be used to remove a moving object or a camera and its shot. In each
case, however, any other moving objects, trackers, lights, meshes, etc, must be
removed first or the Remove Object will be ignored.

Tracker Data Transfer


You can transfer tracking data from file to file using the SynthEyes scripts
File/Export/Export 2-D Tracker Paths and File/Import/Import 2-D
Tracker Paths. These scripts can be used to interchange with other programs
that support similar tracking data formats. The scripts can be used to make a
number of remedial transforms as well, such as repairing track data if the source
footage is replaced with a new version that is cropped differently.
The simple data format, a tracker name, frame number, horizontal and
vertical positions, and an optional status code, also permits external
manipulations by UNIX-style scripts and even spreadsheets.
Exporting
Initiate the Export 2-D Tracker Paths script, select a file, and a script-
generated dialog box will appear:


As can be seen, it affords quite a bit of control.


The first three fields control the range of frames to be exported, in this
case, frames 10 to 15. The offset allows the frame number in the file to be
somewhat different; for example, -10 would make the first exported frame
appear to be frame zero, as if frame 10 was the start of the shot.
The next four fields, two scales and two offsets, manipulate the horizontal
(U) and vertical (V) coordinates. SynthEyes defines these to range from -1 to +1,
running from left to right and from top to bottom. Each coordinate is multiplied by
its scale and then the offset added. The normal defaults are scale=1 and offset=0. The
values of 0.5 and 0.5 shown rework the ranges to go from 0 to 1, as may be used
by other programs. A scale of -0.5 would change the vertical coordinate to run
from bottom to top, for example.
The scales and offsets can be used for a variety of fixes, including
changes in the source imagery. You'll have to cook up the scale and offset on
your own, though. Note that if you are writing a tracker file from SynthEyes and will
then read it back in with a transform, it is easiest to write it with scale=1 and
offset=0, then make changes as you read in, since if you need to try again you
can retry the import, without having to re-export.
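A quick sketch of the arithmetic, following the scale-then-offset rule just described (this is only the rule itself, not the script):

    def remap(coord, scale, offset):
        # each coordinate is multiplied by its scale, then the offset is added
        return coord * scale + offset

    # SynthEyes' -1..+1 range remapped to 0..1 on export: scale=0.5, offset=0.5
    remap(-1.0, 0.5, 0.5)    # -> 0.0
    remap(+1.0, 0.5, 0.5)    # -> 1.0

    # undoing that on import, as in the example import panel: scale=2, offset=-1
    remap(0.0, 2.0, -1.0)    # -> -1.0
    remap(1.0, 2.0, -1.0)    # -> 1.0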
Continuing with the controls, Even when missing causes a line to be
output even if the tracker was not found in that frame. This permits a more
accurate import, though other programs are less likely to understand the file.
Similarly, the Include Outcome Codes checkbox controls whether or not a small
numeric code appears on each line that indicates what was found; it permits a
more accurate import, though is less likely to be understood elsewhere.
The 2-D tracks box controls whether or not the raw 2-D tracking data is
output; this is not necessarily mandatory, as you'll see.
The 3-D tracks box controls whether or not the 3-D path of each tracker is
included; this will be the 2-D path of the solved 3-D position, and is quite
smooth. In the example, 3-D paths are exported and 2-D paths are not, which is
the reverse of the default. When the 3-D paths are exported, an extra Suffix for
3-D can be added to the tracker names; usually this is _3D, so that if both are
output, you can tell which is which.
Finally, the Extra Points box controls whether or not the 2-D paths of any
extra helper points in the scene are output.
Importing
The File/Import/Import 2-D Tracker Paths import can be used to read the
output of the 2-D exporter, or from other programs as well. The import script
offers a similar set of controls to the exporter:


The import runs roughly in reverse of the export. The frame offset is
applied to the frame numbers in the file, and only those within the selected first
and last frames are stored.
The scale and offset can be adjusted; by default they are 1 and 0
respectively. The values of 2 and -1 shown undo the effect of the 0.5/0.5 in the
example export panel.
If you are importing several different tracker data files into a single moving
object or camera, you may have several different trackers all named Tracker1,
for example, and after combining the files, this would be undesirable. Instead, by
turning on Force unique names, each would be assigned a new unique name.
Of course, if you have done supervised tracking in some different files to
combine, you might well leave it off, to combine the paths together.
If the input data file contains data only for frames where a tracker has
been found, the tracker will still be enabled past the last valid frame. By turning
on Truncate enables after last, the enable will be turned off after the last valid
frame.
After each tracker is read, it is locked up. You can unlock and modify it as
necessary. The tracking data file contains only the basic path data, so you will
probably want to adjust the tracker size, search size, etc.
If you will be writing your own tracker data file for this script to import, note
that the lines must be sorted so that the lines for each specific tracker are
contiguous, and sorted in order of ascending frame number. This convention
makes everyone's scripts easier. Also, note that the tracker names in the file
never contain spaces; they will have been changed to underscores.
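As an illustration only (check an actual exported file for the exact layout, since the delimiters and optional status column aren't spelled out in this section), a hand-written file following those rules, with the columns in the order listed earlier (name, frame, U, V), might look like:

    Tracker01 0 -0.4213 0.1187
    Tracker01 1 -0.4198 0.1162
    Tracker01 2 -0.4185 0.1140
    Left_Inner_Eyebrow 0 0.0521 0.2330
    Left_Inner_Eyebrow 1 0.0540 0.2317

All the lines for each tracker are contiguous and in ascending frame order, and the names use underscores instead of spaces.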


Transferring 3-D Paths


The path of a camera or object can be exported into a plain file containing
a frame number, 3 positions, 3 rotations, and an optional zoom channel (field of
view or focal length).
Like the 2-D exporter, the File/Export/Plain Camera Path exporter
provides a variety of options:
First Frame. First frame to export
Last Frame. Last frame to export.
Frame Offset. Add this value to the frame number before storing it in the file.
World Scaling. Multiplies the X,Y, Z coordinates, making the path bigger or
smaller.
Axis Mode. Radio-buttons for Z Up; Y Up, Right; Y Up, Left. Adjust to select the
desired output alignment, overriding the current SynthEyes scene setting.
Rotation Order. Radio buttons: XYZ or ZXY. Controls the interpretation of the 3
rotation angles in the file.
Zoom Channel. Radio buttons: None, Field of View, Vertical Field of View, Focal
Length. Controls the 7th data channel, namely what kind of field of view
data is output, if any.
Look the other way. SynthEyes' camera looks along the -Z axis; some systems
have the camera look along +Z. Select this checkbox for those other
systems.
The 3-D path importer, File/Import/Camera/Object Path, has the same
set of options. Though this seems redundant, it lets the importer read flexibly
from other packages. If you are writing from SynthEyes and then reading the
same data back in, you can leave the settings at their defaults on both export and
import (unless you want to time-shift too, for example). If you are changing
something, usually it is best to do it on the import, rather than the export.
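For illustration only (verify against an actual export, as the exact column layout isn't given in this section), each line of such a file carries the frame number, three positions, three rotations, and the optional zoom value, so a couple of lines might look like:

    0 12.50 -3.20 1.75 0.0 14.2 -1.3 42.6
    1 12.48 -3.18 1.74 0.1 14.0 -1.3 42.6

with the last column present only when a zoom channel (field of view, in this hypothetical example) is selected.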

Writing 3-D Tracker Positions


You can output the trackers' 3-D positions using the File/Export/Plain
Trackers script with these options:
Tracker Names. Radio buttons: At beginning, At end of line, None. Controls
where the tracker names are placed on each output line. The end of line
option allows tracker names that contain spaces. Spaces are changed to
underscores if the names are at the beginning of the line.
Include Extras. If enabled, any helper points are also included in the file.
World Scaling. Multiplies the coordinates to increase or decrease overall
scaling.
Axis Mode. Temporarily changes the coordinate system setting as selected.

Reading 3-D Tracker Positions


On the input side, there is a File/Import/Tracker Locations option and
a File/Import/Extra Points option. Neither has any controls; they automatically
detect whether the name is at the beginning or end of the line. Putting the
names at the end of each line is most flexible, because then there is no problem
with spaces embedded in the names. A sample file might consist of lines such
as:
0 0 0 Origin
10 0 0 OnXAxis
13 -5 0 OnGroundPlane
22 10 0 AnotherGroundPlane
3 4 12 LightPole
When importing trackers, the coordinates are automatically set up as a
seed position on the tracker. You may want to change it to a Lock constraint as
well. If a tracker of the given name does not exist, a new tracker will be created.
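If you write your own script to read this kind of file, it can auto-detect the name position the same way SynthEyes does; a minimal sketch:

    def read_tracker_line(line):
        # returns (name, (x, y, z)); works whether the name is first or last
        parts = line.split()
        try:
            coords = tuple(float(p) for p in parts[:3])    # name at the end
            name = "_".join(parts[3:])
        except ValueError:
            name = "_".join(parts[:-3])                    # name at the beginning
            coords = tuple(float(p) for p in parts[-3:])
        return name, coords

    read_tracker_line("3 4 12 LightPole")   # -> ("LightPole", (3.0, 4.0, 12.0))
    read_tracker_line("Origin 0 0 0")       # -> ("Origin", (0.0, 0.0, 0.0))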

Batch File Processing
The SynthEyes batch file processor lets you queue up a series of shots for
match-moving or file-sequence rendering, over lunch or over night. Please follow
these steps:
1. In SynthEyes, do a File/New and select the first/next shot.
2. Adjust shot settings in SynthEyes as needed, for example, set it to
zoom or tripod mode, and do an initial export; the same kind will
be used at the completion of batch processing. You can skip the
export if you want to use the export type and export folder from the
preferences. If you have multiple exports configured, they will all be
run, producing output in the same folder as any earlier export, or in
the default export folder if none.
3. Hit File/Submit for Batch to submit a file for tracking and solving.
4. To render image sequences for shots that have been tracked,
converged, undistorted, or otherwise manipulated in the image
preprocessor, configure the Save Sequence dialog, close it, then hit
File/Submit for Rendering.
5. Repeat steps 1-3 or 4 for each shot.
6. Start the SynthEyes batch file processor with File/Batch Process
or from the Windows Start menu, All Programs, Andersson
Technologies LLC, SynthEyes Batcher.
7. Wait for one or more files to be completed.
8. Open the completed files from the Batch output folder.
9. Complete shot tracking as needed, such as assigning a coordinate
system, tracker cleanup, etc. followed by a Refine pass.
While the batcher runs, you can continue to run SynthEyes interactively
(only on the same machine), which is especially useful for setting up additional
shots, or finishing previously-completed ones.
Note: it is more efficient to use the batcher to process one shot while you
work on another one, instead of starting two windows of SynthEyes, because the
batcher does not attempt to load the entire shot into your RAM. Because the
batcher does not use playback RAM, most RAM is available for your interactive
SynthEyes window.

Details
SynthEyes uses several folders for batch file processing: an input folder, an
export folder, and an output folder. Submit for Batch places scene files into the
input folder; exports are written to the export folder, completed files are written to
the output folder, and the input file is removed. You can set the location of the
input, export, and output folders from the Preferences panel.


Thanks for reading this far!

SynthEyes Reference Material

System Requirements
Installation and Registration
Customer Care Features and Automatic Update
Menu Reference
Control Panel Reference
Additional Dialogs Reference
Viewport Features Reference
Perspective Window Reference
Overview of Standard Tool Scripts
Preferences and Scene Settings Reference
Keyboard Reference
Viewport Layout Manager
Support

System Requirements
Windows
Intel or AMD x86 processor with SSE2, such as i7, i5, i3, Core, Core 2
Duo, Pentium 4, Athlon 64, or Opteron. Note: SSE2 support is a
requirement for SynthEyes.
64- or 32-bit version of Windows 10, 8, 7, maybe Vista, and Windows XP.
2 GB RAM minimum. 4-12 GB suggested for pro, HD, and film users. 4+
GB are strongly suggested for 8-core and 64-bit machines.
3-button mouse with middle scroll wheel/button. See the viewport
reference section for help using a trackball.
1024x768 or larger display, 32 bit color, with OpenGL support. Large
multi-head configurations require graphics cards with sufficient memory.
DirectX 9.0c or later recommended, required for DV and usually MPEG.
Quicktime recommended, required to read/write .mov files.
Approximately 50 MB disk space to install. Tutorial and other learning
materials are separate downloads.
A supported 3-D animation or compositing package to export paths and
points to. Can be on a different machine, even a different operating
system, depending on the target package.
A user familiar with general 3-D animation techniques such as key-
framing.

Mac OS X

Mac OS 10.11 (El Capitan), 10.10 (Yosemite), 10.9 (Mavericks), or 10.8


(Mountain Lion).
Intel Mac (32 or 64-bit).
2 GB RAM minimum. 4-12 GB suggested for pro, HD, and film users. 4+
GB are strongly suggested for 8-core and 64-bit machines.
3 button mouse with middle scroll wheel/button. See the viewport
reference section for help using a trackball or Microsoft Intellipoint mouse
driver.
1024x768 or larger display, 32 bit color, with OpenGL support. Large
multi-head configurations require graphics cards with sufficient memory.
Approximately 50 MB disk space to install. Tutorial and other learning
materials are separate downloads.
A supported 3-D animation or compositing package to export paths and
points to. Can be on a different machine, even a different operating
system, depending on the target package.
A user familiar with general 3-D animation techniques such as key-
framing.


Linux
Redhat/CentOS 6.4+ or Ubuntu 12.04 and 14.04 LTS. Other versions are
likely to work but are not officially supported. See the website's Linux page
for the latest information.
64-bit Intel x64 architecture processor
2 GB RAM minimum. 2-4 GB suggested for pro, HD, and film users. 4+
GB are strongly suggested for 8-core and 64-bit machines.
3 button mouse with scroll wheel. See the viewport reference section for
help using a trackball or Microsoft Intellipoint mouse driver.
1024x768 or larger display, 32 bit color, with OpenGL support. Large
multi-head configurations require graphics cards with sufficient memory.
Approximately 50 MB disk space to install. Tutorial and other learning
materials are separate downloads.
A supported 3-D animation or compositing package to export paths and
points to. Can be on a different machine, even a different operating
system, depending on the target package.
A user familiar with general 3-D animation techniques such as key-
framing.

Interchange
The different platforms can read each other's files in general, though there
may be differences due to character encoding (iso-latin-1 vs UTF-8). Note that
Windows, Linux, and OS X licenses must be purchased separately; normal
licenses are not cross-platform.

Installation and Registration
Following sections describe installation on Windows, OS X, and Linux,
separately. After installation, follow the directions in the Registration page to
activate the product.

Windows Installation
Please uninstall SynthEyes Demo before installing the actual product.
Run the installer such as Syn####Pro64Setup.exe (with variations such
as Syn####Intro32Setup.exe etc), where #### is a version number, such as
1604.
You can install to the default location, or any convenient location. The
installer will create shortcuts on the desktop for the SynthEyes program and
HTML documentation.
If you have a trackball or tablet, you may wish to turn on the No middle-
mouse preference setting to make alternate mouse modes available. See the
viewport reference section. You should turn on Enhance Tablet Response if
you have trouble stopping playback or tracking (Wacom appears to have fixed
the underlying issue in recent drivers, so getting a new tablet driver may be
another option.)
Proceed to the Registration section below.

Windows Fine Print


If you receive this error message:
Error 1327.Invalid Drive E:\ (or other drive)
then Windows Installer wants to check something on that drive. This can occur if
you have a Firewire, network, or flash drive with a program installed to it, or an
important folder such as My Documents placed on it, if the drive is not turned on
or connected. The easiest cure is to turn the device on or reconnect it.
This behavior is part of Windows, see
http://support.installshield.com/kb/view.asp?articleid=q107921
http://support.microsoft.com/default.aspx?scid=kb;en-us;282183

Windows XP - DirectX
Windows XP: SynthEyes requires Microsoft's DirectX 8 or later to be able
to read DV and MPEG shots. (DirectX is already part of Windows Vista and
later.) DirectX is a free download from Microsoft and is already a component of
many games and applications. You may be able to verify that you already have it
by searching for the DirectX diagnostic tool dxdiag.exe, located in
\windows\system or \winnt\system32. If you run it, the system tab shows the
DirectX version number at the bottom of the system information.


To download and install DirectX, go to http://www.microsoft.com and
search for DirectX.

Windows - QuickTime
If you have shots contained in QuickTime (Apple) movies (i.e., .mov files),
you must have Apple's QuickTime installed on your computer. If you use a
capture card that produces QuickTime movies, you will already have QuickTime
installed. SynthEyes can also produce preview movies in QuickTime format.
You can download QuickTime from
http://www.apple.com/quicktime/download/
Quicktime Pro is not required for SynthEyes to read or write files.
Note that at present Apple does not offer a 64-bit version of QuickTime, so
reading QuickTime files is only available via a 32-bit server for 64-bit SynthEyes.

Mac OS X Installation
1. Download the Syn####Pro.dmg or Syn####Intro.dmg file (where #### is
the version number, such as 1504) to a convenient location on the Mac.
2. Double-click it to open it and expose the SynthEyes installation package.
3. Double-click the installer to run it.
4. Proceed through a normal install; you will need root permissions.
5. Eject the .dmg file from the finder; it will be deleted.
6. Start SynthEyes from your Applications folder. You can create a shortcut
on your desktop if you wish.
7. Proceed to the Registration directions below.
Note that pictures throughout this document are based on the Windows
version; the Mac version will be very similar. In places where an ALT-click is
called for on Windows, a Command-click should be used on the Mac, though
these should be indicated in this manual.
If you have a trackball or Microsoft's Intellipoint mouse driver, you may
wish to turn on the No middle-mouse preference setting to make alternate
mouse modes available. See the viewport reference section. You should turn on
Enhance Tablet Response if you have trouble stopping playback or tracking
(Wacom appears to have fixed the underlying issue in recent drivers, so getting a
new tablet driver may be another option.)

Linux Installation
You can double-check the current installation instructions and notes on
the website's Linux page.


1. Unpack the .gz file into any convenient folder. The details may vary with your
browser and system; for example, you might double-click the .gz file, then
drag the whole SynthEyes folder inside it onto your desktop. While it's
possible to run from there for quick testing, the following installation steps are
necessary for SynthEyes to appear on the menus, have file icons, be
available to other users, etc.
2. Open the Terminal window.
3. Use "cd" to go to the unpacked folder.
4. If you are using Kubuntu, edit SynthEyes.sh to uncomment the
UBUNTU_MENU_PROXY line. You can also do this if you don't want to use
Unity's global menu for SynthEyes.
5. Type "sudo ./install.sh" and hit enter. You'll need to type in your password
before the script will run. If you aren't on your system's sudoer list, you'll need
someone is who is, or you'll need the superuser password. If your system
does not use sudo, type "su" and hit enter to become superuser instead.
6. On Redhat/CentOS/Kubuntu/Mint systems, you'll find SynthEyes in the
Graphics submenu of the main Applications menu.
7. On Ubuntu Unity systems, click on "Dash Home" then type SynthEyes into
the search field; click the SynthEyes Pro icon. Once you've started
SynthEyes, you can right-click its icon in the Dash bar and select "Lock to
Launcher" if you like.
8. Read and accept the license agreement that will pop up.
9. Non-demo only: Enter your SynthEyes license information in the standard
way.
10. Start tracking!
11. Existing SynthEyes ".sni" files may not open (by double-clicking them) in the
file browser until you have restarted your window manager.
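For reference, the terminal portion of the installation (steps 2-5) typically looks like the following; the unpack location shown is only an example, so use wherever you actually put the SynthEyes folder:

    cd ~/Desktop/SynthEyes
    sudo ./install.sh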

Registration and Authorization Overview


After you order SynthEyes, you must register to receive your
permanent program authorization data. For your convenience, some
temporary registration data is automatically supplied as part of your order
confirmation, so you can put SynthEyes immediately to work.
For an online tutorial on registration and authorization, see
https://www.ssontech.com/content/regitut.html
The overall process (described in more detail later) is this:
1. Order SynthEyes
2. Receive order confirmation with download information and
temporary authorization.
3. Download and install SynthEyes
4. Start SynthEyes, fill out registration form, and send data to
Andersson Technologies LLC.


5. Restart SynthEyes, enter the temporary authorization data.


6. Wait for the permanent authorization data to arrive.
7. Start SynthEyes and enter the permanent authorization data.

Registration
When you first start SynthEyes, a form will appear for you to enter
registration information. (Floating licenses: see the SyFlo documentation for
alternate directions.) If you've entered the temporary authorization data already,
you can access the registration dialog from the Help/Register menu item.

Important: if you have renewed your existing license, you still need to
register. Click "Register" on the Help menu to access this panel.

Proceed as follows:
1. Use copy and paste to transfer the entire serial number (looks like
S6-1601-12345-6789X, starting with SN- for Windows Intro, S6- for
Windows Pro, IM- for OS X Intro, M6- for OS X Pro, L6- for Linux
Pro, or C6- for cross-platform Pro) from the email confirmation of
your purchase to the form.
2. Fill out the remainder of the form. Sorry if this seems redundant to
the original order form, but it is necessary. This data should
correspond to the user of the software. If the user has no clear
relationship to the purchaser (a freelancer, say), please have the
purchaser email us to let us know, so we don't have to check to see
who to issue the license to.


3. Hit Register, and SynthEyes will place a block of data onto the
clipboard. Be sure to hit Register, not the other button; simple
though it may seem, this is a frequent cause of confusion. If you
receive a message about filling out all the fields, be sure to check
them and supply full, accurate answers, paying special attention to
those marked with an asterisk (*).
4. Create an email message entitled "SynthEyes Registration"
addressed to register@ssontech.com. Please use your normal
email address and program, and send plain-text emails. Some web
mail programs can re-code mails in ways that are difficult to read.
Click inside the new message window's text area, then hit control-
V (command-V on Mac) to paste the information from SynthEyes
into the message. If the email is blank, or contains temporary
authorization information, be 300% sure you clicked Register after
filling out the registration form.
5. If you are re-registering (after getting a new workstation, say), or
are not the person who originally purchased the software, please add a
remark to that effect to the mail.
6. Send the e-mail. Please use an email address for the organization
owning the license, not a personal Gmail, Hotmail, etc. address. We
cannot send confidential company authorization data to your
personal email; use of personal emails frequently causes problems
for license owners.
7. You will receive an e-mail reply, typically the next business day,
containing the authorization data. Be sure to save the mail for
future reference.

Authorization
You'll use this authorization procedure twice: once to install the temporary
license that you receive initially, so you can get tracking right away; and a second
time to install the permanent license.
1. View the email containing the authorization data.
2. In your e-mail program, highlight the authorization information,
everything from the left parenthesis "(" to the right parenthesis ")",
including both parentheses, and select Edit/Copy. Note: the serial
number (SN-, IM-, etc.) is not part of the authorization data but is
included above it only for reference, especially for multiple licenses.
3. Start SynthEyes. If the registration dialog box appears, click the
Install License from Clipboard button. If your temporary
registration is still active, the registration dialog will not appear, so
click Help/Authorize instead.


4. Windows 8/Windows 7/Vista: the User Account Control popup will
appear; click Yes to allow SynthEyes to work.
5. A Customer Care Login Information dialog will appear. If you have
a temporary license, you should hit Cancel on this panel. It will be
pre-filled with your serial and the support login and password that
came in the email with the authorization data. (The user ID looks
like jan02, and the password looks like jm323kx; these two will not
work, use the ones from your mail.) We just point these out so you
know what they are, and where they go; they are also used for
logging into the customer-only area of the website. You can change
the update check frequency if desired, then click OK.
6. SynthEyes will acknowledge the data, then exit. When you restart
it, you should see your permanent information listed on the splash
screen, and you're ready to go.

Windows Uninstallation
As with other Windows programs, use the Add/Remove Programs
tool in the Windows Control Panel to uninstall SynthEyes.

Mac Uninstallation
Delete the folders /Applications/SynthEyes and (if desired) your
preferences etc. in /Users/YourName/Library/Application Support/SynthEyes.

Linux Uninstallation
Delete the directories /opt/SynthEyes and optionally ~/.SynthEyes

Customer Care Features and Automatic Update
SynthEyes features an extensive customer care facility, aimed at helping
you get the information you need, and helping you stay current with the latest
SynthEyes builds, as easily as possible.
These facilities are accessed through 3 buttons on the main toolbar, and a
number of entries on the Help menu.
These features require internet access during use, but internet access is
not required for normal SynthEyes operation. You can use them with a dialup
line, and you can tell SynthEyes to use it only when you ask.
We strongly recommend using these facilities, based on past customer
experience! Note: some features operate slightly differently or are not available
from the demonstration version of the software. Also Vista throws some
wrenches into the works.
For more information on accessing SynthEyes updates, see the tutorial on
configuring auto-update.

Customer Care Setup


The auto-update, messaging, and suggestions features all require access
information to the customer-only web site to operate. The necessary login and
password arrive with your SynthEyes authorization data (with "SAVE THIS MAIL"
in the subject line), and you are prompted to enter them immediately after
authorizing, or by selecting the Help/Set Update Info menu item. On Windows,
you must have started SynthEyes with the right-click "Run as Administrator"
process, described in Authorization above, in order for the auto-update
information to be saved.
Customer Care uses the same login information as for accessing the
support site. If you do not have the customer care login information yet, hit
Cancel instead of entering it; this will not affect program operation at all. The
customer care facility also uses the full serial number; it will be shown with a ...
after you have entered it the first time.
Note: leave the Extreme Scripting field blank.
If the D/L button is red when you start SynthEyes or check for updates,
internet operations are failing. You should check your serial number and login
information, if it is the first time, or check that you are really connected to the
internet.
Also, if you have an Internet firewall program on your computer, you must
permit SynthEyes to connect to the internet for the customer-care features to
operate. You'll have to check with your firewall software's manual or support for
details.


Checking for Updates


The update info dialog allows you to control how often SynthEyes checks
for updates from the ssontech.com web site. You can select never, daily, or on
startup, with daily the recommended selection.
SynthEyes automatically checks for updates when it starts up, each time
in on startup mode, but only the first time each day in daily mode. The check
is performed in the background, so that it does not slow you down. (Note: on
Vista, the daily setting will check each startup.)
You can easily check for updates manually, especially if you are in never
mode. Click the D/L button on the main toolbar, or Help/Check for updates.

Automatic Downloads
SynthEyes checks to determine the latest available build on the web site.
If the latest build is more current than its own build, SynthEyes begins a
download of the new version. You'll see the new build number displayed in the
button. The download takes place in the background as you use SynthEyes. The
D/L button will be Yellow during the download.
Once the download is complete, the D/L button will turn green. When you
have reached a convenient time to install the new version, click the D/L button or
select the Help/Install Updated menu item. After making sure your work is saved,
and that you are ready to proceed, SynthEyes closes and opens the folder
containing the new installer.
Depending on your system and security settings, the installer may or may
not start automatically. If it does not start automatically, click it to begin
installation.
If there isn't usable update information configured via Help/Set Update
Info, or there is some other connection problem (firewall, no Wi-Fi, etc.), the D/L
button will turn red.
The D/L button will turn blue with a build number displayed if a new build
is available, but you need to renew support to be able to download and use it.
The same process occurs when you check for updates manually by
clicking the D/L button, with a few more explanatory messages.

Messages from Home


The Msg button and Help/Read Messages menu item are your portal to
special information from Andersson Technologies LLC to bring you the latest
word of updated scripts, tutorials, operating techniques, etc.
When the Msg button turns green, new messages are available; click it
and they will appear in a web browser window. You can click it again later too, if
you need to re-read something.


If valid in-support access information is present (from Help/Set Update
Info), then the customer-only version of the page will be shown; if not, the more
public demo version will be shown.

Suggestions
We maintain a feature-suggestion system to help bring you the most
useful and best-performing software possible. Click the Sug button on the
toolbar, or Help/Suggest a Feature menu item.
This miniature forum not only lets you submit requests, but comment and
vote on existing feature suggestions. (This is not the place for technical support
questions, however; please don't clog it up with them.)
Demo version customers: this area is not available. Send email to support
instead. Past experience has shown that most suggestions from demo customers
are already in SynthEyes; please be sure to check the manual first!

Web Links
The Help menu contains a number of items that bring up web pages from
the https://www.ssontech.com web site for your convenience, including the main
home page, the tutorials page, and the forum.

E-Mail Links
The Help/Tech Support Mail item brings up an email composition window
preaddressed to technical support. Please investigate matters thoroughly before
resorting to this, consulting the manual, tutorials, support site, and forum.
If you do have to send mail, please include the following:
- Your name and organization
- An accurate subject line summarizing the issue
- A detailed description of your question or problem, including information necessary to duplicate it, preferably from File/New
- Screen captures, if possible, showing all of SynthEyes.
- A .sni scene file, after Clear All Blips, and ZIPped up (not RAR).
The better you describe what is happening, the quicker your issue can be
resolved.
Help/Report a Credit brings up a preaddressed email composition
window so that you can let us know about projects that you have tracked using
SynthEyes, so we can add them to our As Seen On web page. If you were
wondering why your great new project isn't listed there, this is the cure.

Menu Reference
File Menu
New. Create a new scene, opening the Shot/Add Shot dialog to supply initial
imagery. (Saves any existing scene first.)
Open. Open an existing .sni file.
Merge. Merges a previously-written SynthEyes .sni scene file with the currently-
open one, including shots, objects, trackers, meshes, etc. Most elements
are automatically assigned unique names to avoid conflicts, but a dialog
box lets you select whether or not trackers are assigned unique names.
Save. Saves the scene to the current file name (asks if there is none).
Save Next Version. Increments the current file name, saves the scene to the
new name.
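For example, a scene saved as shot_v003.sni would typically be saved next as
shot_v004.sni; the exact result depends on the numbering already present in the
current file name (these file names are purely illustrative).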
Save a Copy. Asks for a file name, saves the scene there, then resumes using
the original file name.
Save As. Asks for a file name, saves the scene there, then continues using that
new filename as the filename for the scene.
Import/Shot. Clears the scene and opens the Shot/Add Shot dialog, if there are
no existing shots, or adds an additional shot if one or more shots are
already present.
Import/Mesh. Imports a DXF, Alias/Wavefront OBJ, SynthEyes SBM, or Lidar
XYZ mesh file.
Import/Reload mesh. Reloads the selected mesh, if any. If the original file is no
longer accessible, allows a new location to be selected.
Import/Replace mesh. With a single mesh selected, opens the file picker to
select a new mesh file (of any type) to be read, replacing the existing
vertices, faces, normals, and texture coordinates. Other parameters of the
mesh, such as its position and scaling, are unaffected. Use this to replace
a heavy mesh with a decimated proxy, for example. Note that this is
unwise for GeoH tracking where the vertices have been lassoed, painted,
or airbrushed, as the changing mesh will require that the GeoH weights be
cleared.
Import/Tracker Locations. Imports a text file composed of lines: x_value
y_value z_value Tracker_name. For each line, if there is an existing
tracker with that name, its seed position is set to the coordinates given. If
there is no tracker with that name, a new one is created with the specified
seed coordinates. Use to import a set of seed locations from a pre-existing
object model or set measurements, for example. New trackers use
settings from the tracker panel, if it is open. See the section on merging
files.
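For illustration, a tracker-locations file in this format might contain lines such as
the following (the coordinates and tracker names here are made up):

    0.0 0.0 0.0 Tracker1
    1.2 0.0 0.0 Tracker2
    1.2 0.8 0.5 Tracker3

Each line sets, or creates, the seed position of the named tracker.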
Import/Extra Points. Imports a text file consisting of lines with x, y, and z values,
each line optionally preceded or followed by an optional point name. A
helper point is created for each line. The points might have been
determined from on-set surveying, for example; this option allows them to
be viewed for comparison. See the section on merging files.
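For illustration, an extra-points file might contain lines like these (values and
names are made up, shown space-separated with the optional name last,
though the name may equally precede the values or be omitted):

    0.0 0.0 0.0 Origin
    3.5 0.0 1.2 DoorCorner
    2.1 4.0 0.0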


Find New Scripts. Causes SynthEyes to locate any new scripts that have been
placed in the script folder since SynthEyes started, making them available
to be run.
Export Again. Redoes the last export, saving time when you are exporting
repeatedly to your CG application.
Export Multiple. Exports the current scene to all the exporters currently
configured for multiple export.
Configure Multi-export. Brings up the Multiple Export Configuration dialog to
configure multiple exports.
File Info. Shows the full file name of the current file, its creation and last-written
times, full file names for all loaded shots, file names for all imported
meshes (and the time they were imported), and file names for mesh
textures (being extracted or only displayed). Plus, allows you to add your
own descriptive information to be stored in the file.
User Data Folder. Opens the folder containing preferences, the batch, script,
downloads, and preview movie folders, etc.
Make New Language Template. Creates a new XML file for user editing to
customize SynthEyes's dialogs and menus for non-English display. Enter
the name of the language, as it is to appear on the preferences panel,
then select a file in which to store the XML data (to be visible to
SynthEyes, it must be in the system or user script folders).
Submit for Batch. The current scene is submitted for batch processing by
writing it into the queue area. It will not be processed until the Batch
Processor is running, and there are no jobs before it.
Submit for Render. The current scene is submitted for batch processing: the
Save Sequence process will be run on the active shot to write out the re-
processed image sequence to disk as a batch task. Use the Save
Sequence dialog to set up the output file and compression settings first,
close it without saving, then Submit for Batch. You will be asked whether
or not to output both image sequences simultaneously for stereo. Other
multiple-shot renderings can be obtained by Sizzle scripting, or by
submitting the same file several times with different shots active.
Batch Process. SynthEyes opens the batch processing window and begins
processing any jobs in the queue.
Batch Input Queue. Opens a Windows Explorer to the batch input queue folder,
so that the queue can be examined, and possibly jobs removed or added.
Batch Output Queue. Opens a Windows Explorer to the batch output queue
folder, where completed jobs can be examined or moved to their final
destinations.
Exporter Outputs. Opens a Windows Explorer to the default exporter folder.

Edit Menu
Undo. Undo the last operation; the menu item text changes to show what will be
undone, such as Undo Select Tracker. See the Undo button, which can be
right-clicked to open a menu allowing multiple operations to be undone at once.


Redo. Re-do an operation previously performed, then undone. See the Redo
button, which can be right-clicked to open a menu allowing multiple
operations to be redone at once.
Select same color. Select all the (un-hidden) trackers with the same color as the
one(s) already selected.
Select All, etc. affect the tracker selections, not objects in the 3-D viewports.
Invert Selection. Select unselected trackers, unselect selected trackers.
Clear Selection. Unselect all trackers.
Lock Selection. Lock the selection so it can not be changed.
Delete. Delete selected objects and trackers.
Hide unselected. Hide the unselected trackers.
Hide selected. Hide the selected trackers.
Reveal selected. Reveal (un-hide) the selected trackers (typically from the
lifetimes panel).
Reveal nnn trackers. Reveal (un-hide) all the trackers currently hidden, i.e. nnn
of them.
Flash selected. Flashes all selected trackers in the viewports, making them
easier to find.
Polygonal Selections. Lassos follow the mouse motion to create irregular
shapes.
Rectangular Selections. Lassos sweep out a rectangular area.
Lasso Trackers. The lasso selects trackers.
Lasso Meshes Instead. The lasso selects meshes.
Edit Pivots. When checked, the pivot points of meshes and GeoH objects can
be moved in the 3-D and perspective windows. Hit-testing against the
meshes is disabled when active, only the pivot points or handles can be
selected. A mesh's pivot point is snapped to the center, sides and corners
of its bounding box, or to any vertex if it is sufficiently close to the vertex in
all three dimensions (ie not just a single 3-D viewport). You can adjust
the snap distance in the Mesh area of the preferences. Equivalent to
menu items on the 3-D viewport and perspective view's right-click menu,
and the GeoH perspective toolbar.
Lock Children. Turn this on if you want to move a pivot without moving the
pivots of its children. Normally, when you move the parent, the children
are carried along automatically as a consequence of their hierarchy. If the
child pivots are correct, and only the parent needs adjustment, turning this
on lets you do that, by creating opposite motions for the child pivots, so
they stay put. This setting is shared system wide; you can see it on the
main Edit menu and the perspective and 3D views' right-click menu.
Adjust Rig. Normally, if you drag a GeoH object's handles in the perspective or
3D views, only the unlocked joints move and receive keys. That prevents
you from messing up your carefully-constructed hierarchy. Turning on
Adjust Rig lets you move the rig in the perspective view, setting keys on all
joints' lock values. This is quite dangerous. This setting is shared system
wide; you can see it on the main Edit menu and the perspective and 3D
views' right-click menu.


Update Textures Now. All mesh textures will be re-extracted from the shot.
Redo Textures at Solve. Enable control; when on, texture calculations will re-run
whenever the scene is solved.
Add Notes. Creates a new note in the camera view and brings up the Notes
Editor to configure it.
Spinal aligning. Sets the spinal adjustment mode to alignment.
Spinal solving. Sets the spinal adjustment mode to solving.
Edit Scene Settings affects the current scene only.
Edit Preferences contains some of the same settings; these do not affect the
current scene, but are used only when new scenes are created.
Reset Preferences. Set all preferences back to the initial factory values. Gives
you a choice of presets for a light- or dark-colored user interface,
appropriate for office or studio use, respectively.
Edit Keyboard Map. Brings up a dialog allowing key assignments to be altered.

View Menu
Reset View. Resets the camera view so the image fills its viewport.
Expand to Fit. Same as Reset View.
Reset Time Bar. Makes the active frame range exactly fill the displayable area.
Rewind. Set the current time to the first active frame.
To End. Set the current time to the last active frame.
Play in Reverse. When set, replay or tracking proceeds from the current frame
towards the beginning.
Frame by Frame. Displays each frame, then the next, as rapidly as possible.
Quarter Speed. Play back at one quarter of normal speed.
Half Speed. Play back at one half of normal speed.
Normal Speed. Play back at normal speed (ie the rated frame per second
value), dropping frames if necessary. Note: when the Tracker panel is
selected, playback is always frame-by-frame, to avoid skipping frames in
the track.
Double Speed. Play back at twice normal speed, dropping frames if necessary.
Show Image. Turns the main images display in the camera view on and off.
Show Trackers. Turns on or off the tracker rectangles in the camera view.
Only Camera01's trackers. Show only the trackers of the currently-selected
camera or object. When checked, trackers from other objects/cameras are
hidden. The camera/object name changes each time you change the
currently-selected object/camera on the Shot menu.
Only selected trackers. Shows only selected trackers and their stereo spouse,
all others are hidden. Along with the shift-O key, this gives a quick way to
isolate on specific trackers during tracking. Note that you can still lasso
unselected trackers, which is an easy way to switch to other trackers
without having to exit and re-enter this mode.
The next items occur in the Tracker Appearance submenu.
Show All Tracker Names. When turned on, tracker names will be displayed for
all trackers in the camera, perspective, and 3D viewports.


Show Supervised Names. When turned on, tracker names will be displayed for
(only) the supervised trackers, in the camera, perspective, and 3D
viewports.
Show Selected Names. The tracker names are displayed for each selected
tracker.
Show Names in Viewport. The tracker names (as controlled by the above) are
also displayed in the 3D viewports.
Use alternate colors. Each tracker has two different colors. The alternate set is
displayed and editable when this menu item is checked, generally under
control of the Set Color by RMS Error script.
With Central Dot. Trackers are shown with a central dot in the camera view,
even for auto-trackers, offset trackers, and locked trackers, where the dot
would not normally be shown.
Show Only as Dots. All locked trackers are shown as solely a dot in the camera
view, reducing clutter.
Show as Dots in 3D Views. Trackers are shown as dots, not X's, in the 3D
viewports.
Show as Dots in Perspective. Trackers are shown as dots, not X's, in the
Perspective view.
Show Tracker Trails. When on, trackers show a trail into the future (red) and
past (blue).
Show 3-D Points. Controls the display of the solved position marks (Xs).
Show 3-D Seeds. Controls the display of the seed position marks (+s).
Show axes for stereo links. When checked, the cyan axis mark indicating that a
tracker has a coordinate-system lock is shown for stereo trackers.
Normally, these axis marks are suppressed to reduce clutter. You might
turn this on to help identify unpaired trackers on a stereo camera.
Show Tracker Radar. Visualization tool shows a circle at each tracker reflecting
the tracker's error on the current frame.
Show Reference Crosshairs. Enables display of animatable (per-tracker)
crosshairs for use as a reference for the selected tracker. Default
accelerator key: shift-+.
Show Planar 3D Pyramids. Enables the 3-D pyramid visualization display for 3-
D planar trackers.
Show Object Paths Submenu:
Show no paths. Paths aren't shown for any cameras or moving objects.
Show all paths. Paths are shown for all cameras and moving objects.
Show selected object. The path is shown for the selected camera or moving
object, if any.
Show selected and children. The paths are shown for the selected camera or
moving object, plus its GeoH children.
The following items resume in the main View menu.
Show Seed Paths. When on, values for the seed path and field of view/focal
length of the camera and moving objects will be shown and edited. These
are used for Use Seed Paths mode and for camera constraints. When
off, the solved values are displayed.


Show Meshes. Controls display of object meshes in the camera viewport.
Meshes are always displayed in the 3-D viewports.
Solid Meshes. When on, meshes are solid in the camera viewport; when off,
wireframe. Meshes are always wireframe in the 3-D viewports.
Outline Solid Meshes. Solid meshes have the wire frame drawn over top, to
better show facet locations.
Cartoon Wireframe Meshes. A special wireframe mode where only the outer
boundary and any internal creases are visible, intended for helping align
set and object models.
Only texture alpha. When on, meshes will display the alpha channel of their
texture, instead of the texture itself, simplifying over-painting.
Shadows. Show ground plane or on-object shadows in perspective window. This
setting is sticky from SynthEyes run to run.
Show Lens Grid. Controls the display of the lens distortion grid (only when the
Lens control panel is open).
Show Notes. Enables or disables the display of notes in all camera views.
Timebar background. Submenu with the following two entries. There is a
preference in the User Interface section to control which is selected at
startup.
Show cache status. The color of each frame of the timebar background
depends on whether the frame is in-cache or not (pink).
Show tracker count. The color of each frame of the timebar background
depends on the number of trackers active on that frame for the active
object, following a sequence corresponding to the graph editor
background. Can reduce performance if there are many trackers and
frames.
OpenGL Camera View. When enabled, the camera view is drawn using
OpenGL. When off, built-in graphics are used, possibly with an assist from
the Software mesh render below. The fastest option will depend on your
scene and mesh.
OpenGL 3-D Viewports. When enabled, the 3D viewports are drawn using
OpenGL. When off, built-in graphics are used, possibly with an assist from
the Software mesh render below. The fastest option will depend on your
scene and mesh.
Software mesh render. Applies only to camera and 3D viewports that are not
using OpenGL. When on, 3D meshes are rendered using a SynthEyes-
specific internal software renderer. For contemporary multi-core machines,
this will be much faster than the operating system's drawing routines, and
can be faster than OpenGL. Takes effect at startup, after that, see the
Software mesh render item on the View menu.
Double Buffer. Slightly slower but non-flickery graphics. Turn off only when
maximal playback speed required.
Sort Alphabetic. Trackers are sorted alphabetically, mainly for the up/down
arrow keys. Updated when you change the setting in the graph editor.
Sort by Error. Trackers are sorted from high error to low error.
Sort by Time. Trackers are sorted from early in the shot to later in the shot.


Sort by Lifetime. Trackers are sorted from shortest-lived to longest-lived.


Group by Color. In the sort order, all trackers with colors assigned will come
first, with each color grouped together, each sorted by the specified order,
followed by trackers at the default color. When this is off, trackers are not
grouped together; the order is determined solely by the sort order.
Only Selected Splines. When checked, the selected spline, and only the
selected spline, will be shown, regardless of its Show This Spline status.
Safe Areas. This is a submenu with checkboxes for a variety of safe areas you
can turn on and off individually (you can turn on both 90% and 80% at
once, for example). Safe areas are defined in the file safe14.ini in the
main SynthEyes folder; you can supply your own safe14.ini containing
personal safe-area definitions. Change the color via the preferences.

Track Menu
Add Many Trackers. After a shot is auto-tracked and solved, use Add Many
Trackers to efficiently add additional trackers.
Clean Up Trackers. The Clean Up Trackers dialog finds bad trackers or frames
and deletes them.
Coalesce Nearby Trackers. Brings up a dialog that searches for, and
coalesces, multiple trackers that are tracking the same feature at different
times in the shot.
Combine Trackers. Combine all the selected trackers into a single tracker, and
delete the originals.
Cross Link by Name. The selected trackers are linked to trackers with the same
name, except for the first character, on other objects. If the tracker's object
is solved Indirectly, it will not link to another Indirectly-solved object. It also
will not link to a disabled object.
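For example (with purely hypothetical names), a selected tracker named
"LDoor01" on one object would link to a tracker named "RDoor01" on another
object, since the two names match apart from their first characters.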
Drop onto mesh. If a mesh is positioned appropriately in the camera viewport,
drops all selected trackers onto the mesh, setting their seed coordinates.
Similar to Place mode of Perspective window.
Fine-tune Trackers. Brings up the fine-tune trackers dialog to automatically re-
track automatic trackers using supervised tracking. Reduces jitter on some
scenes.
Selected Only. When checked, only selected trackers are run while tracking.
Normally, any tracker which is not Locked is processed.
Stop on auto-key. Causes tracking to stop whenever a key is added as a result
of the Key spinner, making it easy to manually tweak the added key
locations.
Preroll by Key Smooth. When tracking starts from a frame with a tracker key,
SynthEyes backs up by the number of Key Smooth frames, and retracks
those frames to smooth out any jump caused by the key.
Do not auto-generate keys. Do not auto-generate keys (see next two entries).
Auto-generate for ZWTs. Auto-generate keys only for zero-weighted trackers
(ZWTs). (See next entry.)


Auto-generate for all. Auto-generate keys every Key (every) frame (from the
tracker control panel) based on a rough 3D location when the second key
is added to a tracker, if the camera/object is already solved.
Smooth after keying. When a key is added or changed on a supervised tracker,
update the relevant adjacent non-keyed frames, based on the Key Smooth
parameter. Defaults to OFF in versions after 1502, unlike 1502 and earlier.
Pan to Follow. The camera view pans automatically to keep selected trackers
centered. This makes it easy to see the broader context of a tracker.
Pan to Follow 3D. This variant keeps the solved 3-D point of the tracker
centered, which can be better for looking for systematic solve biases.
ZWT auto-calculation. The 3-D position of each zero-weighted tracker is
recomputed whenever it may have changed. With many ZWTs and long
tracks, this might slow interactive response; use this item to temporarily
disable recalculation if desired.
Lock Z-Drop on. Mimics holding down the 'Z' key in the camera view, so that the
Z-Drop feature is engaged: a selected tracker is immediately dropped at a
clicked-on location, rather than having to be dragged there. Saves wear
and tear on pinky finger. For convenience, meshes will not be selected if
you click on them when this control is engaged. The status bar will show if
this control is on.
Steady Camera. Predicts the next location of the tracker based on the last
several frames. Use for smooth and steady shots from cranes, dollies,
steadi-cams.
Hand-Held: Sticky. Use for very irregular features poorly correlated to the other
trackers. The tracker is looked for at its previous location. With both hand-
held modes off, trackers are assumed to follow fairly smooth paths.
Hand-Held: Use others. Uses previously-tracked trackers as a guide to predict
where a tracker will next appear, facilitating tracking of jittery hand-held
shots.
Re-track at existing. Use this mode to re-track an already-tracked tracker. The
search will be centered at the previously-determined location, preventing
large jumps in position. Used for fine-tuning trackers, for example.
Search from solved. When enabled, the search for an already-solved tracker
will begin at its predicted location, as long as the camera or object is also
solved on that frame. Useful for extending tracks after a solve, or for
rapidly tracking ZWTs. Not available for trackers with offsets. On by
default, it is a sticky preference.
No resampling. Supervised tracking works at the original image resolution.
Linear x 4. Supervised tracking runs at 4 times the original image resolution, with
linear interpolation between pixels. Default setting, suitable for usual DV
and prosumer cameras.
Mitchell2 x 4. Tracking runs at 4x resolution with B=C=1/3 Mitchell-Netravali
filtering, which produces sharper images than bilinear but less than
Lanczos: an intermediate setting if there are too many noise artifacts with
Lanczos2.


Lanczos2 x 4. Tracking runs at 4x resolution with N=2 Lanczos filtering, which
produces sharper images, of the image and the noise alike. Suitable primarily
for clean uncompressed source footage. Takes longer than Linear x 4. (A
sketch of the Lanczos kernel appears after these resampling entries.)
Lanczos3 x 4. Tracks at 4x with N=3 Lanczos, which is even sharper, but takes
longer too.
Linear x 8. Supervised tracking runs at 8x the original resolution. Not necessarily
any better than running at 4x.
Mitchell2 x 8. Tracking runs at 8x resolution with B=C=1/3 Mitchell-Netravali
filtering, which produces sharper images than bilinear but less than
Lanczos: an intermediate setting if there are too many noise artifacts with
Lanczos2 x 8.
Lanczos2 x 8. Tracks at 8x with N=2 Lanczos.
Lanczos3 x 8. Tracks at 8x with N=3 Lanczos.
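For background, the Lanczos settings above use the standard windowed-sinc
resampling kernel. Below is a minimal Python sketch of the N=2/N=3 kernel
shape; it is illustrative only, not SynthEyes's internal implementation, and the
function name is our own:

    import numpy as np

    def lanczos_kernel(x, n=2):
        # Standard Lanczos-N kernel: sinc(x) * sinc(x/n) for |x| < n, else 0.
        # np.sinc is the normalized sinc, sin(pi*x)/(pi*x).
        x = np.asarray(x, dtype=float)
        return np.where(np.abs(x) < n, np.sinc(x) * np.sinc(x / n), 0.0)

Larger N (Lanczos3 versus Lanczos2) gives a wider, sharper kernel, which is
why it preserves more detail, and more noise, and takes longer.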
(Tool Scripts). Tool scripts were listed at the end of the track menu in earlier
versions of SynthEyes. They now have their own Script menu.

Shot Menu
Add Shot. Adds a new shot and camera to the current workspace. This is
different than File/New, which deletes the old workspace and starts a new
one! SynthEyes will solve all the shots at the same time when you later hit
Go, taking links between trackers into account. Use the camera and object
list at the end of the Shot menu to switch between shots.
Edit Shot. Brings up the shot settings dialog box (same as when adding a shot)
so that you can modify settings. Switching from interlaced to noninterlaced
or vice versa will require retracking the trackers.
Change Shot Images. Allows you to select a new movie or image sequence to
replace the one already set up for the present shot. Useful to bring in a
higher or lower-resolution version, or one with color or exposure
adjustments. Warning: changes to the shot length or aspect ratio will
adversely affect previously-done work.
Save Sequence. Brings up the dialog to save the image sequence being output
from the image preparation dialog (stabilized, cropped, undistorted, etc)
to disk as a sequence or movie. Equivalent to buttons of the same name
on the Summary panel and Output tab of the image preprocessor (preparation
dialog).
Image Preparation. Brings up the image preparation dialog (also accessed from
the shot setup dialog), for image preparation adjustments, such as region-
of-interest control, as well as image stabilization.
Enable Prefetch. Turns the image prefetch on and off. When off, the cache
status in the timebar will not be updated as accurately.
Read 1f at a time. Preference! Tells SynthEyes to read only one frame at a
time, but continue to pre-process frames in parallel. This option can
improve performance when images are coming from a disk or network that
performs poorly when given many tasks at once.
Prefer DirectShow. Preference! Windows only. Tells SynthEyes to use the
Windows DirectShow movie-reading subsystem to read AVIs instead of
the older but simpler and more reliable AVI subsystem. DirectShow is
required to read AVI files >2GB.
Activate other eye. When the camera view is showing one of the views from a
stereo pair, switches to the other eye. Additionally, if there is a perspective
window locked to the other (now-displayed) eye, it is switched to show the
original camera view, swapping the two views.
Stereo Geometry. Brings up the Stereo Geometry control panel.
Add Moving Object. Adds a new moving object for the current shot. Add
trackers to this object and SynthEyes will solve for its trajectory. The
moving object shows as a diamond-shaped null in the 3-D workspace.
Remove Moving Object. Removes the current object and trackers attached to it.
If a camera, the whole shot goes with it.
Create Lens Grid Trackers. Part of the lens calibration workflow. The images
should be a big grid of spots; this creates a regular grid of trackers.
Lens Master Calibration. Runs the new fancy lens calibration system, based on
dot grids, checkerboard grids, or random dot patterns.
Process Lens Grid. Largely replaced by Lens Master Calibration, available for
backwards compatibility. Runs the lens calibration calculation, once
trackers are set up and fully tracked.
Write Distortion Maps. Creates forward and inverse lens distortion map images
for the current lens distortion configuration. You'll be prompted to set the
file location and type (from the list of 16-bit/float exporters). The out-of-
bounds handling and extension for the inverse map are set from
preferences.
(Camera and Object List). This list of cameras and objects appears at the end
of the shot menu, showing the current object or camera, and allowing you
to switch to a different object or camera. Selecting an object here is
different than selecting an object in a 3-D viewport.

Script Menu
User Script Folder. Opens your personal folder of custom scripts in the Explorer
or Finder. Handy for making or modifying your own. SynthEyes will mirror
the subfolder structure to produce a submenu tree, so you can keep yours
separate, for example.
System Script Folder. Opens SynthEyes's folder of factory scripts. Helpful for
quickly installing new script releases. SynthEyes will mirror the subfolder
structure to produce a submenu tree, so you can put all the unused scripts
into a common folder to simplify the view, for example.
Run Script. Runs a Sizzle, Python, or Synthia script, supplying it with information
to connect with this SynthEyes instance.
Script bars. Sub-menu allowing you to open script bars, which give you buttons
that run whatever scripts you want.
Script Bar Manager. The Script Bar Manager allows you to create, edit, and
delete script bars, which contain buttons that run scripts.


(Most-recent scripts area.) Shows the last few scripts you've run, for quick
access including a keyboard accelerator. You can control the number from
the Save/Export section of the preferences.
(Tool Scripts). Any tool scripts will appear here; selecting one will execute it.
Such scripts can reach into the current scene to act as scripted importers,
gather statistics, produce output files, or make changes. Standard scripts
include Filter Lens F.O.V., Invert Perspective, Select by type, Motion
capture calibrate, Shift constraints, etc. Scripts in the user's area are listed
with an asterisk (*) as a prefix. If a user's script has the same name as a
system script, thus replacing it, the user's script will be used, and it is
prefixed with three asterisks (***) to note this situation. Note that importers
and exporters have their own submenus on the File menu. See the Sizzle
reference manual for information on writing scripts.

Window Menu
(Control Panel List). Allows you to change the control panel using standard
Windows menu accelerator keystrokes.
No floating panels. The current active panel is docked on the left edge of the
main application window.
Float One Panel. The active panel floats in a small carrier window and can be
repositioned. If the active panel is changed, the carrier switches to the
new panel. This may make better use of your screen space, especially
with larger images or multiple monitor configurations.
Many Floating Panels. Each panel can be floated individually and
simultaneously. Clicking each panel's button either makes it open, or if it is
already open, closes it. Only one panel is the official active panel.
Important note: mouse, display, and keyboard operations can depend on
which panels are open, or which panel is active. These combinations may
not make sense, or may interact in undesirable ways without warning. If in
doubt, keep only a single panel open.
No Panel. Closes all open floating panels, or removes the fixed panel. Note that
one panel is still active for control purposes, even though it is not visible.
Useful to get the most display space, and minimize redraw time, when
using SynthEyes for RAM playback.
Hold Region Tracker Prep. Launch the Hold Tracker Preparation dialog, used
to handle shots with a mix of translation and tripod-type nodal pans.
Solver Locking. Launch the solver's lock control dialog, used to constrain the
camera path directly.
Path Filtering. Launch the solver's object path (and FOV) filtering dialog, which
controls any post-solve filtering.
Spinal Editing. Launch the spinal editing control dialog, for real-time updates of
solves.
Texture panel. Opens the texture extraction control panel.
Notes Editor. Brings up the Notes Editor.


Floating Camera. Click to create a new floating camera view. If the "Only one
floating camera, persp. view" preference (User Interface section) is on,
this menu item will alternately open and close a single camera view.
Floating Graph editor. Opens a new graph editor. If the "Only one floating
(other)" preference (User Interface section) is on, this menu item will
alternately open and close a single graph editor.
Floating Perspective. Click to open a new floating perspective window. If the
"Only one floating camera, persp. view " preference (User Interface
section) is on, this menu item will alternately open and close a single
perspective view.
Floating SimulTrack. Click to open a new floating SimulTrack window. If the
"Only one floating (other)" preference (User Interface section) is on, this
menu item will alternately open and close a single SimulTrack view.
Floating Hierarchy View. Click to open a new floating Hierarchy View. If the
"Only one floating (other)" preference (User Interface section) is on, this
menu item will alternately open and close a single Hierarchy View.
Synthia. Opens the Synthia instructible assistant, see Help/Synthia PDF.
Synthia Helpers. These menu items are also found on the right-click menu of
the IA button, and are here mainly to make them usable with keyboard
accelerators.
Abracadabra. You make your own magic here.
Legilimens. You make your own magic here.
Listen. Enables speech recognition on Windows.
Don't listen. Turns off speech recognition on Windows.
Talk. Allows Synthia to use speech synthesis.
Don't talk. Turns off speech synthesis.
Use the cloud. See the Synthia manual for information and possible
implications.
Don't use the cloud.
Forget you heard that. Purges the cloud buffer, in case inadvertent speech
or proprietary rules have been captured.
Minimize Synthia. Sends Synthia down to the taskbar/dock.
Exit Synthia. Causes Synthia to stop running. Use Minimize in most cases,
as starting and stopping Synthia is fairly intensive.
Show Time Bar. Turns the time-bar of the main window on or off; for example, if
you are using a graph editor's time bar on a second monitor, you can turn
off the time bar on the main display.
Viewport Layout Manager. Starts the viewport layout manager, which allows
you to change and add viewport configurations to match your working
style and display system geometry.
Click-on/Click-off. Quick toggle for click-on/click-off ergonometric mode, see
discussion in the Preferences panel.

Help Menu
Commands labeled with an asterisk (*) require a working internet
connection; those with a plus sign (+) require a properly-configured support login
as well. An internet connection is not required for normal SynthEyes operation,
only for acquiring updates, support, etc.
User Manual PDF. Opens the main SynthEyes User Manual PDF file. Be sure to
use the PDF's bookmarks as an extended table of contents, and the
search function to help find things.
Planar Tracking PDF. Opens the 3-D Planar Tracking Manual.
Phase PDF. Opens the Phase Reference manual.
Synthia PDF. Opens the manual for the Synthia Instructible Assistant. (Start
Synthia with the IA button)
Sizzle PDF. Opens the Sizzle scripting language manual.
SyPy Python PDF. Opens the manual for the SyPy python interface to
SynthEyes.
Recent Change List PDF. Opens the roster of changes to this version and prior
versions (as also found in the support area of the website).
License Agreement. Opens up the SynthEyes license agreement in a separate
viewer for your reference.
Read Messages+. Opens the web browser to a special message page
containing current support information, such as the availability of new
scripts, updates, etc. This page is monitored automatically; this is
equivalent to the Msg button on the toolbar.
Suggest Features+. Opens the Feature-Suggestion page for SynthEyes,
allowing you to submit suggestions, as well as read other suggestions and
comment and vote on them. (Not available on the demo version: send mail
to support with questions/comments/suggestions.)
Tech Support Site*. Opens the technical support page of the web site.
Tech Support Mail*. Opens an email to technical support. Be sure to include a
good Subject line! (Email support is available for one year after purchase.)
Report a credit*. Hey, we all want to know! Drop us a line to let us know what
projects SynthEyes has been used in.
Website/Home*. Opens the SynthEyes home page for current SynthEyes news.
Website/Tutorials*. Opens the tutorials page.
Website/Forum*. Opens the SynthEyes forum.
Purchase or Renew*. Takes you to web pages to purchase SynthEyes (demo)
or renew the license if it is nearing expiration. Convenience options that
are no different than heading to the ssontech.com website yourself.
Register. Launches a form to enter information required to request SynthEyes
authorization. Information is placed on the clipboard. See the registration
and authorization tutorial on the web site.
Authorize. After receiving new authorization information, copy it to the
clipboard, then select Authorize to load the new information.
Set Update Info. Allows you to update your support-site login, and control how
often SynthEyes checks for new builds and messages.
Check for Updates+. Manually tells SynthEyes to go look for new builds and
messages. Use this periodically if you have dialup and set the automatic-
check strategy to never. Similar to the D/L button on the toolbar.


Install Updated. If SynthEyes has successfully downloaded an updated build
(D/L button is green), this item will launch the installation.
About. Current version information.

Control Panel Reference
SynthEyes has the following control panels:
Summary Panel
Rotoscope Control Panel
Feature Control Panel
Tracking Control Panel
Lens Control Panel
Solver Control Panel
Coordinate System Control Panel
3-D Control Panel
Lighting Control Panel
Flex/Curve Control Panel
Select via the control panel selection portion of the main toolbar.

The Graph Editor icon appears in the toolbar area to indicate a nominal
workflow, but it launches a floating window.
Additional panels are described below:
Add Many Trackers Dialog
Advanced Features
Clean Up Trackers
Coalesce Nearby Trackers
Curve tracking control
Finalize Trackers
Fine-Tuning Panel
Green-screen control
Hard and Soft Lock Controls
Hold Tracker Preparation Tool
Image Preparation
Spinal Editing Control
The shot-setup dialog is described in the section Opening the Shot.

Spinners
SynthEyes uses spinners, the stacked triangles at the right of each
numeric field on the control panels, to permit easy adjustment of the field's
value. The spinner control provides the following features:
- Click either angle arrow (<, >) to increase or decrease the value in steps,
- Drag up or right within the control to smoothly increase the value, or down or left to decrease it,
- Has a red underline on key frames,
- Right-click to remove a key, or if none, to reset to a predefined default value,
- Shift-drag or -click to change the value much more rapidly,
- Control-drag or -click to change the value slowly for fine-tuning.

Tool Bar

New, Open, Save, Undo, Redo. Buttons. Right-clicking the Undo or Redo button
opens a menu listing all the available undo or redo operations, so
you can click once to easily undo the most recent 5, for example. A
preference in the User Interface area can request that the list always be
shown, which can be easier than thinking about left or right click. There
are also tooltips that show the top undo or redo operation, especially for
the legacy mode. In aggregate, the "new" way will likely be more effective.
(Control Panel buttons). Changes the active control panel.
Forward/Backward ( / ). Button. Changes the current playback and
tracking direction.
Reset Time . Button. Resets the timebar so that the entire shot is visible.
Fill . Button. The camera viewport is reset so that the entire image becomes
visible. Shift-fill sets the zoom to 1:1 horizontally. Control-click fills the
image, but centers it instead of putting it at top-left. If you click control-
shift-home, all camera and 3-D views are reset. The default key for this
button is HOME, with the same combinations of shift and control keys.
Active Tracker Host: Camera01. Dropdown. Active camera/object.

Viewport Configuration Select. List box. Selects the viewport configuration.
Hold SHIFT when changing configuration to not change the definition of
the room. Use the viewport layout manager on the Window menu to
modify or add configurations.
Msg. Button. Lights up green when there are messages from the factory
available. Click the button to open your web browser to view them. See
Messages from Home. With properly-configured non-expired update
information, shows the customer-only messages, without it, shows the
more public demo messages.
D/L. Button. Will show a numeric version number such as 1604 when a new
version is available. Download new builds of SynthEyes, see Automatic
Downloads. Requires properly-configured non-expired update information.
Color coding: green, new version available and ready to install; yellow,
download in progress; red, no usable update access information (see
Help/Set Update Info); blue, new build is available but requires support
renewal for access and use.
Sug. Button. Brings up the web page for making suggestions about SynthEyes.
NOT the place for technical support! Be sure to check this manual; many
suggestions already exist. Requires properly-configured non-expired
update information.
IA. Button. Starts the Synthia instructible assistant, see Help/Synthia PDF. This
button also has a right-click menu, see the manual for Window/Synthia
Helpers for details.

Play Bar

Rewind Button. Rewind back to the beginning of the shot.


Back Key Button. Go backwards to the previous key of the selected tracker
or object.
Frame Number . Numeric Field. Sequential frame number, starting at
zero or at 1 if selected on the preferences.
Forward Key Button. Go forward to the next key of the selected tracker or
object.
To End Button. Go to the last frame of the shot.
Frame Backwards . Button. Go backwards one frame. Auto-repeats.
Play/Stop / . Button. Begin playing the shot, forwards or backwards, at the
rate specified on the View menu. The play button changes from right-
pointing to left-pointing when playback will be backwards.
Frame Forward . Button. Go forwards one frame. Auto-repeats.


Summary Panel

AUTO. (the big green one!) Run the entire match-move process: create
features (blips), generate trackers, and solve. Optionally runs tracker
cleanup and auto-place. If no shot has been set up yet, you will be
prompted for that first, so this is truly a one-stop button. See also Submit
for Batch.
Motion Profile. Select one of several profiles reflecting the kinds of motion the
image makes. Use Crash Pan for when the camera spins quickly, for
example, to be able to keep up. Or use Gentle Motion for faster
processing when the camera/image moves only slightly each frame.
Green Screen. Brings up the green-screen control dialog.
Zoom Lens. Check this box if the camera zooms.
On Tripod. Check this box if the camera was on a tripod.
Hold. Animated Button. Use to create hold regions to handle shots with a mix of
normal and tripod-mode sections.
Corners. When on, the corner detector is run when auto-tracking. This checkbox
is a sticky preference.


Fine-tune. Performs an extra stage of re-tracking between the initial feature
tracking and the solve. This fine-tuning pass can improve the sub-pixel
stability of the trackers on some shots.
Settings. Launches the settings panel for fine-tuning.
Run Auto-tracker. Runs the automatic tracking stage, then stops.
Master Solution Reset ( ). Clear any existing solution: points and object
paths.
Solve. Runs the solver.
Not solved. This field will show the overall scene error, in horizontal pixels, after
solving.
Run tracker cleanup. When checked, SynthEyes will run the Edit/Clean up
trackers process automatically when you use AUTO to track and solve a
shot. Experts only! Only use this when the solve will already be very
clean; if there are bad trackers (say due to actors being tracked and not
matted out), then keep this off, or many trackers will be killed
unnecessarily and incorrectly. When in doubt, keep this off and verify
tracking and clean up trackers manually. Initialized from a preference.
Run auto-place. When checked, SynthEyes will run the auto-place algorithm
automatically at the end of an AUTO track and solve, setting up a good-
guess coordinate system. Initialized from a preference.
Place. Automatically place the scene by creating a set of position coordinate
system constraints. The placement is a "good guess;" clicking it
repeatedly will generate different possible placements for your
consideration. If you hold down SHIFT, only the currently-selected
trackers will be considered to potentially form the ground plane or wall.
The Place algorithm runs automatically during the AUTO button sequence
if the Run auto-place checkbox is set.
Coords. Initiates a mode where 3 trackers can be clicked to define a coordinate
system. After the third, you will have the opportunity to re-solve the scene
to apply the new settings. Same as *3 on the Coordinate System panel.
Lens Workflow. Button. Starts a script to help implement either of the two main
lens-distortion workflows, adjusting tracker data and camera field of view
to match distortion.
Save Sequence. Button. Launches the dialog to save the image preprocessor's
output sequence, typically to render new images without distortion. Same
as save sequence on the Output tab of the Image Preprocessor.

Rotoscope Control Panel


The roto panel controls the assignment of a shot's blips to cameras or
objects. The roto mask can also be written as an alpha channel or RGB image
using the image preprocessor.


Spline/Object List. An ordered list of splines and the camera or object they are
assigned to. The default Spline1 is a rectangle containing the entire
image. A feature is automatically assigned to the camera/object of the
first spline in the list that contains the feature: splines at the top of the list
are on top in the image. Double-click a spline to rename it as desired.
Camera/Object Selector. Drop-down list. Use to set the camera/object of the
spline selected in the Spline/Object List. You can also select Garbage to
set the spline as a garbage matte.
Show this spline. Checkbox. Turn on and off to show or hide the selected
spline. Also see the View/Only Selected Splines menu item.
Lock this spline. Checkbox. Turn on to lock this spline in the camera view, even
when the roto panel is open, so that you don't inadvertently change one
spline while working on another.
Key all CPs if any. Checkbox. When on, moving any control point will place a
key on all control points for that frame. This can help make keyframing
more predictable for some splines.
Enable. Button. Animatable spline enable. Right-click, shift-right, and control-
right delete a key, truncate, or delete all keys, respectively.
Create Circle. Lets you drag out circular splines.
Create Box. Lets you drag out rectangular splines.

Magic Wand. Lets you click out arbitrarily-shaped splines with many control
points. Right-click or hit the ESCape key to stop adding points.
Color. Swatch.
Move Up. Push button. Moves the selected spline up in the Spline/Object List, making it higher priority.
Move Down. Push button. Moves the selected spline down in the Spline/Object List, making it lower priority.
Delete. Deletes the currently-selected spline.
Shot Alpha Levels. Integer spinner. Sets the number of levels in the alpha
channel for the shot. For example, select 2 for an alpha channel
containing only 0 or 1(255), which you can then assign to a camera or
moving object.
Object Alpha Level. Spinner. Sets the alpha level assigned to the current
camera or object. For example, with 2 alpha levels, you might assign level
0 to the camera, and 1 to a moving object. The alpha channel is used to
assign a feature only if it is not contained in any of the splines (see the sketch at the end of this panel's listing).
Import Tracker to CP. Button. When activated, select a tracker then click on a
spline control point. You'll be asked whether you want to import the
tracker's relative motion (the CP won't change on the current frame), or
the tracker's actual position (the CP will leap to the tracker's position). The
control point will be keyed on each frame where the tracker is valid.
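
The assignment logic described above (Spline/Object List order, garbage mattes, and alpha levels) can be summarized with a small illustrative sketch. This is ordinary Python with invented names; it is not part of any SynthEyes scripting interface, and the exact alpha quantization shown is an assumption.

    # Illustrative sketch only -- not SynthEyes code or a SynthEyes API.
    def assign_feature(x, y, splines, alpha_levels, alpha_at, level_assignments):
        # The first spline in the list that contains the feature wins;
        # splines at the top of the list are checked first.
        for spline in splines:
            if spline.contains(x, y):
                if spline.target == "Garbage":
                    return None               # garbage matte: feature discarded
                return spline.target          # that spline's camera or object
        # Not inside any spline: fall back to the shot's alpha channel.
        # With N alpha levels, the 0..255 alpha value is quantized to 0..N-1
        # (assumed quantization; the manual only states the number of levels).
        level = round(alpha_at(x, y) * (alpha_levels - 1) / 255.0)
        return level_assignments.get(level)   # None if that level is unassigned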

Feature Control Panel

Motion Profile. Select one of several profiles reflecting the kinds of motion the
image makes. Use Crash Pan for when the camera spins quickly, for
example, to be able to keep up. Or use Gentle Motion for faster
processing when the camera/image moves only slightly each frame.
Clear all blips. Clears the blips from all frames. Use to save disk space after
blips have been peeled to trackers.
Blips this frame. Push button. Calculates features (blips) for this frame.
Blips playback range. Push button. Calculates features for the playback range
of frames.
Blips all frames. Push button. Calculates features for the entire shot. Displays
the frame number while calculating.
Delete. Button. Clears the skip frame channel from this frame to the end of
the shot, or the entire shot if Shift is down when clicked.

Skip Frame. Checkbox. When set, this frame will be ignored during automatic
tracking and solving. Use (sparingly) for occasional bad frames during
explosions or actors blocking the entire view. Camera paths are spline
interpolated on skipped frames.
Advanced. Push button. Brings up a panel with additional control parameters.
Link frames. Push button. Blips from each frame in the shot are linked to those
on the prior frame (depending on tracking direction). Useful after changes
in splines or alpha channels.
Peel. Mode button. When on, clicking on a blip adds a matching tracker, which
will be utilized by the solving process. Use on needed features that were
not selected by the automatic tracking system.
Peel All. Push button. Causes all features to be examined and possibly
converted to trackers.
To Golden. Push button. Marks the currently-selected trackers as golden, so
that they wont be deleted by the Delete Leaden button.
Delete Leaden. Push button. Deletes all trackers, except those marked as
golden. All manually-added trackers are automatically golden, plus any
automatically-added ones you previously converted to golden. This button
lets you strip out automatically-added trackers.
Blip Display Preferences:
Show Trails. (sticky preference) When checked, the trail of a blip is shown,
allowing a quick assessment of its length and quality. The maximum
displayed length of the trail is controlled by the Trail length preference.
Only Unassigned. (sticky preference) When checked, only blips and trails that
are not already assigned to an existing tracker are shown in the viewport,
reducing clutter.
Show type. Drop-down list. Allows you to show all blips, only corners, or only spots.
Most useful when you only want to add additional corner trackers.
Min. Trail. Sets the minimum required length of a blip's trail in order for it to be
shown. Start with a large minimum to get the longest potential trackers,
then reduce it until you have the number you desire.

Tracking Control Panel


The tracker panel has two variations with different sizes for the tracker
view area, and slightly different button locations. The wider version gives a better
view of the interior of the panel, especially on high-resolution displays. The
smaller version is a more compact layout that reduces mouse motion, and
because of the reduced size, is better for use on smaller laptops. Select the
desired version using the Wider tracker-view panel preference.

Tracker Mini-View. Shows the tracker's interior: the inner box of the tracker. Left Mouse: Drag the tracker location. Middle Scroll: Advance the current frame, tracking as you go. Right Mouse: Add or remove a position key at the current frame. Or, cancel a drag in progress. A small crosshair shows the offset position, if one is present and offset is not enabled.
Create. Mode Button. When turned on, depressing the left mouse button in
the camera view creates new trackers. When off, the left mouse button
selects and moves trackers.
Delete. Button (also Delete key). Deletes the selected tracker(s) or other
objects. If nothing is selected when the button is clicked, then delete-
tracker-mode is entered or left. In this mode, simply clicking or lassoing a
tracker deletes it, which is helpful when cleaning up autotracks before they
have been solved. Delete-tracker-mode is automatically exited when this
tracker panel is closed (even if a new one opens immediately thereafter).
Channel. Button. Selects the channel to be used for this particular tracker, either normal RGB, red, green, blue, luminance, or
alpha. Alpha is usable only for planar trackers and may not be selected
for other tracker types. If all trackers will use the same channel, make the
channel selection on the image preprocessor's Rez tab, which will greatly
reduce RAM storage requirements for the shot (and enable Alpha to be
tracked if necessary).
Finish. Button. Brings up the finalize dialog box, allowing final filtering and gap filling as a tracker is locked down.
Lock. Button. Lock or unlock the tracker. Turn on when tracker is complete.
Not animated.
Tracker Type. Button. Flyout to change the tracker type to
normal match-mode, dark spot, bright spot, or symmetric spot.
Direction. Button. Configures the tracker for forwards or backwards tracking:
it will only track when playing or stepping in the specified direction.
Enable. Button. Animated control turns tracker on or off. Turn off when tracker
gets blocked by something or goes offscreen, turn back on when it
becomes visible again. Right-click, shift-right, and control-right delete a
key, truncate, or delete all keys, respectively.
Contrast. Number-less spinner. Enhances contrast in the Tracker Mini-View
window.
Bright. Number-less spinner. Turns up the Tracker Mini-View brightness.
Color. Rectangular swatch. Sets the display color of the tracker for the camera,
perspective, and 3-D views.
Now. Button. Adds a tracker position key at the present location and frame.
Right-click to remove a position key. Shift-right-click to truncate, removing
all following keys.
Key. Spinner. Tells SynthEyes to automatically add a key after this many frames,
to keep the tracker on track (relevant only for pattern-matching tracking).
Key Smooth. Spinner. Tracker's path will be smoothed for this many frames
before each key, so there is no glitch due to re-setting a key.

Pos. H and V spinners. Tracker's horizontal and vertical position, from -1 to +1.
You can delete a key (border is red) by right-clicking. Shift-right-clicking
will truncate the tracker after this frame.
By Hand. (Hand animation) Button. Animated. Used to suspend tracking and
hand-animate the tracker over a range of frames, typically to handle
occlusion. Right-click, shift-right, and control-right delete a key, truncate,
or delete all keys, respectively.
Cliff. (Cliffhanger) Button. Normally (with the button off), trackers are
automatically shut off when they reach the edge of the image. That is
inconvenient if the tracker hovers on the edge, so mark it as a cliffhanger,
and the automatic shutoff will be disabled. Turns on automatically when
you re-enable a tracker that was just automatically shut off.
Size. Size and aspect spinners. Animated. Size and aspect ratio (horizontal
divided by vertical size) of the interior portion of the tracker.
Search. H and V spinners. Animated. Horizontal and vertical size of the region
(excluding the actual interior) that SynthEyes will search for the tracker
around its position in the prior frame. "Prior frame" means the adjacent
lower-numbered frame for forward tracking, the adjacent higher-numbered
frame for backward tracking. Shift-right to truncate, control-right to clear.
Offset. Button. Animated. When on, the offset channels will be added to the
tracked location to determine the final tracker location. When off, the offset
position is not added in. Offset tracking is typically used for occlusion or to
handle nearby trackers. Right-click, shift-right, and control-right delete a
key, truncate, or delete all keys, respectively.
Offset. H and V spinners. Animated. These give the desired final position of the
tracker, when the Offset button is on, relative to the 2-D tracking position.
Shift-right to truncate, control-right to clear.
New+. Button. Clones the currently-selected tracker(s) to form new trackers,
typically to be used for fine-detail offset trackers. If a tracker being cloned
has an offset channel, you will be asked whether you wish to keep it, or
clear it after baking in the offsets as position keys (making it much easier
to add additional animated offsets).
Weight. Spinner. Animated. Defaults to 1.0. Multiplier that helps determine the
weight given to the 2-D data for each frame from this tracker. Higher
values cause a closer match, lower values allow a sloppier match. You
can reduce weight in areas of marginal accuracy for a particular tracker.
Adjust the key at the first frame to affect the entire shot. Shift-right to
truncate, control-right to clear. WARNING: This control is for experts and
should be used judiciously and infrequently. It is easy to use it to
mathematically destabilize the solving process, so that you will not get a
valid solution at all. Keep near 1. Also see ZWTs below.
Exact. For use after a scene has already been solved: set the tracker's 2-D position to the exact re-projected location of the tracker's 3-D position. A quick fix for spurious or missing data points; do not overuse. See the section on filtering and filling gaps. Note: applied to a zero-weighted-tracker, error will not become zero because the ZWT will re-calculate using the new 2-D position, yielding a different 3-D and then 2-D position.
F: n.nnn hpix. (display field, right of Exact button) Shows the distance, in
horizontal pixels, between the 2-D tracker location and the re-projected 3-
D tracker location. Valid only if the tracker has been solved.
ZWT. When on, the tracker's weight is internally set to zero; it is a zero-weighted-tracker (ZWT), which does not affect the camera or object's path
at all. As a consequence, its 3-D position will be continually calculated as
you update the 2-D track or change the camera or object path, or field of
view. The Weight spinner of a ZWT will be disabled, because the weight is
internally forced to zero and special processing engaged. The grayed-out
displayed value will be the original weight, which will be restored if ZWT
mode is turned off.
T: n.nnn hpix. (display field, right of ZWT button) Shows the total error, in
horizontal pixels, for the solved tracker. This is the same error as from the
Coordinate System panel. It updates dynamically during tracking of a
zero-weighted tracker.
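
The F: and T: fields are related in a straightforward way: F: is the per-frame distance between the tracked and re-projected positions, and T: is the root-mean-square of those distances over the solved frames. The lines below show the generic computation; the names are invented for illustration, and the conversion into horizontal-pixel units is handled internally by SynthEyes.

    # Illustrative sketch only -- not SynthEyes code or a SynthEyes API.
    import math

    def frame_error(tracked_xy, reprojected_xy):
        # F: distance on one frame between the 2-D tracked location and the
        # re-projected 3-D location, both already in horizontal-pixel units.
        dx = tracked_xy[0] - reprojected_xy[0]
        dy = tracked_xy[1] - reprojected_xy[1]
        return math.hypot(dx, dy)

    def total_error(per_frame_errors):
        # T: root-mean-square of the per-frame errors over the solved frames.
        return math.sqrt(sum(e * e for e in per_frame_errors) / len(per_frame_errors))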

Planar Control Panel


This panel controls 2- and 3-D planar tracking and is part of the Planar
room on the main tab bar. It is based on, and generally similar to, the Tracking
control panel. For full details and information on planar tracking, please see the
Planar Tracking Manual. There is an equivalent reference section there
describing this panel.

Tip: If you don't see the Planar room, see I Don't See the Planar Room
in the Planar Tracking Manual.

Planar Options Control Panel


This panel contains additional options for planar tracking that are related
to masking out occluding objects.
On larger monitors, it is found directly under the Planar Control Panel in
the Planar room. On smaller monitors and laptops, click the gear icon under the
planar control panel's tracker mini-view to launch a floating Planar Options panel.
For full details and information on planar tracking, please see the Planar
Tracking Manual. There is an equivalent reference section there describing this
panel.

Lens Control Panel

Field of View. Spinner. Field of view, in degrees, on this frame. Right-clicking clears 1 key, shift-right truncates, control-right clears totally.
Focal Length. Spinner. Focal length, computed using the current Back Plate Width on Scene Settings. Provided for illustration only (see the sketch at the end of this panel's listing).
Add/Remove Key. , Button. Add or remove a key to the field of view
(focal length) track at this frame.
Known. Radio Button. Field of view is already known (typically from an earlier
run) and is taken from the field of view seed track. May be fixed or
zooming. You will be asked if you want to copy the solved FOV track to
the seed FOV track; do that if you want to lock down the solved FOV.
Fixed, Unknown. Radio Button. Field of view is unknown, but did not zoom
during the shot.
Fixed, with Estimate. Radio Button. Camera did not zoom, and a reasonable
estimate of the field of view is available and has been set into the
beginning of the lens seed track. This mode can make solving slightly
faster and more robust. Important: verify that you know, and have
entered, the correct plate size before using any on-set focal length
values. A correct on-set focal length with an incorrect plate size makes the
focal length useless, and this setting harmful.
Zooming, Unknown. Radio Button. Field of view zoomed during shot.

Identical Lens Weight. Spinner. A 0-120 solver weight for stereo shots, when
non-zero it forces the two lens FOVs towards being identical. Use with
care for special circumstances; lenses are rarely identical!
Lens Distortion. Spinner. Show/change the lens distortion coefficient.
Calculate Distortion. Checkbox. When checked, SynthEyes will calculate the
lens distortion coefficient. You should have plenty of well-distributed
trackers in your shot.
Lens Workflow. Button. Starts a script to help implement either of the two main
lens-distortion workflows, adjusting tracker data and camera field of view
to match distortion.
Add Line. Checkbox. Adds an alignment line to the image that you can line up
with a straight line in the image, adjust the lens distortion to match, and/or
use it for tripod or lock-off scene alignment.
Kill Line. Checkbox. Removes the selected alignment line (the delete key also
does this). Control-click to delete all the alignment lines at once.
Axis Type. Drop-down list. Configures the line for alignment: Not oriented (the line is only there for lens distortion determination), parallel to one of the three axes, along one of the three (XYZ) axes, or along one of the three axes with the length specified by the spinner.
<->. Button. Swaps an alignment line end for end. The direction of a line is
significant and displayed only for on-axis lines.
Length. Spinner. Sets the length of the line to control overall scene sizing during
alignment. Only a single line, which must be on-axis, can have a length.
At nnnf. Button. Shows (not set) if no alignment lines have been configured.
This button shows the (single) frame on which alignment lines have been
defined and alignment will take place; clicking the button takes you to this
frame. Set each time you change an alignment line, or right-click the
button to set it to the current frame.
Align! Button. Aligns the scene to match the alignment lines defined, on the
frame given by the At button. Other frames are adjusted
correspondingly. Shift-click to look more aggressively for a solution. To
sequence through all the possible solutions one at a time, control-click
this button.
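
As background for the Field of View and Focal Length spinners above, field of view and focal length are two expressions of the same quantity once a back plate width has been chosen. The lines below show the standard relationship for a horizontal field of view; the function and variable names are illustrative only, and the Back Plate Width value is the one set in Scene Settings.

    # Illustrative sketch of standard lens geometry -- not SynthEyes code.
    import math

    def focal_length(fov_degrees, back_plate_width):
        # Focal length in the same units as the plate width (e.g. mm).
        return (back_plate_width / 2.0) / math.tan(math.radians(fov_degrees) / 2.0)

    def field_of_view(focal, back_plate_width):
        # Inverse: horizontal field of view in degrees.
        return math.degrees(2.0 * math.atan((back_plate_width / 2.0) / focal))

This is also why a correct on-set focal length entered with an incorrect plate size is useless: the same angle of view maps to a different focal length for every plate width.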

Solver Control Panel

Go! Button. Starts the solving process, after tracking is complete.


Master Reset. Button. Resets all cameras/objects and the trackers on them,
though all Disabled camera/objects are left untouched. Control-click to
clear the seed path, and optionally the seed FOV (after confirmation).
Error. Number display. Root-mean-square error, in horizontal pixels, of all trackers associated with this camera or object.
Seeding Method. Upper drop-down list controlling the way the solver begins its
solving process, chosen from the following methods:
Auto. List Item. Selects the automatic seeding (initial estimation) process,
for a camera that physically moves during the shot.
Refine. List item. Resumes a previous solving cycle, generally after
changes in trackers or coordinate systems.
Tripod. List Item. Use when the camera pans, tilts, and zooms, but does
not move.
Refine Tripod. List item. Resumes a previous solving cycle, but indicates
that the camera was mounted on a tripod.
Indirectly. List Item. Use for camera/objects which will be seeded from
links to other camera/objects, for example, a DV shot indirectly
seeded from digital camera stills.

GeoHTrack. List Item. Geometric Hierarchy tracking using meshes and/or hierarchies of moving objects. See separate manual.
Individual. List Item. Use for motion capture. The object's trackers are
solved individually to determine their path, using the same feature
on other Individual objects; the corresponding trackers are linked
in one direction.
Points. List Item. Seed from seed points, set up from the 3-D trackers
panel. Use with on-set measurement data, or after Set All on the
Coordinate Panel. You should still configure coordinate system
constraints with this mode: some hard locks and/or distance
constraints.
Path. List Item. Uses the camera/object's seed path as a seed, for
example, from a previous solution or a motion-controlled camera.
Disabled. List Item. This camera/object is disabled and will not be solved
for.
Directional Hint. Second drop-down list. Gives a hint to speed the initial
estimation process, or to help select the correct solution, or to specify
camera timing for Individual objects. Chosen from the following for
Automatic objects:
Undirected (previously Automatic). List Item. With the Undirected
setting selected, SynthEyes determines the motion.
Left. List Item. The camera moved generally to its left.
Right. List Item. The camera moved generally to its right.
Up. List Item. The camera moved generally upwards.
Down. List Item. The camera moved generally downwards.
Push In. List Item. The camera moved forward (different than zooming
in!).
Pull Back. List Item. The camera moved backwards (different than
zooming out!).
Camera Timing Setting. The following items are displayed when Individual is
selected as the object solving mode. They actually apply to the entire shot,
not just the particular object.
Sync Locked. List Item. The shot is either the main timing reference, or is
locked to it (ie, gen-locked video camera).
Crystal Sync. List Item. The camera has a crystal-controlled frame rate
(ie a video camera at exactly 29.97 Hz), but it may be up to a frame
out of synchronization because it is not actually locked.
Loosely Synced. List item. The camera's frame rate may vary somewhat from nominal, and will be determined relative to the reference.
Notably, a mechanical film camera.
Slow but sure. Checkbox. When checked, SynthEyes looks especially hard (and
longer) for the best initial solution.
Constrain. Checkbox for experts. When on, constraints set up using the
coordinate system panel are applied rigorously, modifying the tracker
positions. When off, constraints are used to position, size, and orient the
solution, without deforming it. See alignment vs constraints.

Hold. Animated Button. Use to create hold regions to handle shots with a mix of
normal and tripod-mode sections. Right-click, shift-right, and control-right
delete a key, truncate, or delete all keys, respectively.
Begin. Spinner and checkbox. Numeric display shows an initial frame used by
SynthEyes during automatic estimation. With the checkbox checked, you
can override the begin frame solution. Either manually or automatically,
the camera should have panned or tilted only about 30 degrees between
the begin and end frames, with as many trackers as possible that are
simultaneously active on both these two frames. If the camera does
something wild between the automatically-selected frames, or if their data
is particularly unreliable for some reason, you can manually select the
frames instead. The selected frame will be selected as you adjust this, and
the number of frames in common shown on the status line.
End. Spinner and checkbox. Numeric display shows a final frame used by
SynthEyes during automatic estimation. With the checkbox checked, you
can override the end frame solution.
World size. Spinner. Rough estimate of the size of the scene, including the
trackers and motion of the camera. The world size is used to automatically
size objects in the viewports, and to provide mathematical context when
solving the scene. The spinner is underlined in red (key mark) when the
world sizes are not all the same. (The world size does not animate).
(All). Checkbox with no text, immediately right of "World Size", left of the actual
spinner. When set, changing the spinner changes the world size of all
cameras and objects. This is on by default as it is most useful. When off,
you can change the world sizes individually, typically to try to get a
different solution on very different solves.
Transition Frms. Spinner. When trackers first become usable or are about to
become unusable, SynthEyes gradually increases or decreases their
impact on the solution, to maintain an undetectable transition. The value
specifies how many frames to spread the transition over. (A rough sketch of this ramp appears at the end of this panel's listing.)
Filtering control. Launches the Path Filtering control dialog to configure post-
solve filtering. The button is lit up if any filtering (applied on each solve) is
present.
Overall Weight. Spinner. Defaults to 1.0. Multiplier that helps determine the
weight given to the data for each frame from this object's trackers. Lower
values allow a sloppier match, higher values cause a closer match, for
example, on a high-resolution calibration sequence consisting of only a
few frames. WARNING: This control is for experts and should be used
judiciously and infrequently. It is easy to use it to mathematically
destabilize the solving process, so that you will not get a valid solution at
all. Keep near 1.
More. Button. Brings up or takes down the Hard and Soft Lock Controls dialog.
Axis Locks. 7 Buttons. When enabled, the corresponding axis of the current
camera or object is constrained to match the corresponding value from the
seed path. These constraints are enforced either loosely after solving,
with Constrain off, or tightly during solving, with Constrain on. See the section on Constraining Camera or Object Position. Animated. Right-click, shift-right, and control-right delete a key, truncate, or delete all keys, respectively.
L/R. Left/right axis (ie X)
F/B. Front/back axis (Y or Z)
U/D. Up/down axis (Z in Z-up or Y in Y-up)
FOV. Camera field of view (available/relevant only for Zoom cameras)
Pan. Pan angle around ground plane
Tilt. Tilt angle up or down from ground plane
Roll. Roll angle from vertical
Never convert to Far. Normally, SynthEyes monitors trackers during 3-D solves,
and automatically converts trackers to Far if they are found to be too far
away. This strategy backfires if the shot has very little perspective to start
with, as most trackers can be converted to far. Use this checkbox if you
wish to try obtaining a 3-D solve for your nearly-a-tripod shot.
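
The Transition Frms setting above describes a gradual ramp-in and ramp-out of each tracker's influence. As a rough mental model only, a linear ramp such as the one below captures the idea; the actual internal weighting used by SynthEyes is not documented here and may differ.

    # Rough mental model only -- not SynthEyes' actual weighting scheme.
    def transition_weight(frame, first_valid, last_valid, transition_frames):
        # Full influence in the middle of a tracker's lifetime, ramping in and
        # out over its first and last 'transition_frames' frames.
        if frame < first_valid or frame > last_valid:
            return 0.0
        span = max(1, transition_frames)
        ramp_in = (frame - first_valid + 1) / span
        ramp_out = (last_valid - frame + 1) / span
        return min(1.0, ramp_in, ramp_out)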

Phase Control Panel (Not in Intro Version)


The Phase control panel is used to configure phases, which provide
detailed instructions to the Solver on how to solve more difficult scenes. In
simpler scenes, no phases are required at all.

The Phase Panel has only a few basic elements; its contents are primarily
determined by the phase selected in the phase viewport, here a Camera Height
phase named Phase2. In the capture, you'll notice that the bottom edge has been
cut off, because the Camera Height phase has few parameters and there is
nothing else below.
(Phase name), ie Phase2. Editable selector. Shows the name of the selected
phase, if exactly one is selected. Double-click on the actual name to
change it. Use the drop-down to select a different phase, or use the scroll
wheel to quickly look through many phase user interfaces.

(Swatch), ie blue. Sets the color of the selected phase. Since phases are red
when selected, you will not see this color until the phase is unselected.

(Phase kind name), ie Camera Height. Shows what kind of phase is selected.
Note that a phase can not be changed to a different kind; create a
replacement.

(Phase parameters), ie Camera, Frame, etc. When exactly one phase is selected, its parameters are shown and changeable in this area. The meaning of the parameters depends on the kind of phase; see the writeup for that kind of phase, and the tooltips obtained by hovering the mouse over them.

Coordinate System Control Panel

Camera/Object. Drop-down list. Shows what object or camera the tracker is associated with; change it to move the tracker to a different object or
camera on the same shot (or, you can clone it there for special situations).
Entries beginning with asterisk(*) are on a different shot with the same
aspect and length; trackers may be moved there, though this may
adversely affect constraints, lights, etc.
*3. Button. Starts and controls three-point coordinate setup mode. Click it once to
begin, then click on origin, on-axis, and on-plane trackers in the camera view, 3-D viewports, or perspective window. The button will sequence through Or, LR, FB, and Pl to indicate which tracker should be clicked
next. Click this button to skip from LR (left/right) to FB (front/back), or to
skip setting other trackers. After the third tracker, you will have the
opportunity to re-solve the scene to apply the new settings.
Seed & Lock Group
X, Y, Z. Buttons. Multi-choice buttons flip between X, X+, X-; Y, Y+, Y-; and Z,
Z+, Z- respectively. These buttons control which possible coordinate-
system solution is selected when there are several possibilities. Only
significant when the tracker is locked on one or two axes.
X, Y, Z. Spinners. An initial position used as a guess at the start of solving (if
seed checkbox on), and/or a position to which the tracker is locked,
depending on the Lock Type list.
Seed. Mode button. When on, the X/Y/Z location will be used to help estimate
camera/object position at the start of solving, if Points seeding mode is
selected.
+/- Nudge Tool. Spinner. Adjusting this spinner moves the seed/lock points of all
selected trackers closer or further from the camera. Right-clicking the
spinner will snap each tracker's seed onto the line of sight through the
tracker's 2D location on the current frame. (This works for Far trackers
also, which can be handy for setting up tripod coordinate systems.) Use
control-drag for fine tuning, or adjust the Nudge Sensitivity preference.
Peg. Mode button. If on, and the Solver panel's Constrain checkbox is on, the
tracker will be pegged exactly, as selected by the Lock Type. Otherwise,
the solver may modify the constraints to minimize overall error. See
documentation for details and limitations.
Far. Mode button. Turn on if the tracker is far from the camera. Example: If the
camera moved 10 feet during the shot, turn on for any point 10,000 feet or
more away. Far points are on the horizon, and their distance can not be
estimated. This button states your wish; SynthEyes may solve a tracker
as far anyway, if it is determined to have too little perspective.
Lock Type. Drop-down list. Has no effect if Unconstrained. The other settings tell
SynthEyes to force one or more tracker position coordinates to 0 or the
corresponding seed axis value. Use to lock the tracker to the origin, the
floor, a wall, a known measured position, etc. See the section on Lock
Mode Details. If you select several trackers, some with targets, some
without, this list will be empty; right-click the Target Point button to clear
it.
Target Point. Button. Use to set up links between trackers. Select one tracker,
click the Target Point button to select the target tracker by name. Or, ALT-
click (Mac: Command-Left-Click) the target tracker in the camera view or
3-D viewport. If the trackers are on the same camera/object, the Distance
spinner activates to control the desired distance between the trackers.
You can also lock one or more of their coordinates to be identical, forcing
them parallel to the same axis or plane. If the trackers are on different camera/objects, you have created a link: the two trackers will be forced to
the same location during solving. If two trackers track the same feature,
but one tracker is on a DV shot, the other on digital camera stills, use the
link to make them have the same location. Right-click to remove an
existing target tracker.
Dist. Spinner. Sets the desired distance between two trackers on the same
object.
Solved. X, Y, Z numbers. After solving, the final tracker location.
Error. Number. After solving, the root-mean-square error between this tracker's
predicted and actual positions. If the error exceeds 1 pixel, look for
tracking problems using the Tracker Graph window.
[FAR]. This will show up after the error value, if the tracker has been solved as
far.
Set Seed. Button. After solving, sets the computed location up as the seed
location for later solver passes using Points mode.
All. Button. Sets up all solved trackers as seeds for subsequent passes.
Exportable. Checkbox. Uncheck this box to tell savvy export scripts not to export
this tracker. For example, exporting to a compositor, you may want only a
half dozen of a hundred or two automatically-generated trackers to be
exported and create a new layer in the compositor. Non-exportable points
are shown in a different color, somewhat closer to that of the background.
See also the Exportable checkbox on the 3-D Panel, which operates not
only on trackers, but cameras, objects, meshes, and lights as well.

3-D Control Panel

Creation Mesh Type. Drop-down. Selects the type of object created by the
Create Tool. Note that the Earthling is scaled so that 1.0 is the top of his
head, not hand. If you want a 6ft tall earthling, set the scale to 6.
Create Tool. Mode button. Clicking in a 3-D viewport creates the mesh
object listed on the creation mesh type list, such as a pyramid or Earthling.
Most mesh objects require two drag sequences to set the position, size,
and scale. Note that mesh objects are different than objects created with
the Shot Menu's Add Moving Object button. Moving objects can have
trackers associated with them, but are themselves null objects. Mesh
objects have a mesh, but no trackers. Often you will create a moving
object and its trackers, then add a mesh object(s) to it after solving to
check the track.
Delete. Button. Deletes the selected object.
Lock Selection. Mode button. Locks the selection in the 3-D viewport to
prevent inadvertent reselection when moving objects. This enables additional functionality to rotate or scale an object about any arbitrary point, inside or outside the object. Lock Selection turns off automatically
when you exit the 3D panel, except if you had turned it on prior to opening
the 3D Panel, using the right-click menu of the 3D or perspective
viewports.
World/Object. Mode button. Switches between the usual world coordinate
system, and the object coordinate system where everything else is
displayed relative to the current object or camera, as selected by the shot
menu. Lets you add a mesh aligned to an object easily.
Far. Mode button. Mesh is very far from the camera; to pretend that, its
translation is parented to the camera similar to Far trackers. Permits
reasonable-sized meshes to be used for distant backdrops more easily.
May not be transmitted to all export targets.
#. Mode button. The XYZ spinners switch to control the number of segments
along width, depth, and height of the selected primitive object(s). Used to
create higher or lower-resolution versions of builtin primitive meshes for
editing and texture extraction.
Move Tool. Mode button. Dragging an object in the 3-D viewport moves it.
Rotate Tool. Mode button. Dragging an object in the 3-D viewport rotates it
about the axis coming up out of the screen.
Scale Tool. Mode button. Dragging an object in the 3-D viewport scales it
uniformly. Use the spinners to change each axis individually.
Make/Remove Key. Button. Adds or removes a key at the current
frame for the currently-selected object.
Show/Hide. Button. Show or hide the selected mesh object.
Object color. Color Swatch. Object color, click to change. Note that lights,
meshes, and trackers have both a static color and possibly an animated
illumination color (if the Set Illumination... script has been run). The swatch
shows the illuminated color, if present, and the static color if not. To
access the static color when an animated color is present, see the
swatches in the Hierarchy View or Graph Editor.
X/Y/Z Values. Spinners. Display X, Y, or Z position, rotation or scale values,
depending on the currently-selected tool. Note that when a
camera/moving object is selected, the pan/tilt/roll rotation angles can be
animated outside the usual 360 degree range for rotation angles, and
those angles will be preserved as-is. With a camera or moving object
selected, right-clicking clears 1 key, shift-right truncates, control-right
clears totally.
Size/Distance. Spinner. This is an overall size spinner, use it when the Scale
Tool is selected to change all three axis scales in lockstep. Fancy feature:
if you hold down the ALT/command key while changing this spinner, the
mesh's (only) position will also be scaled along the line of sight from the
camera, so that the overall visual size and position of the mesh in the
image doesn't change. This is helpful after a Pinning operation, for example. Additional nifty feature: When the Move Tool is active, this
spinner shows the distance from camera to the selected entity (not just
meshes). You can change the spinner to move the object closer or further
from the camera (it does not change size with this tool!).
Whole. Button. When moving a solved object, normally it moves only for the
current frame, allowing you to tweak particular frames. If you turn on
Whole, moving the object moves the entire path, so you can adjust your
coordinate system without using locks. For convenience, turning on Whole
selects the active camera or object and turns on Lock Selection. Turning
Whole off turns off Lock Selection. Whole turns off automatically when you
exit the 3D panel, except if you had turned it on prior to opening the 3D
Panel, using the right-click menu of the 3D or perspective viewports. If you
use Whole to align your coordinate system, you should set up some locks
so that the coordinate system will be re-established if you solve again
(done automatically by Auto-place). Hint: Whole mode has some rules to
decide whether or not to affect meshes. Turn "Whole affects meshes" on or off via the 3-D viewport and perspective windows' right-click menus.
There's also a preference in the Meshes area to turn Whole affects
meshes on or off at startup.
Blast. Button. Writes the entire solved history onto the object's seed path, so it
can be used for path seeding mode.
Reset. Button. Clears the object's solved path, exposing the seed path.
Cast Shadows. (Mesh) Object should cast a shadow in the perspective window.
Catch Shadows. (Mesh) Object should catch shadows in the perspective
window.
Back Faces. Draw both sides of faces, not only the front.
Invert Normals. Make the mesh normals point the other way from their imported
values.
Opacity. Spinner 0-1. Controls the opacity of the mesh in the perspective view
and the OpenGL version of the camera view (see the View menu and
preferences to enable OpenGL camera view). Note that opacity rendering
is an inexact surface-based approximation and, to allow interactive
performance, is not equivalent to changing the object into a
semitransparent 3-D aero-gel.
Exportable. Checkbox. When set, the selected item (mesh, tracker, camera,
object, light) is marked to be exported. When cleared, it is not. (It is up to
the exporter to decide whether or not to export any individual item.)
Reload. Reloads the selected mesh, if any. If the original file is no longer
accessible, allows a new location to be selected.
(texture filename). Non-editable Text field. Shows the name of the texture
displayed on, or computed for, this mesh.
Create Texture. Checkbox. When on, a texture will be computed for this mesh
on demand (see the texture panel).
Texture Panel. Button. Brings up the texture control panel (also available from
the Window menu).

Lighting Control Panel

New Light. Button. Click to create a new light in the scene.


Delete Light. Button. Delete the light in the selected-light drop-down list.
Selected Light. Drop-down list. Shows the selected light, and lets you change its
name, or select a different one.
Color. Swatch. Sets the color emitted by the light into the scene (used by the
perspective view, and for export). There is a separate static value, plus an
optional animated value typically created by the Set Illumination from
Trackers script. The animated value is shown here if present; access the
static value from the Hierarchy View. Note that the perspective view's
ambient illumination can be set from Edit/Scene Settings (initialized from a
color preference).
Far-away light. When checked, light is a distant, directional, light. When off, light
is a nearby spotlight or omnidirectional (point) light.
Compute over frames: This, All, Lock. In the (normal) This mode, the light's position is computed for each frame independently. In the All or Lock mode, the light's position is averaged over all the frames in the sequence.
In the All mode, this calculation is performed repeatedly for live updates.
In the Lock mode, the calculation occurs only when clicking the Lock
button.
New Ray. Button. Creates a new ray on the selected light.
Delete Ray. Button. Delete the selected ray.
Previous Ray (<). Button. Switch to the previous lower-numbered ray on the
selected light.
Ray Number. Text field. Shows something like 1/3 to indicate ray 1 of 3 for this
light.

Next Ray (>). Button. Switch to the next higher ray on the selected light.
Selected Ray
Source. Mode button. When lit up, click a tracker in the camera view or any 3-D
view to mark it as one point on the ray.
Target. Mode button. When lit up, click a tracker in the camera view or any 3-D
view to mark it as one point on the ray. If the source and target trackers
are the same, it is a reflected-highlight tracking setup, and the Target
button will show (highlight). For highlight tracking to be functional, there
must be a mesh object for the tracker to reflect from.
Distance. Spinner. When only a single ray to a nearby light is available, use this
spinner to adjust the distance to the light. Leave at zero the rest of the
time.

Hierarchy Panel
The Hierarchy Panel is a pseudo-panel intended for use as a secondary
panel on other panels, most notably the GeoH Tracking panel. It contains a
single fixed-size Hierarchy View, see the documentation for the view for more
details.

Flex/Curve Control Panel

Note: The flex room is not part of the normal default set. To use the
flex panel, use the room bar's Add Room to create a Flex room that
uses the Flex panel.

The flex/curve control panel handles both object types, which are used to
determine the 3-D position/shape of a curve in 3-D, even if it has no discernable
point features. If you select a curve, the parameters of its parent flex (if any) will
be shown in the flex section of the dialog.
New Flex. Creates and selects a new flex. Left-click successively in a 3-D view
or the perspective view to lay down a series of control points. Right-click to
end.
Delete Flex. Deletes the selected flex (even if it was a curve that was initially
clicked).
Flex Name List. Lists all the flexes in the scene, allowing you to select a flex, or
change its name.
Moving Object List. If the flex is parented to a moving object, it is shown here.
Normally, (world) will be listed.
Show this 3-D flex. Controls whether the flex is seen in the viewports or not.

Clear. Clears any existing 3-D solution for the flex, so that the flex's initial seed
control points may be seen and changed.
Solve. Solves for the 3-D position and shape of the flex. The control points
disappear, and the solved shape becomes visible.
All. Causes all the flexes to be solved simultaneously.
Pixel error. Root-mean-square (~average) error in the solved flex, in horizontal
pixels.
Count. The number of points that will be solved for along the length of the flex.
Stiffness. Controls the relative importance of keeping the flex stiff and straight
versus reproducing each detail in the curves.
Stretch. Relative importance of (not) being stretchy.
Endiness. (yes, made this up) Relative importance of exactly meeting the end-
point specification.
New Curve. Begins creating a new curve; click on a series of points in the
camera view.
Delete. Deletes the curve.
Curve Name List. Shows the currently-selected curve's name among a list of all
the curves attached to the current flex, or all the unconnected curves if this
one is not connected.
Parent Flex List. Shows the parent flex of this curve, among all of the flexes.
Show. Controls whether or not the curve is shown in the viewport.
Enable. Animated checkbox indicating whether the curve should be enabled or
not on the current frame. For example, turn it off after the curve goes off-
screen, or if the curve is occluded by something that prevents its correct
position from being determined.
Key all. When on, changing one control point will add a key on all of them.
Rough. Select several trackers, turn this button on, then click a curve to use the
trackers to roughly position the curve throughout the length of the shot.
Truncate. Kills all the keys off the tracker from the current frame to the end of the
shot.
Tune. Snaps the curve exactly onto the edge underneath it, on the current frame.
All. Brings up the Curve Tracking Control dialog, which allows this curve, or all
the curves, to be tracked throughout an entire range of frames.

Additional Dialogs Reference
This section contains descriptions of additional dialogs used in SynthEyes.
Generally they can be launched from the main menu. Some of the dialogs
contain very powerful multi-threaded processing engines to solve particular tasks
for you.

Add Many Trackers Dialog

This dialog, launched from the Trackers menu, allows you to add many
more trackersafter you have successfully auto-tracked and solved the shot.
Use it to improve accuracy in a problematic area of the shot, or to produce
additional trackers to use as vertices for a tracker mesh. This dialog can work
directly on (already-solved) 360 VR shots.
Note: it may take several seconds between launching the dialog and its
appearance. During this time your processors will be very busy.
Tracker Requirements
Min #Frames. Spinner. The minimum number of valid frames for any tracker
added.
Min Amplitude. Spinner. The minimum average amplitude of the blip path,
between zero and one. A larger value will require a more visible tracker.
Max Avg Err. Spinner. The maximum allowable average error, in horizontal
pixels, of the prospective tracker. The error is measured in 2-D between
the 2-D tracker position and the re-projected 3-D position of the prospective tracker.
Max Peak Err. Spinner. The maximum allowable error, in horizontal pixels, on
any single frame. Whereas the average error above measures the overall noisiness, the peak error reflects whether or not there are any major
glitches in the path.
Only within last Lasso. Checkbox. When on, trackers will only be created within
the region swept out by the last lasso operation in the main camera view,
allowing control over positioning.
Spots. Only spot trackers may be added.
Corners. Only corner trackers may be added. Corners must have been enabled
on the Summary panel when the autotrack was performed, in order for
there to be eligible corner blips to be turned into trackers.
Either Kind. Both spot and corner trackers may be added.
Frame-Range Controls
Start Region. Spinner. The first frame of a region of frames in which you wish to
add additional trackers. When dragging the spinner, the main timeline will
follow along.
End Region. Spinner. The final frame of the region of interest. When dragging
the spinner, the main timeline will follow along.
Min Overlap. The minimum required number of frames that a prospective tracker
must be active within the region of interest. With a 30-frame region of
interest, you might require 25 valid frames, for example.
Number of Trackers
Available. Text display field. Shows the number of prospective trackers
satisfying the current requirements.
Desired. Spinner. The maximum number of trackers to be added: the actual
number added will be the least of the Available and Desired values.
New Tracker Properties
Regular, not ZWT. Checkbox. When off, ZWTs are created, so further solves will
not be bogged down. When on, regular (auto) trackers will be created.
Selected. Checkbox. When checked, the newly-added trackers will be selected,
facilitating easy further modification.
Set Color. Checkbox. When checked, the new trackers will be assigned the color
specified by the swatch. When off, they will have the standard default
color.
Color. Swatch. Color assigned to trackers when Set Color is on.
Others
Max Lostness. Spinner. Prospective trackers are compared to the other trackers to make sure they are not lost in space. The spinner controls this test: the threshold is this specified multiple of the object's world size. For example, with a lostness of 3 and a world size of 100, trackers more than 300 units from the center of gravity of the others will be dropped. (See the sketch at the end of this dialog's description.)
Re-fetch possibles. Button. Push this after changes in Max Lostness.

Add. Button. Adds the trackers into the scene and closes the dialog. Will take a
little while to complete, depending on the number of trackers and length of
the shot.
Cancel. Button. Close the dialog without adding any trackers.
Defaults. Button. Changes all the controls to the standard default values.
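
The lostness test described under Max Lostness amounts to a simple distance check against the center of gravity of the existing trackers. The sketch below restates the worked example from that item; the names are invented and this is not SynthEyes code.

    # Illustrative sketch only -- not SynthEyes code or a SynthEyes API.
    def is_lost(candidate_xyz, other_positions, max_lostness, world_size):
        # Center of gravity of the other trackers' solved positions.
        n = len(other_positions)
        cx = sum(p[0] for p in other_positions) / n
        cy = sum(p[1] for p in other_positions) / n
        cz = sum(p[2] for p in other_positions) / n
        dx = candidate_xyz[0] - cx
        dy = candidate_xyz[1] - cy
        dz = candidate_xyz[2] - cz
        distance = (dx * dx + dy * dy + dz * dz) ** 0.5
        # e.g. lostness 3 and world size 100: dropped beyond 300 units.
        return distance > max_lostness * world_size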

Advanced Features

This floating panel can be launched from the Feature control panel,
affecting the details of how blips are placed and accumulated to form trackers.
Feature Size (small). Spinner. Size in pixels for smaller blips.
Feature Size (big). Spinner. Size in pixels for larger blips, which are used for
alignment as well as tracking.
Density/1K. Spinner for each of big and small. Gives a suggested blip density in
terms of blips per thousand pixels.

Minimum Track Length. Spinner. The path of a given blip must be at least this
many frames to have a chance to become a tracker.
Minimum Trackers/Frame. Spinner. SynthEyes will try to promote blips until
there are at least this many trackers on each frame, including pre-existing
guide trackers.
Maximum Tracker Count. Spinner. Only this many trackers will be produced for
the object, unless even more are required to meet the minimum
trackers/frame.
Camera View Type. Drop-down list. Shows black and white filtered versions of
the image, so the effect of the feature sizes can be assessed. Can also
show the image's alpha channel, and the blue/green-screen check image,
even if the screen control dialog is not displayed. Edges and corners
displays show intermediate and final results from the corner detector.
Auto Re-blip. Checkbox. When checked, new blips will be calculated whenever
any of the controls on the advanced features panel are changed. Keep off
for large images/slow computers.
(See tooltips for Edge/corner controls at present)

Align Via Links Dialog


Run this dialog (from the perspective window's right-click menu's
Linking/Align to Mesh menu item) to align a mesh to a number of trackers,
including translation, rotation, and scaling, or even to align the solve to the mesh.

Align Mesh to Tracker Positions. The mesh will move to meet the trackers.
Align World to Mesh Position. The entire solve, camera path and trackers, will
move to meet the mesh, which will not move.
Allow Uniform scaling, all axes the same. The mesh will be stretched the
same along each axis to match the trackers as best possible.

Allow Non-uniform scaling, each axis separate. The mesh can be stretched
separately along each axis to match, most usually for boxes where the
exact dimensions are not known.
Store resulting locations as tracker constraints. After alignment, the locations
of the vertices will be burned into the trackers as Locks, so that the solve
will reproduce this match again later, particularly for Align World to Mesh
Position.

Clean Up Trackers Dialog


Run the clean up tracker dialog after solving, to identify bad trackers and
frames needing repair. This helps remove bumps in tracks and improves overall
accuracy. Start it from the Track menu, or as part of an AUTO track & solve from
the Summary panel.

The camera must already be solved to use this panel, as it operates by comparing the 2-D and 3-D locations of the trackers. You can open the panel on
an unsolved camera in order to examine and set the parameters for AUTO.
The panel is organized systematically, with a line for trackers with different
categories of problems. A tracker can be counted in several different categories.
There are Select toggle buttons for each category; each Select button selects
and flashes trackers in that category in the main viewports. Click the button a
second time to turn it off and de-select the trackers.
After cleaning up the trackers (Fix), you should re-solve or refine the
solution.

New scenes are created with tracker cleanup parameters set from your
preferences. You can set those preferences using the Set Prefs button. The
initial preferences are set from the factory defaults, which you can reload using
the Defaults button (use Set Prefs to make them the new preferences if you like).
When you close the dialog using the Fix or Close button, the current
settings are saved for this scene, and will reappear if you reopen the scene. If
you close the dialog with the red X close button on the top title bar (red circle with
x on Mac), then the settings are not saved.
All trackers. Radio button. All trackers are affected.
Auto only. Radio button. Only automatically-generated trackers are affected;
supervised trackers are untouched. (Default)
Selected. Radio button. Only the trackers already selected when the dialog is
opened are affected.
Defaults. Button. When clicked, resets all the parameter settings to their factory-
default values. Does not change your preferences for these values.
Get Prefs. Button. All the controls are reset to the preference values that you
have set previously, or to the factory values if no preferences are set.
Set Prefs. Button. The current control settings are saved for future use as
preferences, for when this panel is first opened in a new scene.
(Delete) Bad Frames. Checkbox. When checked, bad frames are deleted when
the Fix button is clicked. Note that the number of trackers in the category
is shown in parentheses.
Show. Toggle button. Bad frames are shown in the user interface, by temporarily
invalidating them. The graph editor should be open in Squish mode to see
them.
Threshold. Spinner. This is the threshold for a frame to be bad, as determined
by comparing its 2-D location on a frame to its predicted 3-D location. The
value is either a percentage of the total number of frames (ie the worst
2%), or a value in horizontal pixels, as controlled by the radio buttons
below.
%. Radio button. The bad-frame threshold is measured in percentage; the worst
N% of the frames are considered to be bad.
Hpix. Radio button. The bad-frame threshold is a horizontal-pixel value.
Disable. Radio button. When fixed, bad frames are disabled by adjusting the
tracker's enable track.
Clear. Radio button. Bad frames are fixed by clearing the tracking results from
that frame; the tracker is still enabled and can be easily re-tracked on that
frame.
(Delete) Far-ish Trackers. Checkbox. When on, trackers that are too far-ish
(have too little perspective) are deleted.
Threshold. Spinner. Controls how much or little perspective is required for a
tracker to be considered far-ish. Measured in horizontal pixels.
Delete. Radio button. Far-ish trackers will be deleted when fixed.
Make Far. Radio button. Far-ish trackers will be changed to be solved as Far
trackers (direction only, no distance).

(Delete) Short-Lived Trackers. Checkbox. Short-lived trackers will be deleted.
Threshold. Spinner. Number of frames a tracker must be valid to avoid being too short-lived.
(Delete) High-error Trackers. Checkbox. Trackers with too many bad frames
will be deleted.
Threshold. Spinner. A tracker is considered high-error if the percentage of its
frames that are bad (as defined above by the bad-frame threshold) is
higher than this first percentage threshold, or if its average rms error in
hpix is more than the second threshold below (next to Unsolved) . For
example, if more than 30% of a trackers frames are bad, or its average
error is more than 2 hpix, it is a high-error tracker. (See the sketch at the end of this section.)
Unsolved/Behind. Checkbox. Some trackers may not have been solved, or may
have been solved so that they are behind the camera. If checked, these
trackers will be deleted.
Threshold. Spinner. This is the average hpix error threshold for a tracker to be
high error. Though it is next to the Unsolved category, it is part of the
definition of a high-error tracker.
Clear All Blips. Checkbox. When checked, Fix will clear all the blips. This is a
way to remember to do this and cut the final .sni file size.
Unlock UI. Button. A tricky button that changes this dialog from modal (meaning
the rest of the SynthEyes user interface is locked up) to modeless, so that
you can go fix or rearrange something without having to close and reopen
the panel. Note: keyboard accelerators do not work when the user
interface is unlocked.
Frame. Spinner. The current frame number in SynthEyes, use to scrub through
the shot without closing the dialog or even having to unlock the user
interface.
Fix. Button. Applies the selected fixes, then closes the panel.
Close. Button. Closes the panel, without applying the fixes. Parameter settings
will be saved for next time. The clean-up panel can be a quick way to
examine the trackers, even if you do not use it to fix anything itself.
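
Because the bad-frame threshold also feeds into the definition of a high-error tracker, the two tests can be confusing. The sketch below restates them in ordinary Python, using the hpix form of the bad-frame threshold; the names are invented and this is not SynthEyes code.

    # Illustrative sketch only -- not SynthEyes code or a SynthEyes API.
    def is_bad_frame(error_hpix, bad_frame_threshold_hpix):
        # A frame is bad when the distance between the tracker's 2-D location
        # and its predicted 3-D location exceeds the bad-frame threshold.
        return error_hpix > bad_frame_threshold_hpix

    def is_high_error(frame_errors_hpix, bad_frame_threshold_hpix,
                      bad_percent_threshold, avg_error_threshold_hpix):
        # High-error when too large a percentage of frames are bad, OR the
        # average (RMS) error exceeds the second threshold. Example from the
        # text: more than 30% bad frames, or average error above 2 hpix.
        bad = sum(1 for e in frame_errors_hpix
                  if is_bad_frame(e, bad_frame_threshold_hpix))
        percent_bad = 100.0 * bad / len(frame_errors_hpix)
        rms = (sum(e * e for e in frame_errors_hpix)
               / len(frame_errors_hpix)) ** 0.5
        return (percent_bad > bad_percent_threshold
                or rms > avg_error_threshold_hpix)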

Coalesce Nearby Trackers Dialog


Trackers, especially automatic trackers, can wind up tracking the same
feature in different parts of the shot. This panel finds them and coalesces them
together into a single overall tracker.
Coalesce. Button. Runs the algorithm and coalesces trackers, closing the panel.
Cancel. Button. Removes any tracker selection done by Examine, then closes
the dialog without saving the current parameter settings.
Close. Button on title bar. The close button on the title bar will close the dialog,
saving the tracker selection and parameter settings, making it easy to
examine the trackers and then redo and complete the coalesce.
Examine. Button. Examines the scene with the current parameter settings to
determine which trackers will be coalesced and how many trackers will be
eliminated. The trackers to be coalesced will be selected in the viewports.
# to be eliminated. Display area with text. Shows how many trackers will be
eliminated by the current settings. Example: SynthEyes found two pairs of
trackers to be coalesced. Four trackers are involved, two will be
eliminated, two will be saved (and enlarged). The display will show 2
trackers to be eliminated.
Defaults. Button. Restores all controls to their factory default settings.
Distance (hpix). Spinner. Sets the maximum consistent distance between two
trackers to be coalesced. Measured in horizontal pixels.
Sharpness. Spinner. Sets the sensitivity within the allowable distance. If zero,
trackers at the maximum distance are as likely to be coalesced as trackers
at the same location. If one, trackers at the maximum distance are
considered unlikely.
Consistency. Spinner. The fraction of the frames two trackers must be nearby to
be merged (see the sketch after this control list).
Only selected trackers. Checkbox. When checked, only pre-selected trackers
might be coalesced. Normally, all trackers on the current camera/object
are eligible to be coalesced.
Include supervised non-ZWT trackers. Checkbox. When off, supervised
(golden) trackers that are not zero-weighted-trackers (ZWTs) are not
eligible for coalescing, so that you do not inadvertently affect hand-tuned
trackers. When the checkbox is on, all trackers, including these, are
eligible.
Only with non-overlapping frame ranges. Checkbox. When checked, trackers
that are valid at the same time will not be coalesced, to avoid coalescing
closely-spaced but different trackers. When off, there is no such
restriction.
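
As a rough illustration of how the Distance and Consistency settings interact,
the hedged Python sketch below treats two trackers as coalesce candidates if,
over the frames being compared, their 2-D positions stay within the maximum
distance on at least the Consistency fraction of frames. This is only one
plausible reading of the controls, not the actual SynthEyes algorithm, which
also applies the Sharpness weighting and the eligibility rules above.

    # Hedged sketch only; not the actual coalescing algorithm.
    import math

    def coalesce_candidates(track_a, track_b, max_dist_hpix=3.0, consistency=0.8):
        """track_a/track_b: dicts mapping frame -> (x, y) in horizontal pixels."""
        common = set(track_a) & set(track_b)   # frames where both are valid
        if not common:
            return False
        near = sum(1 for f in common
                   if math.hypot(track_a[f][0] - track_b[f][0],
                                 track_a[f][1] - track_b[f][1]) <= max_dist_hpix)
        # Consistency: fraction of compared frames on which the pair is nearby.
        return near / len(common) >= consistency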


Curve Tracking Control

Launched by the All button on the Flex/Curve control panel.


Filter Size. Edge detection filter size, in pixels. Use larger values to accurately
locate wide edges, smaller values for thinner edges.
Search Width. Pixels. Size of search region for the edge. Larger values mean a
roughed-in location can be further from the actual location, but might also
mean that a different edge is detected instead.
Adjacency Sharpness. 0..1. This is the portion of the search region in which the
edge detector is most sensitive. With a smaller value, edges nearest the
roughed-in location will be favored.
Adjacency Rejection. 0..1. The worst weight an edge far from the roughed-in
location can receive.
Do all curves. When checked all curves will be tuned, not just the selected one.
Animation range only. When checked, tuning will occur over the animation
playback range, rather than the entire playback range.
Continuous Update. Normally, as a range of frames is tuned, the tuning result
from any frame does not affect where any other frame is searched for;
the searched-for location is based solely on the earlier curve animation
that was roughed in. With this box checked, the tuning result for each
frame immediately updates the curve control points, and the next frame
will be looked for based on the prior search result. This can allow you to
tune a curve without previously roughing it in.
Do keyed or not. All frames will be keyed, whether or not they have a key
already.
Do only keyed. Add keys only to frames that already have keys, typically to
tune up a few roughed-in keys.


Do only unkeyed. Only frames without keys will be tuned. Use to tune without
adversely affecting frames that have already been carefully manually
keyed.

Edit Room Tab Dialog


Used to edit the effect of clicking on a Room tab.

Room Name. The name of the room displayed on the row of tabs.
Panel. Select the panel that should be selected when entering the room. This
selector is grayed out for pre-defined rooms. For user-added rooms, you
can also select Do not change or No panel.
Secondary Panel. Select the second panel that should be selected when
entering the room (if space permits), or none. Used to stack a lens panel
under the solve panel, for example.
Layout. Selects which layout should be selected upon entering the room, or Do
not change.
Show Timebar. Whether or not the time bar should be displayed for this room.
Launch Dialog. Select one of these dialogs to be opened when you enter the
room.
Tooltip. Tooltip displayed for the room when the mouse hovers over its tab.


Finalize Tracker Dialog

With one or more trackers selected, launch this panel with the Finalize
Button on the Tracker control panel, then adjust it to automatically close gaps in
a tracker (where an actor briefly obscures a tracker, say), and to filter (smooth)
the trajectory of the selected trackers.
The Finalize dialog affects only trackers which are not Locked (ie their
Lock button is unlocked). When the dialog is closed via OK, affected trackers are
Locked. If you need to later change a Finalized tracker, you should unlock it, then
rerun the tracker from start to finish (this is generally fairly quick, since you've
already got all the necessary keys in place).
Filter Frames. The number of frames that are considered to produce the filtered
version of a particular frame.
Filter Strength. A zero to one value controlling how strongly the filter is applied.
At the default value of one, the filter is applied fully.
Max Gap Frames. The number of missing frames (gap) that can be filled in by
the gap-filling process.
Gap Window. The number of frames before the gap, and after the gap, used to
fill frames inside the gap.
Begin. The first frame to which filtering is applied.
End. The last frame to which filtering is applied.
Entire Shot. Causes the current frame range to be set into the Begin and End
spinners.
Playback Range. Causes the current temporary playback range to be set into
the Begin and End spinners.
Live Update. When checked, filtering and gap filling are applied immediately,
allowing their effect to be assessed if the tracker graph viewport is open.


Fine-Tuning Panel
Launched from the Track menu.

Fine-tune during auto-track. Checkbox. If checked, the fine-tuning process will
run automatically after auto-tracking.
Key Spacing. Spinner. Requests that there be a key every this many frames
after fine-tuning.
Tracker Size. Spinner. The size of the trackers during and after fine tuning. The
tracker size and search values are the same as on the Tracker panel.
Tracker Aspect. Spinner. The aspect ratio of the trackers during and after fine
tuning.
U Search Size. Spinner. U (horizontal) search area size. Note that because the
fine-tuning starts from the previously-tracked location, the search sizes
can be very small, equivalent to a few pixels.
V Search Size. Spinner. V (vertical) search area size.
Reset. Button. Restore the current settings of the panel to factory values (not the
preferences). Does not change the preferences; to reset the preferences
to the factory values click Reset then Set Prefs.
Get Prefs. Button. Set the current settings to the values stored as preferences.
Set Prefs. Button. Save the current settings as the new preferences.

HiRez. Drop-down. Sets the high-resolution resampling mode used for
supervised tracking (this is the same setting as displayed and controlled
on the Track menu).
All auto-trackers. Radio button. The Run button will work on all auto-trackers.
Selected trackers. Radio button. The Run button will work on only the selected
trackers, typically for testing the parameters.
Make Golden. Checkbox. When on, fine-tuned trackers become golden as if
they had been supervised-tracked initially. When off, they are left as
automatic trackers.
Run. Button. Causes all the trackers, or the selected trackers, to be fine-tuned
immediately, according to the selected parameters. The other way for fine-
tuning to occur is during automatic tracking, if the Fine-tune during auto-
track checkbox is turned on. If run automatically, the top set of
parameters (in the Overall Parameters group) apply during the automatic
fine-tune cycle.

Green-Screen Control

Launched from the Summary control panel; causes auto-tracking to look
only within the keyed area for trackers. The key can also be written as an alpha
channel or RGB image by the image preprocessor.
Enable Green Screen Mode. Turns on or off the green screen mode. Turns on
automatically when the dialog is first launched.
Reset to Defaults. Resets the dialog to the initial default values.


Average Key Color. Shows an average value for the key color being looked for.
When the allowable brightness is fairly low, this color may appear darker
than the actual typical key color, for example.
Auto. Sets the hue of the key color automatically by analyzing the current
camera image.
Brightness. The minimum brightness (0..1) of the key color.
Chrominance. The minimum chrominance (0..1) of the key color.
Hue. The center hue of the key color, -180 to +180 degrees.
Hue Tolerance. The tolerance on the matchable hue, in degrees. With a hue of -
135 and a tolerance of 10, hues from -145 to -125 will be matched, for
example.
Radius. Radius, in pixels, around a potential feature that will be analyzed to see
if it is within the keyed region (screen).
Coverage. Within the specified radius around the potential feature, this many
percent of the pixels must match the keyed color for the feature to be
accepted (see the sketch after this list).
Scrub Frame. This frame value lets you quickly scrub through the shot to verify
the key settings over the entire shot.
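
The acceptance test these settings describe can be sketched as follows. This
is an illustrative Python fragment with hypothetical names, not SynthEyes code;
in particular, a simple HSV-style color model stands in for SynthEyes' internal
brightness/chrominance computation, which may differ.

    # Illustrative only; a simple HSV-style color model is assumed.
    import colorsys

    def pixel_matches_key(r, g, b, hue_center=-135.0, hue_tol=10.0,
                          min_brightness=0.2, min_chroma=0.2):
        h, s, v = colorsys.rgb_to_hsv(r, g, b)               # r, g, b in 0..1
        hue_deg = h * 360.0 - (360.0 if h > 0.5 else 0.0)    # map to -180..+180
        delta = (hue_deg - hue_center + 180.0) % 360.0 - 180.0  # wrap-around
        return abs(delta) <= hue_tol and v >= min_brightness and s >= min_chroma

    def feature_accepted(pixels_in_radius, coverage_percent=80.0):
        """pixels_in_radius: list of (r, g, b) tuples inside the Radius disc."""
        matched = sum(pixel_matches_key(*p) for p in pixels_in_radius)
        return 100.0 * matched / len(pixels_in_radius) >= coverage_percent

With a hue of -135 and a tolerance of 10, the test above matches hues from
-145 to -125, exactly as in the Hue Tolerance example.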

Hard and Soft Lock Controls


The hard and soft lock controls allow you to specify additional objectives
for the results of the solve, by saying what the desired values are for various
axes.
This panel is launched by the more button on the Solver control panel, or
the Window/Solver Locking menu item. It displays values for the active camera or
object (on the toolbar), and is unaffected by what camera or object is selected in
the viewports. All of the enables, weights, and values may be animated.


Master Controls
All. Button. Turn on or off all of the position and rotation locks. Shift-right-click to
truncate keys past the current frame. Control-right-click to clear all keys
leaving the object un-locked.
Master weight. Spinner. Set keys on all position and rotation soft-lock weights.
Shift-right-click to truncate keys past the current frame. Control-right-click
to clear all keys (any locked frames will be hard locks).
Back Key. Button. Skip backwards to the previous frame with a lock enable
or weight key (but not seed path key).
Forward Key. Button. Skip forward to the next frame with a lock enable or
weight key (but not seed path key).
Show. Button. When on, the seed path is shown in the main viewports, not the
solved path. Also, the seed field of view/focal length is shown on the Lens
Control panel, instead of the solved value.


Translation Weights
Pos. Button. Turn on or off all position lock enables.
Position Weight. Spinner. Set all position weights.
L/R. Button. Left/right lock enable.
L/R Weight. Spinner. Left/right weight.
F/B. Button. Front/back lock enable.
F/B Weight. Spinner. Front/back weight.
U/D. Button. Up/down lock enable.
U/D Weight. Spinner. Up/down weight.
X Value. Spinner. X value of the seed path at the current frame (regardless of
the Show button)
Y Value. Spinner. Y value of the seed path.
Z Value. Spinner. Z value of the seed path.
Get 1f. Button. Create a position and rotation key on the seed path at the current
frame, based on the solved path.
Get PB. Button. Create position and rotation keys for all frames in the playback
range of the timebar, by copying from the solved path to the seed path.
Get. Button. Copy the entire solved path to the seed path, for all frames
(equivalent to the Blast button on the 3-D panel, except that FOV is never
copied).
Rotation Weights
Rot. Button. Turn on or off all rotation lock enables.
Rot Weight. Spinner. Set all rotation weights.
Pan. Button. Pan angle lock enable.
Pan Weight. Spinner. Pan axis soft-lock weight.
Tilt. Button. Tilt axis lock enable.
Tilt Weight. Spinner. Tilt weight.
Roll. Button. Roll axis lock enable.
Roll Weight. Spinner. Roll soft-lock weight.
Pan Value. Spinner. Pan axis seed path value.
Tilt Value. Spinner. Tilt axis seed path value.
Roll Value. Spinner. Roll axis seed path value.
Overall Distance Weights
Distance. Button. Enable/disable the overall distance lock on this frame
Distance Weight. Spinner. Overall distance constraint weight - soft lock only.
Distance Value. Spinner. Value to which the distance between the camera and
origin, or object and camera, is locked when the constraint is enabled.
Get 1f. Button. Create an overall distance key on the distance value track at the
current frame from the current value.
Get PB. Button. Create overall distance keys on the distance value track from
the current values, for the frames in the playback range.
Get. Button. Create overall distance keys from the current values, for all frames.


Field of View/Focal Length Weights


FOV/FL. Button. Field-of-view/focal length lock enable.
FOV/FL Weight. Spinner. FOV/FL soft-lock weight.
FOV Value. Spinner. Field of view seed-path value.
FL Value. Spinner. Focal length seed-path value.
Get 1f. Button. Create a field of view key on the seed path at the current frame
from the solved value.
Get PB. Button. Create field of view keys on the seed path from the solved
values, for the frames in the playback range.
Get. Button. Create field of view keys from the solved values, for all frames.

Hold Tracker Preparation Tool


Launched from the Window/Hold Region Tracker Prep menu item to
configure trackers when hold regions are present: regions of tripod-type motion in
a shot.

Apply. Button. The preparation operation is performed.


Undo. Button. Undo the last operation (of any kind)
Preparation Mode
Truncate. Button. Affected trackers are shut down in the interior of any hold
region.


Make Far. Button. Affected trackers are converted to Far, and shut down outside
the hold region, plus the specified overlap.
Clone to Far. Button. Default. Affected trackers are cloned, and the clone
converted to Far with a reduced range as in Make Far.
Convert Some. Button. A specified percentage of trackers is randomly selected
and converted to Far.
Percentage. Spinner. The percentage of trackers converted in Convert Some
mode.
Affected Trackers
Selected. Button. Only selected trackers are affected by the operation.
All. Button. All trackers are affected. (In both options, only automatic, non-far,
trackers are considered).
Transitions Considered
Nearest Current Frame. Button. Only the trackers crossing the transition
nearest to the current frame (within 10 frames) are considered.
All Transitions. Button. Operation is applied across all hold regions.
Combine Cloned Fars. Checkbox. When off (default), a separate cloned far
tracker is created for each hold region. When on, only a single Far tracker
is produced, combining far trackers from all hold regions.
Minimum Length. Spinner. Prevents the creation of tracker fragments smaller
than this threshold. Default=6.
Far Overlap. Spinner. The range of created Far trackers is allowed to extend out
of the hold region, into the adjacent translating-camera region, by this
amount to improve continuity.

Image Preparation Dialog


The image preparation dialog allows the incoming images from disk to be
modified before they are cached in RAM for replay and tracking. The dialog is
launched either from the open-shot dialog, or from the Shot/Image Preparation
menu item.


Like the main SynthEyes user interface, the image preparation dialog has
several tabs, each bringing up a different set of controls. The Stabilize tab is
active above. With the left button pushed, you can review all the tabs quickly.
For more information on this panel, see the Image Preparation and
Stabilization sections.
Warning: you should be sure to set up the cropping and distortion/scale
values before beginning tracking or creating rotosplines. The splines and trackers
do not automatically update to adapt to these changes in the underlying image
structure, which can be complex. Use the Apply/Remove Lens Distortion script
on the main Script menu to adapt to late changes in the distortion value.
Shared Controls
OK. Button. Closes the image preprocessing dialog and flushes no-longer-valid
frames from the RAM buffer to make way for the new version of the shot
images. You can use SynthEyes's main undo button to undo all the effects
of the Image Preprocessing dialog as a unit, then redo them if desired.
Cancel. Button. Undoes the changes made using the image preprocessing
dialog, then closes it.
Undo. Button. Undo the latest change made using the image preprocessing
panel. You can not undo changes made before the panel was opened.
Redo. Button. Redo the last change undone.
Add (checkline). Button. When on, drag in the view to create checklines.
Delete (checkline). Button. Delete the selected checkline.
Final/Padded. Button. Reads either Final or Padded: the two display modes of
the viewport. The final view shows the final image coming from the image
preparation subsection. The padded view shows the image after padding
and lens undistortion, but before stabilization or resampling.
Both. Button. Reads either Both, Neither, or ImgPrep, indicating whether the
image prep and/or main SynthEyes display window are updated
simultaneously as you change the image prep controls. Neither mode
saves time if you do not need to see what you are doing. Both mode
allows you to show the Padded view and Final view (in the main camera
view) simultaneously.
Margin. Spinner. Creates an extra off-screen border around the image in the
image prep view. Makes it easier to see and understand what the
stabilizer is doing, in particular.
Show. Button. When enabled, trackers are shown in the image prep view.
Image Prep View. Central image display. Shows either the final image produced
by the image prep subsystem (Final mode), or the image obtained after
padding the image and undistorting it (Padded mode). You can drag the
Region-of-interest (ROI) and Point-of-interest (POI) around, plus you can
click to select trackers, or lasso-select by dragging.
Playbar (at bottom)
Preset Manager. Drop-down. Lets you create and control presets for the image
prep system, for example, different presets for the entire shot and for each
moving object in the shot.
Preset Mgr. Disconnect from the current preset; further changes on
the panel will not affect the preset.
New preset. Create and attach to a new preset. You will be
prompted for the name of the new preset.
Reset. Resets the current preset to the initial settings, which do
nothing to the image.
Rename. Prompt for a new name for the current preset.
Delete. Delete the current preset.
Your presets. Selecting your preset will switch to it. Any changes
you then make will affect that preset, unless you later select
the Preset Mgr. item before switching to a different preset.
Rewind. Button. Go back to the beginning of the shot.
Back Key. Button. Go back to the previous frame with a ROI or Levels key.
Back Frame. Button. Go back one frame; with Control down, back one key;
with Shift down, back to the beginning of the shot. Auto-repeats.
Frame. Spinner. The frame to be displayed in the viewport, and to set keys for.
Note that the image does not update while the spinner drags because that
would require fetching all the intermediate frames from disk, which is
largely what we're trying to avoid.
Forward Frame. Button. Go forward one frame; with Control down, forward
one key; with Shift down, forward to the end of the shot. Auto-repeats.


Forward Key. Button. Go forward to the next frame with a ROI or Levels key.
To End. Button. Go to the end of the shot.
Make Keys. Checkbox. When off, any changes to the levels or region of
interest create keys at frame zero (for when they are not animated). With
the checkbox on, keys are created at the current frame.
Enable. Button (stoplight). Allows you to temporarily disable levels, color, blur,
downsampling, channels, and ROI, but not padding or distortion. Use to
find a lost ROI, for example. Effective only within image prep.
Rez Tab
DownRez. Drop-down list: None, By 1/2, By 1/4. Causes the image from disk to
be reduced in resolution by the specified amount, saving RAM and time
for large film images, but reducing accuracy as well.
Interpolation. Drop-down list. Bi-Linear, 2-Lanczos, 3-Lanczos, 2-Mitchell. The
bi-linear method is fastest but softens the image slightly. If the shot has a
lot of noise, that can be a good thing. The 2-Lanczos filter provides a
sharper result, after a longer time. The 3-Lanczos filter is even sharper,
with more time and of course the noise is made sharper also. 2-Mitchell is
between bi-linear and 2-Lanczos in sharpness.
Channel. Drop-down list: RGB, Luma, R, G, B, A. Allows a luminance image to
be used for tracking, or an individual channel such as red or green. Blue is
usually noisy, alpha is only for spot-checking the incoming alpha. This can
reduce memory consumption by a factor of 3.
Invert. Checkbox. Inverts the RGB image or channel to improve feature visibility.
Channel Depths: Process. 8-bit/16-bit/Float. Radio buttons. Selects the bit
depth used while processing images in the image preprocessor. Note that
Half is intentionally omitted because it is slow to process; use Float for
processing, then store as Half. Same controls as on the Shot Setup dialog.
Channel Depths: Store. 8-bit/16-bit/Half/Float. Radio buttons. Selects the bit
depth used to store images, after pre-processing. You may wish to
process as floats then store as Halfs, for example.
Keep Alpha. Checkbox. Requests that SynthEyes read and store the alpha
channel (always 8-bit) even if SynthEyes will not use it itself, typically so
that it can be saved with the pre-processed version of the sequence.
Alpha data can be in the RGB files, ie RGBA, or in separate alpha-channel
files, see Separate Alpha Channels.
Mirror Left/Right. Checkbox. Mirror-reverse the image left and right (for some
stereo rigs).
Mirror Top/Bottom. Checkbox. Mirror-reverse the image top and bottom (for
some stereo rigs).
360 VR. Dropdown. This is the same control as on the Shot Settings panel; it
controls 360 VR processing, with modes indicating that the shot is 360 VR
or not, and also for changing 360 VR to non-VR and back.


Filtering Tab
Blur. Spinner. Causes a Gaussian blur with the specified radius, typically to
minimize the effect of grain in film. Applied before down-sampling, so it
can eliminate artifacts.
Hi-Pass. Spinner. When non-zero, creates a high-pass filter using a Gaussian
blur of this radius. Use to handle footage with very variable lighting, such
as explosions and strobes. Radius is usually much larger than typical blur
compensations. Applied before down-sampling. (A sketch of this kind of
filter appears after this tab's controls.)
Noise Reduce. Spinner. Controls a noise-reduction algorithm intended to reduce
noise with a bit less blur than the regular blur. The value has no physical
meaning; typically values in the 1-5 range are useful. This noise reduction
algorithm is for helping tracking, not producing finished images. It
avoids thresholding and median filtering, which can shift feature locations.
Unclear if this is more helpful than a similar small Blur value.
Luma Blur. Spinner. Blurs just the luminance portion of the image, typically for
processing DNG images. Artifacts can result. Use Blur for normal
degraining.
Chroma Blur. Spinner. Blurs just the chrominance portion of the image, typically
for processing DNG images. Artifacts can result. Use Blur for normal
degraining.
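
A high-pass filter of this kind is conventionally built by subtracting a heavily
blurred copy of the image from the original, which removes slow lighting
changes while keeping local detail. The short sketch below shows that generic
construction using SciPy; the offset and normalization are assumptions, not
the exact filter SynthEyes applies.

    # Generic high-pass construction; not the exact SynthEyes implementation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def high_pass(image, radius):
        """image: float array in 0..1; radius used here as the Gaussian sigma."""
        low = gaussian_filter(image, sigma=radius)   # slowly-varying lighting
        # Keep local detail, recentered around mid-gray so it stays displayable.
        return np.clip(image - low + 0.5, 0.0, 1.0)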
Levels Tab
3-D Color Map. Drop-down selector. Select a 3-D Color Look-Up-Table (LUT) to
use to process the images. Note that when a LUT is present, the Level
Adjustments are ignored, but Hue, Saturation, and Exposure are still
active (especially for the case of 1D LUTs, and also so a 3D LUT can
have its exposure changed to make features more visible).
Open. Button. Brings up a file selector to use a specific color map file anywhere
on disk (not just in the user or system preset areas).
Reload. Button. Forces an immediate reload of the selected color map. Note that
File/Find New Scripts also does a reload of any color maps that have
changed. Either way, reloading a color map will invalidate the image
cache.
Save. Button. Writes a .cube 1D or 3D (as needed) color map LUT that
corresponds to the current settings of the Level Adjustments, Hue,
Saturation, and Exposure. The resolution is set by a FILE EXPORT
preference (1D maps are 8x the specified 3D LUT resolution). Allows you
to reuse those settings in different scene files, or other applications. Note
that while those controls can be animated, a color map cannot. The color
map is written using the settings of the current frame. You can write the
color map into your LensPresets folder so it will be listed as a color preset
for easy use, or write it to any other location on disk. When reapplying the
color map, you will likely want to clear the Hue, Saturation, and Exposure
tracks, at least to start with, so that they are not double-applied.
High. Spinner. Incoming level that will be mapped to full white in RAM. Changing
the level values will create a key on the current frame if the Make Keys
checkbox is on, so you can dynamically adjust to changes in shot image
levels. Use right-click to delete a key, shift-right-click to truncate keys past
the current frame, and control-right-click to kill all keys. High, Mid, and Low
are all keyed together.
Mid. Spinner. Incoming level that will be mapped to 50% white in RAM. (Controls
the effective gamma.)
Low. Spinner. Incoming level that will be mapped to 0% black in RAM.
Gamma. Spinner. A gamma level corresponding to the relationship between
High, Mid, and Low (see the sketch after this tab's controls).
Hue. Spinner. Rotates the hue angle +/- 180 degrees. Might be used to line up a
color axis a bit better in advance of selecting a single-channel output.
Saturation. Spinner. Controls the saturation (color gain) of the images, without
affecting overall brightness.
Exposure. Spinner. Controls the brightness, up or down in F-stops (2 stops = a
factor of two). This exposure control affects images written to disk, unlike
the range adjustment on the shot setup panel. This one can be animated,
that one cannot.
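
A hedged sketch of how the four level spinners relate, assuming a conventional
levels mapping: input values are first normalized so Low maps to 0 and High
maps to 1, then a power curve is applied so that Mid lands at 50%. The exact
curve SynthEyes uses is not spelled out here, so treat this purely as an
illustration.

    # Conventional High/Mid/Low levels mapping; the SynthEyes curve may differ.
    import math

    def levels(value, low=0.0, mid=0.5, high=1.0):
        t = min(max((value - low) / (high - low), 0.0), 1.0)   # Low->0, High->1
        m = (mid - low) / (high - low)        # where Mid lands after normalizing
        gamma = math.log(0.5) / math.log(m)   # exponent so Mid maps to 50%
        return t ** gamma

    print(levels(0.4, low=0.1, mid=0.4, high=0.9))   # 0.5: the Mid level -> 50%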
Cropping Tab
Left Crop. Spinner. The amount of image that was cropped from the left side of
the film.
Width Used. Spinner. The amount of film actually scanned for the image. This
value is not stored permanently; it multiplies the left and right cropping
values. Normally it is 1, so that the left and right crop are the fraction of the
image width that was cropped on that side. But if you have film
measurements in mm, say, you can enter all the measurements in mm
and they will eventually be converted to relative values.
Right Crop. Spinner. The relative amount of the width that was cropped from the
right.
Top Crop. Spinner. The relative amount of the height that was cropped.
Height Used. Spinner. The actual height of the scanned portion of the image,
though this is an arbitrary value.
Bottom Crop. Spinner. The relative amount of the height that was cropped along
the bottom.
Effective Center. 2 Spinners. The optic center falls, by definition, at the center of
the padded-up (uncropped) image. These values show the location of the
optic center in the U and V coordinates of the original image. You can also
change them to achieve a specified center, and corresponding cropping
values will be created.
Maintain original aspect. Checkbox. When checked, changing the effective
image center will be done in a way that maintains the original image
aspect ratio, which minimizes user confusion and workflow impact.
Stabilize Tab
For more information, see the Stabilization section of the manual.


Get Tracks. Button. Acquires the path of all selected trackers and computes a
weighted average of them together to get a single net point-of-interest
track.
Stabilize Axes:
Translation. Dropdown list: None/Filter/Peg. Controls stabilization of the left/right
and up/down axes of the stabilizer, if any. The Filter setting uses the cut
frequency spinner, and is typically used for traveling shots such as a car
driving down a highway, where features come and go. The Pegged
setting causes the initial position of the point of interest on the first frame
to be kept throughout the shot (subject to alteration by the Adjust tracks).
This is typical for shots orbiting a target.
Rotation. Dropdown list: None/Filter/Peg. Controls the stabilization of the
rotation of the image around the point of interest.
Cut Freq(Hz). Spinner. This is the cutoff frequency (cycles/second) for low-pass
filtering when an axis is set to Filter rather than Peg. Any higher
frequencies are attenuated, and the higher they are, the less they will be
seen. Higher values are suitable for removing interlacing or residual
vibration from a car mount, say. Lower values under 1 Hz are needed for
hand-held shots. Note that below a certain frequency, depending on the
length of the shot, further reducing this value will have no effect (see the
note after this tab's controls).
Auto-Scale. Button. Creates a Delta-Zoom track that is sufficient to ensure that
there are no empty regions in the stabilized image, subject to the
maximum auto-zoom. Can also animate the zoom and create Delta U and
V pans depending on the Animate setting.
Animate. Dropdown list: Neither/Translate/Zoom/Both. Controls whether or not
Auto-Scale is permitted to animate the zoom or delta U/V pan tracks to
stay under the Maximum auto-zoom value. This can help you achieve
stabilization with a smaller zoom value. But, if it is creating an animated
zoom, be sure you set the main SynthEyes lens setting to Zoom.
Maximum auto-zoom. Spinner. The auto-scale will not create a zoom larger
than this. If the zoom is larger, the delta U/V and zoom tracks may be
animated, depending on the Animate setting.
Clear Tracks. Button. Clears the saved point-of-interest track and reference
track, turning off the stabilizer.
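
The note about a minimum useful cutoff follows from simple arithmetic: a filter
cannot remove motion slower than roughly one cycle over the whole shot. A
back-of-the-envelope check (illustrative rule of thumb only):

    # Roughly the lowest cutoff that still has an effect: about one cycle per shot.
    frames, fps = 240, 24.0
    shot_seconds = frames / fps            # 10 seconds
    lowest_useful_hz = 1.0 / shot_seconds
    print(lowest_useful_hz)                # ~0.1 Hz; cutoffs below this do little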
Lens Tab
Get Solver FOV. Button. Imports the field of view determined by a SynthEyes
solve cycle, or previously hand-animated on the main SynthEyes lens
panel, placing these field of view values into the stabilizer's FOV track.
Field of View. Spinner. Horizontal angular field of view in degrees. Animatable.
Separate from the solver's FOV track, as found on the main Lens panel.
Focal Length. Spinner. Camera focal length, based on the field of view and back
plate width shown below it. Since plate size is rarely accurately known,
use the field of view value wherever possible. (A conversion sketch
appears after this tab's controls.)


Plate. Text display. Shows the effective plate size in millimeters and inches. To
change it, close the Image Prep dialog, and select the Shot/Edit Shot
menu item.
Get Solver Distort. Button. Brings the distortion coefficient from the main Lens
panel into the image prep system's distortion track. Note that while the
main lens distortion cannot be animated, this image prep distortion can
be. This button imports the single value, clearing any other keys. You will
be asked if you want to remove the distortion from the main lens panel;
you should usually answer yes to avoid double-distortion.
Distortion. Spinner. Removes this much distortion from the image. You can
determine this coefficient from the alignment lines on the SynthEyes Lens
panel, then transfer it to this Image Preparation spinner. Do this BEFORE
beginning tracking. Can be animated.
Cubic Distort. Spinner. Adjusts more-complex (higher-order) distortion in the
image. Use to fine-tune the corners after adjusting the main distortion at
the middle of the top, bottom, left, and right edges. Can be animated.
(Note: Nuke supports quartic but not cubic distortion terms.)
Quartic Distort. Spinner. Adjusts more-complex (higher-order) distortion in the
image. Use to fine-tune the corners after adjusting the main distortion at
the middle of the top, bottom, left, and right edges. Can be animated.
Scale. Spinner. Enlarges or reduces the image to compensate for the effect of
the distortion correction. Can be animated.
Lens Selection. Dropdown. Select a lens distortion profile or image map to apply
(instead of the Distortion/Cubic/etc Distort values), or none at all. These
curve selections can help you solve fisheye and other complex wide-angle
shots, with proper advance calibration.
Open. Button. Brings up a file selector to use a specific lens distortion profile or
image map file anywhere on disk (not just in the user or system preset
areas).
Reload. Button. Reloads the currently-selected lens profile from disk. This will
flush all the frames that use the old version of the profile when the image
preprocessor panel is closed. File/Find New Scripts will reload any lens
profile that has changed.
Nominal BPW. Text field, invisible when empty. A nominal back-plate-width
value supplied by the lens profile; use it at your discretion.
Nominal FL. Text field, invisible when empty. A nominal focal length supplied by
the lens profile; use it at your discretion.
Apply distortion. Checkbox. Normally the distortion, scale, and cropping
specified are removed from the shot in preparation for tracking. When this
checkbox is turned on, the distortion, scale, and cropping are applied
instead, typically to reapply distortion to externally-rendered shots to be
written to disk for later compositing. When being turned on, the Output
Tab's New Width, Height, and Aspect settings should be configured to
those of the original footage, and the main aspect ratio (on the Shot
Settings) should be set to that of the images to which the distortion is
being applied, ie what the image preprocessor was previously outputting.
Also if the VR Mode was Remove, it should be changed to Apply. These
settings changes are all done automatically by using Shot/Change Shot
Images and selecting the Re-distort CGI---change to Apply option.
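
For reference, the horizontal field of view and the focal length are tied together
by the back plate width through the standard pinhole relationship,
FL = (plate width / 2) / tan(FOV / 2). The small Python sketch below shows the
conversion both ways; it is an illustration of how the two spinners track each
other, with the plate width coming from the Shot Settings as noted above.

    # Standard pinhole conversion between horizontal FOV and focal length.
    import math

    def fov_to_focal(fov_deg, plate_width_mm):
        return (plate_width_mm / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

    def focal_to_fov(focal_mm, plate_width_mm):
        return math.degrees(2.0 * math.atan((plate_width_mm / 2.0) / focal_mm))

    print(round(fov_to_focal(54.0, 36.0), 1))   # ~35.3 mm on a 36 mm plate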
Adjust Tab
Delta U. Spinner. Shifts the view horizontally during stabilization, allowing the
point-of-interest to be moved. Animated. Allows the stabilization to be
directed, either to avoid higher zoom factors, or for pan/scan operations.
Note that the shift is in 3-D, and depends on the lens field of view.
Delta V. Spinner. Shifts the view vertically during stabilization. Animated.
Delta Rot. Spinner. Degrees. Rotates the view during stabilization. Animated.
Delta Zoom. Spinner. Zooms in and out of the image. At a value of 1.0, pixels
are the same size coming in and going out. At a value of 2.0, pixels are
twice the size, reducing the field of view and image quality. This value
should stay down in the 1.10-1.20 range (10-20% zoom) to minimize
impact on image quality. Animated. Note that the Auto-Scale button
overwrites this track.
Output Tab
Resample. Checkbox. When turned on, the image prep output can be at a
different resolution and aspect than the source. For example, a 3K 4:3 film
scan might be padded up to restore the image center, then panned and
scanned in 3-D and resampled to produce a 16:9 1080p HD image.
New Width. Spinner. When resampling is enabled, the new width of the output
image.
New Height. Spinner. The new height of the resampled image.
New Aspect. Spinner. The new aspect ratio of the resampled image. The
resampled width is always the full width of the zoomed image being used,
so this aspect ratio winds up controlling the height of the region of the
original being used. Try it in Padded mode and you'll see.
4:3. Button. A convenience button, sets the new aspect ratio spinner to 1.333.
16:9. Button. More convenience, sets the new aspect ratio to 1.778.
Save Sequence. Button. Brings up a dialog which allows the entire modified
image sequence to be saved back to disk.
Apply to Trkers. Button. Applies the effect of the selected padding, distortion, or
stabilization to all the tracking data, so that tracking data originally created
on the raw image will be updated to correspond to the present image
preprocessor output. Used to avoid retracking after padding, changing
distortion, or stabilizing a shot. Do not hit more than once!
Padding. Checkbox. Apply/remove the effect of the cropping/padding.
Distortion. Checkbox. Apply/remove the effect of lens distortion. If Padding and
Stabilization are on, Distortion should be on also.
Stabilization. Checkbox. Apply/remove the effect of stabilization.
Remove f/Trkers. Button. Undoes the effect of the selected operations, to get
coordinates that are closer to, or correspond directly to, the original image.
Use to remove the effect of earlier operations from tracking data before
changing the image preprocessor setup, to avoid retracking.
Region of Interest (ROI)
Hor. Ctr., Ver. Ctr. Spinners. These are the horizontal and vertical center
position of the region of interest, ranging from -1 to +1. These tracks are
animated, and keys will be set when the Make Keys checkbox is on.
Normally set by dragging in the view window. A smaller ROI will require
less RAM, allowing more frames to be stored for real-time playback. Use
right-click to delete a key, shift-right-click to truncate keys past the current
frame, and control-right-click to kill all keys.
Half Width, Half Height. Spinners. The width and height of the region of interest,
where 0 is completely skinny, and 1 is the entire width or height. They are
called Half Width and Height because with the center at 0, a width of 1
goes from -1 to +1 in U,V coordinates. Use Control-Drag in the viewport to
change the width and height. Keyed simultaneously with the center
positions. Use right-click to delete a key, shift-right-click to truncate keys
past the current frame, and control-right-click to kill all keys. (A pixel-
coordinate conversion sketch follows this section.)
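
The center/half-size convention above can be turned into pixel coordinates
directly: the center runs from -1 to +1 across the image, and a half-size of 1
spans the whole width or height. The Python sketch below makes that mapping
explicit; it is illustrative only, and whether V counts from the top or the bottom
of the image is left as a convention.

    # Convert an ROI (center in -1..+1, half-size in 0..1) to a pixel rectangle.
    def roi_to_pixels(hor_ctr, ver_ctr, half_w, half_h, width, height):
        left  = (hor_ctr - half_w + 1.0) / 2.0 * width
        right = (hor_ctr + half_w + 1.0) / 2.0 * width
        lo_v  = (ver_ctr - half_h + 1.0) / 2.0 * height
        hi_v  = (ver_ctr + half_h + 1.0) / 2.0 * height
        return left, lo_v, right, hi_v

    print(roi_to_pixels(0.0, 0.0, 1.0, 1.0, 1920, 1080))   # the full frame
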
Save Processed Image Sequence Dialog
Launched from the Save Sequence button on the Output tab.

... (ellipsis). Button. Click this to set the output file name to write the
sequence to. Make sure to select the desired file type as you do this.
When writing an image sequence, include the number of zeroes you wish
in the resulting sequence file names. For example, seq0000 will be a four-
digit image number, starting at zero, while seq1 will have a varying
number of digits, starting from 1. (A small numbering sketch appears after
this dialog's controls.)
Compression Settings. Button. Click to set the desired compression settings,
after setting up the file name and type. Subtle non-SynthEyes Quicktime
feature: the H.264 codec requires that the Key Frame every frames
checkbox in its compression settings must be turned off. Otherwise the
codec produces only a single frame! Also, be sure to set compression for
Quicktime movies, there is no default compression set by Quicktime.
RGB Included. Checkbox. Include the RGB channels in the files produced
(should usually be on).
Alpha Included. Checkbox. Include the alpha channel in the output. Can be
turned on only if the output format permits it. If the input images do not
contain alpha data, it will be generated from the roto-splines and/or green-
screen key. Or, if (only after) you turn off the RGB Included checkbox, you
can turn on the Alpha Included checkbox, and alpha channel data will be
produced from the roto-spline/green-screen and converted to RGB data
that is written. This feature allows a normal black/white alpha-channel
image to be produced even for formats that do not support alpha
information, or for other applications that require separate alpha data.
Meshes Included. Checkbox. When set, the meshes will be (software) rendered
over top of the shot, for quick previewing only, especially for 360 VR
shots. When checked, only 8-bit/channel output will be produced; keep
this off normally, so that 16-bit and floating-point channel images can be
produced. (Use the Perspective Window's Preview Movie for normal
previews; it includes motion blur and additional antialiasing options.)
Start. Button. Starts writing the image sequence.
Close/Cancel. Button. Close: saves the filename and settings, then closes. When
running, changes to Cancel: stops when next convenient. For image
sequences on multi-core processors, this can be several frames later
because frames are being generated in parallel.
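
The numbering behavior described for the output file name can be illustrated
with a short Python fragment; the helper below is hypothetical, not part of
SynthEyes. A name ending in seq0000 produces fixed four-digit numbers
starting at zero, while seq1 produces plain numbers starting at one.

    # Illustrates the documented numbering: trailing zeroes fix the digit count.
    import re

    def numbered_names(base, count):
        m = re.search(r'(\d+)$', base)        # trailing digits set the pattern
        stem, digits = base[:m.start()], m.group(1)
        start, width = int(digits), len(digits)
        return [f"{stem}{n:0{width}d}" for n in range(start, start + count)]

    print(numbered_names("seq0000", 3))   # ['seq0000', 'seq0001', 'seq0002']
    print(numbered_names("seq1", 3))      # ['seq1', 'seq2', 'seq3']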


Multiple Export Preference Configuration

Reset from Prefs. Button. Reloads the current scene's list of multiple exports
from the set you've stored as your preferences.
Save as Prefs. Button. Store the current list of exporters as your preferences.
(List of exports). Listbox. Each export will be run when you do a File/Export
Multiple. You can add an extra filename suffix to the scene's basic name
for any or all of the exports; use that to produce different-named files when
you have several of the same type, ie several Python files, so you can tell
which is which. Double-click the entry to bring up a dialog to set the suffix.
After setting it, it will be included on that line after a semi-colon, for
example the C4D suffix for Cinema 4D in the example above. To remove
an export from the list, select it, then press the delete key.
(Add exporters by ...). Dropdown List. As the name says, you can add exporters
to the export list just by selecting them on the dropdown list.


Notes Editor
Used to edit one or more notes, which appear in the camera and
perspective views.

Title bar. Shows the name/number of the note being edited.


Shown. Checkbox. When turned off, the note isn't visible in the camera views.
Stationary. Checkbox. When checked, the note is unaffected by panning and
zooming of the camera or perspective view. When unchecked, it is pinned
to a location on the image, and pans and zooms with it. Note that there is
a preference to make new notes default to stationary.

WARNING: If you place a pinned note off the edge of the (zoomed out)
shot image, then change the note to stationary, you won't be able to
see it, it's now off the edge of the window. (Change it back and fix.)

Color swatch. The color of the note's background (pastel preferred). By default,
it is a special marker color that defers to a color set in the preferences.
There's also a preference for the text color.
Creation wand. Button. Creates a new note top-left in the active camera view.
Delete. Button. Deletes this note.
Go back. Button. Goes to the prior note, as sorted by begin and end frame
numbers. If the prior note isn't visible on the current frame, the current
frame is changed to the prior note's begin frame.
Go forward. Button. Goes to the next note, as sorted by begin and end frame
numbers. If the next note isn't visible on the current frame, the current
frame is changed to the next note's begin frame.
Camera selector. Dropdown. What camera (shot) the note is attached to. Select
All to appear on all cameras.
Begin Frame. Spinner. Sets the first frame that the note is visible on.
Here. Button. Sets the begin frame to the current frame.


Go. Button. Changes the main user interface to the begin frame.
End Frame. Spinner. Sets the last frame that the note is visible on.
Here. Button. Sets the end frame to the current frame.
Text Field. One or more lines of text that are the content of the note.

Path Filtering Dialog


The path filtering dialog controls the filtering of the camera path and field
of view, if any, that takes place after solving or directly upon your command.

Launched from the Filtering Control button on the solver panel or the
Window/Path filtering menu item.

Warning: Path and FOV filtering CAUSE sliding, because they move
the camera path AWAY from the position that produces the best,
locked-on, results.

The filtering is best used as part of a workflow where you only filter a few
axes, lock them after filtering, then refine the solution to accommodate the effect
of the filtering.
The selection of solve or seed path and whole shot or playback range are
available only interactively; the solving process always filters the whole shot into
the solve path (if enabled).

Frequency. Animated spinner. Cutoff frequency controlling how quickly the
parameter is allowed to change, in cycles per second (Hz), ranging up to
at most 1/2 the frame rate.
Strength. Animated spinner. Controls how strongly the filtering is applied,
ranging from 0 (none) to 1 (completely filtered at the given frequency).


X/Y/Z. Checkboxes. One checkbox for each translational axis, controlling
whether or not it will be filtered.
Rotation. Checkbox. Controls whether rotation is filtered. Note that there are no
separate per-axis controls for rotation; it is filtered as a whole.
Distance. Checkbox. Controls whether the camera/origin (camera tracks) or
object/camera (object tracks) distance is filtered or not. Primarily intended
for difficult object tracks where most error is in the direction towards or
away from the camera.
FOV. Checkbox. Controls whether or not the field of view is filtered.
To Solve Path. The filtered path (and/or FOV) is written into the solve tracks
(normal default).
To Seed Path. The filtered path (and/or FOV) is written into the seed tracks,
which is available only interactively and is intended to generate data for
hard or soft locking the axes.
Whole Shot. The entire shot is filtered (normal default).
Playback range. Only the portion of the shot between the green and red
playback range markers on the timebar will be filtered. For interactive use
(only), this can be quicker and easier than setting up an animated strength
value to adjust a portion of a shot. The filtering will blend in and out at
each end of the playback range.

Preview Movie Settings Dialog


See the preview movie description in the Perspective Window Reference.

Shot Settings Dialog


This dialog appears after you have selected the footage while creating a
new scene, adding a new shot or survey, or editing an existing shot with
Shot/Edit Shot. There is additional material in the Opening the Shot section of the
main manual.


Start Frame, End Frame: the range of frames to be examined. You can adjust
this from this panel, or by shift-dragging the end of the frame range in the
time bar.
Stereo Off/Left/Right. Sequences through the three choices to control the setup
for stereo shots. Leave at Off for normal (monocular) shots, change to Left
when opening the first, left, shot of a stereo pair. See the section on
Stereoscopic shots for more information.
360 VR Mode. Dropdown. Controls processing of spherical 360 degree VR
footage, with four modes: None, for normal non-VR footage; Present,
when the footage is 360 degree footage being processed as that;
Remove, for 360 degree footage that is being dynamically converted to
linear footage; or Apply, for linear footage that is being converted to 360
VR footage.
Time-Shift Animation: enabled only after a Shot/Change Shot Imagery menu
item, this spinner lets you indicate that additional frames have been added
or removed from the beginning of the shot, and that all the trackers, object
paths, splines, etc, for this shot should be shifted earlier or later in the
shot.
Frame rate: Usually 24, 23.976, or 29.97 frames per second. NTSC is used in
the US & Japan, PAL in Europe. Film is generally 24 fps, but you can use
the spinner for over- or under-cranked shots or multimedia projects at
other rates. Some software may have generated or require the rounded 25
or 30 fps, SynthEyes does not care whether you use the exact or
approximate values, but it may be crucial for downstream applications.


Interlacing: No for film or progressive-scan DV. Yes to stay with 25/30 fps,
skipping every other field. Minimizes the amount of tracking required, with
some loss of ability to track rapid jitter. Use Yes, But for the same thing,
but to keep only the other (odd) field. Use Starting Odd or Starting Even
for interlaced video, depending on the correct first field. Guessing is fine.
Once you have finished opening the shot in a second, step through a few
frames. If they go 2 steps forward, one back, select the Shot/Edit Shot
menu item, and correct the setting. Use Yes or None for source video
compressed with a non-field-savvy codec such as sequenced JPEG.
Channel Depths: Process. 8-bit/16-bit/Float. Radio buttons. Selects the bit
depth used while processing images in the image preprocessor. Note that
Half is intentionally omitted because it is slow to process; use Float for
processing, then store as Half. Same controls as on the Rez tab of the Image
Preprocessor.
Channel Depths: Store. 8-bit/16-bit/Half/Float. Radio buttons. Selects the bit
depth used to store images, after pre-processing. You may wish to
process as floats then store as Halfs, for example. A Half is a 16-bit
floating-point number, so it has enhanced range (not as much as a float)
but takes only half the storage of a float.
Apply Preset: Click to drop down a list of different film formats; selecting one of
them will set the frame rate, image aspect, back plate width, squeeze
factor, interlace setting, rolling shutter, and indirectly, most of the other
aspect and image size parameters. You can make, change, and delete
your own local set of presets using the Save As and Delete entries at the
end of the preset list.
Image Aspect: overall image width divided by height. Equals 1.333 for video,
1.777 for HDTV, 2.35 or other values for film. Click Square Pix to base it
on the image width divided by image height, assuming the pixels are
square(most of the time these days). Note: this is the aspect ratio input to
the image preprocessor. The final aspect shown at lower right is the
aspect ratio coming out of the image preprocessor.
Pixel Aspect: width to height ratio of each pixel in the overall image. (The pixel
aspect is for the final image, not the skinnier width of the pixel on an
anamorphic negative.)
Back Plate Width: Sets the width of the film of the virtual camera, which
determines the interpretation of the focal length. Note that the real values
of focal length and back plate width are always slightly different than the
book values for a given camera. Note: Maya is very picky about this
value; use what it uses for your shot.
Back Plate Height: the height of the film, calculated from the width, image
aspect, and squeeze.
Back Plate Units. Shows "in" for inches, "mm" for millimeters; click to change the
desired display units.
Anamorphic Squeeze: when an anamorphic lens is used on a film camera, it
squeezes a wide-screen image down to a narrower negative. The
squeeze factor reflects how much squeezing is involved: a value of 2
means that the final image is twice as wide as the negative. The squeeze
is provided for convenience; it is not needed in the overall SynthEyes
scene.
Negatives Aspect: aspect ratio of the negative, which is the same as the final
image, unless an anamorphic squeeze is present. Calculated from the
image aspect and squeeze factor.
Rolling Shutter Enable. Checkbox. Enables rolling-shutter compensation during
solving for the tracker data of the camera and any objects attached to this
shot. CMOS cameras are subject to rolling shutter; it causes intrinsic
image artifacts.
Rolling Shutter Fraction. Spinner. This is the fraction of the frame time that it
takes to read out the image data from the CMOS chip. For an old NTSC
TV camera, there are 486 active lines, and 525 total (including vertical
blanking) for a fraction of 0.9257. For a camera running at 60 fps, the
frame time is 16.667 msec (1sec/60frames*1000msec/sec). If it takes 15
msec to read out the image, the rolling shutter fraction should be set to
0.9, ie 15/16.667. You will have to obtain these values from the camera
manufacturer or by experimentation. (The arithmetic is written out in the
short sketch after this dialog's controls.)
Keep Alpha: when checked, SynthEyes will keep the alpha channel when
opening files, even if there does not appear to be a use for it at present (ie
for rotoscoping). Alpha data can be in the RGB files, ie RGBA, or in
separate alpha-channel files, see Separate Alpha Channels. Turn on
when you want to feed images through the image preprocessor for lens
distortion or stabilization and then write them, and want the alpha channel
to be processed and written also.
F.-P. Range Adjustment: adjusts the shot to compensate for the range of
floating-point image types (OpenEXR, TIFF, DPX). Values should go from
0..1; if not, use this control to increase or decrease the apparent shot
exposure by this many f-stops as it is read in. Different than the image
preprocessor exposure adjust, because this affects the display and
tracking but not images written back to disk from the image
preprocessor.
HiRez: For your supervised trackers, sets the amount of image re-sampling and
the interpolation method between pixels. Larger values and fancier types
will give sharper images and possibly better supervised tracking data, at a
cost of somewhat slower tracking. The default Linear x 4 setting should
be suitable most of the time. The fancier types can be considered for high-
quality uncompressed source footage.
Image Preprocessing: brings up the image preprocessing (preparation) dialog,
allowing various image-level adjustments to make tracking easier (usually
more so for the human than the machine). Includes color, gamma, etc, but
also memory-saving options such as single-channel and region-of-interest
processing. This dialog also accesses SynthEyes image stabilization
features.
Memory Status: shows the image resolution, image size in RAM in megabytes,
shot length in frames, and an estimated total amount of memory required
for the sequence compared to the total still available on the machine. Note
that the last number is only a rough current estimate that will change
depending on what else you are doing on the machine. The memory
required per frame is for the first frame, so this can be very inaccurate if
you have an animated region-of-interest that changes size in the Image
Preprocessing system. The final aspect ratio coming out of the image
preprocessor is also shown here; it reflects resampling, padding, and
cropping performed by the preprocessor.
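
The worked example in the Rolling Shutter Fraction entry is simple arithmetic
and can be written out directly. Illustrative Python, matching the numbers
quoted above:

    # Rolling shutter fraction = sensor readout time / frame time.
    fps = 60.0
    frame_time_ms = 1000.0 / fps          # 16.667 ms per frame at 60 fps
    readout_ms = 15.0                     # time to read out the CMOS image
    print(round(readout_ms / frame_time_ms, 3))   # 0.9

    # Old NTSC example: 486 active lines of 525 total (incl. vertical blanking).
    print(round(486.0 / 525.0, 4))        # 0.9257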

Spinal Editing Control


Launched by the Window/Spinal Editing menu item. See Spinal Editing.

Off/Align/Solve. Button. Controls the mode in which the spinal editing features
run, if at all. In align mode, the scene is re-aligned after a change. In solve
mode, a refine solve cycle is run after a change.
Finish. Button. Used to finish a refine solve cycle that was truncated to maintain
response time. Equivalent to the Go button on the solver control panel.
Lock Weight. Spinner. This weight is applied to create a soft-lock key on each
applicable channel when the camera or object is moved or rotated. When
this spinner is dragged, the solver will run in Solve mode, so you can
interactively adjust the key weight.
Drag time (sec). Spinner. (Solve mode only.) Refine cycles will automatically be
stopped after this duration, to maintain an interactive response rate. If
zero, there will be no refine cycles during drag.
Time at release. Spinner. (Solve mode only.) An additional refine operation will
run at the completion of a drag, lasting for up to this duration. If zero, there
will not be a solve cycle at the completion of dragging (ie if the drag time is
long enough for a complete solve already).
Update ZWTs, lights, etc on drag. Checkbox. If enabled, ZWTs, lights, etc will
be updated as the camera is dragged, instead of only at the end.
Message area. Text. A text area displays the results of a solve cycle, including
the number of iterations, whether it completed or was stopped, and the
RMS error. In align mode, a total figure of merit is shown reflecting the
extent to which the constraints could be satisfied; the value will be very
small, unless the constraints are contradictory.
Preferences Controls
The spinal settings are stored in the scene file. When a new scene is
created, the spinal settings are initialized from a set of preferences. These
preferences are controlled directly from this panel, not from the preferences
panel.
Set Prefs. Button. Stores the current settings as the preferences to use for new
scenes.
Get from Prefs. Button. Reloads the current scene's settings from the
preferences.
Restore Defaults. Button. Resets the current scene's settings to factory default
values. They are not necessarily the same as the current preferences, nor
are these values automatically saved as the preferences: you can hit Set
Prefs if that is your intent.

Survey Shot IFL Editor


The survey IFL editor is used to assemble a list of image filenames into a
survey shot. Survey shots are used for collections of digital still images to
reconstruct a set from a wider range of images, instead of using principal
photography. The images may have different file types and sizes.

Image List. List box. Shows the list of files in the image list, in frame-by-frame
order. To delete an image, click on it to select it, then hit the delete key. If
you are editing an existing file list, the selected frame will be shown in the
camera view for your inspection.
Add. Button. Launches the file-selection dialog, so you can select one or more
images to add to the list. Files are inserted after the currently-selected
image, or at the end of the list if none is selected. Note that files that you
select in the operating system's file-selection dialog are added in an order the
operating system determines, not the order in which you click on them. To
control the ordering, Add one file at a time or rearrange them. Warning: if you
have already created trackers, you must always add images at the end!
Move Up. Button. Moves the selected image up one spot. Don't do this if you've
already created trackers!
Move Down. Button. Moves the selected image down one spot. Don't do this if
you've already created trackers!

Stereo Geometry Dialog


Launched from the Shot menu, this modeless dialog adds and controls
constraints on the relative position and orientation of the two cameras in a 3-D
stereo rig. The constraints prevent chatter that could not have occurred in the
actual rig.

Make Keys. Button. When on, keys are created and shown at the current frame.
When off, the value and status on the first frame in the shot are shown
for non-animated fixed parameters.
Back to Key. Button. Moves back to the nearest previous frame with any stereo-
related key.
Forward to Key. Button. Moves forward to the nearest following frame with any
stereo-related key.
Dominant Camera. Drop-down list. Select which camera, left or right, should be
taken to be the dominant stationary camera; the stereo parameters will
reflect the position of the secondary camera moving relative to the
dominant camera. The Left and Right settings are for rigs where only one
camera toes in to produce vergence; the Center-Left and Center-Right
settings are for rigs where both cameras toe in equally to produce
vergence. When you change dominance, you will be asked if you wish to
switch the direction of the links on the trackers (and solver modes).
Show Actuals. Radio button. When selected, the Actual Values column shows
the actual value of the corresponding parameter on the current frame.
Show Differences. Radio button. When selected, the Actual Values column
shows the difference between the Lock-To Value and the actual value on
the frame.
The following sections describe each of the parameters specifying the
relationship between the two cameras, ie one for each row in the stereo
geometry panel. Note that the parameters are all relative, ie they do not depend
on the overall position or orientation within the 3-D environment: if you move the
two cameras as a unit, they can be anywhere in 3-D without changing these
parameters. For each parameter there are a number of columns, which are
described in the section after this. A small geometric sketch of how Distance,
Direction, and Elevation place the secondary camera follows the parameter
descriptions.
Distance. Parameter row. This is the inter-ocular distance between the nodal
points of the two cameras. Note that this value is unit-less, like the rest of
SynthEyes; its units are simply those of the rest of the 3-D environment. So if
you want the main 3-D environment to be in feet, you should enter the
inter-ocular distance in feet also.
Direction. Parameter row. Degrees. Direction of the secondary camera relative
to the primary camera, in the coordinate system of the primary camera.
Zero means the secondary is directly beside the primary, a positive value
moves it forward until at 90 degrees it is directly in front of the primary
(though see Elevation, next). However: in Center-Left or Center-Right
mode, the zero-direction changes as a result of vergence to maintain
symmetric toe-in. See other material to help understand that.
Elevation. Parameter row. Degrees. Elevation of the secondary camera relative
to the primary camera, in the coordinate system of the primary camera. At
an elevation of zero degrees, it is at the same relative elevation. At an
elevation of 90 degrees, it would be directly over top of the primary.
Vergence. Parameter row. Degrees. Relative in/out look direction of the two
cameras. At zero, the cameras' axes are parallel (subject to Tilt and Roll
below), and positive values toe in the secondary camera. In center-left or
center-right mode, the direction to the secondary camera changes to
achieve symmetric toe-in.
Tilt. Parameter Row. Degrees. Relative up/down look direction of the secondary
camera relative to the primary. At zero, they are even; as the value
increases, the secondary camera is twisted to look upwards relative to
the primary camera.
Roll. Parameter Row. Degrees. Relative roll of the secondary camera relative to
the primary. At zero, they have no relative roll. Positive values twist the
secondary camera counter-clockwise, as seen from the back.
Description of Parameter Columns
Lock Mode. Selector. Controls the mode and functionality of constraints for this
parameter: As Is, no constraints are added; Known, constraints are
added to force the cameras so that the parameter is the Lock-To value,
which can be animated; Fixed Unknown, the parameter is forced to a
single constant value, which is unknown but determined during the solve;
Varying, the value can be intermittently locked to specific values using the
Lock button and Lock-To value, or intermittently held at a to-be-
determined value by animating a Hold range.
Color. Swatch. Shows the color of the curve in the graph editor.
Channel. Text. Name of the parameter for the row.
Lock. Button. When set, constraints are generated to lock the parameter to the
Lock-To value. Available only in Varying mode. Animated so specific
ranges of frames may be specified. If all frames are to be locked, use
Known mode instead.
Hold. Button. Animated button that forces the parameter to hold a to-be-
determined value for the specific time it is active. For example, animate on
during frames 0-50 to say that vergence is constant at some value during
that time, while allowing it to change after that. Available only for inter-
ocular distance and vergence and only in Varying mode. If Hold should be
on for the entire shot, use Fixed mode instead.
Lock-To Values. Spinner. Shows the value the parameter will be locked to,
animatable. Note that the spinner shows the value at the first frame of the
shot when Make Keys is off.
Actual Values. Text field. Shows the value of the parameter on the current
frame, or the difference between the Lock-To value and the actual value, if
Show Differences is selected.
Weights. Spinner. Animated control over the weight of the generated constraints.
Shows the value at the first frame if Make Keys is off. The value 60 is the
nominal value; the weight increases by a factor of 10 for an increase of 20 in
the value (decibels). With a range from 0 to 120, this corresponds to 0.001 to
1000 (see the worked example below). Hint: if a constraint is not having effect,
you will usually do better reducing the weight, not increasing it. It's like
shouting: rarely effective, and it just annoys people. Unlike on the hard/soft
lock panel, a weight of zero does not create a hard lock. All stereo locks are
soft; if the weight is zero it has no effect.
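As a worked example of the decibel-style scale described above (illustrative
arithmetic only; the function name is hypothetical):

    def stereo_weight_multiplier(weight):
        # 60 is nominal; every +20 multiplies the effective weight by 10
        return 10.0 ** ((weight - 60.0) / 20.0)

    print(stereo_weight_multiplier(0))     # 0.001
    print(stereo_weight_multiplier(60))    # 1.0
    print(stereo_weight_multiplier(120))   # 1000.0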
Less More Button. Shows or hides the following set of controls which shift
information back and forth between the stereo channels and the camera
positions.
Get 1f. Button. Gets the actual stereo parameters on the current frame, and
writes them into the Lock-To value spinners. Any parameter in As-Is mode
is not affected!
Get PB. Button. Same as Get 1f, but for the entire playback range (little green
and red triangles on the time bar).
Get All. Button. Same as Get 1f, but for the entire shot.
Move Left Camera Live. Mode Button. The left camera is moved to a position
determined by the right camera and stereo parameters (excluding any that
are As-Is). If you adjust the spinners, the camera will move
correspondingly. The seed path, solve path, or both are affected, see the
checkboxes at bottom.
Move Left Camera Set 1f. Button. The left camera is moved to a position
determined by the right camera and stereo parameters (excluding any that
are As-Is). The seed path, solve path, or both are affected, see the
checkboxes at bottom. Unlike Live, this is a one-shot event each time you
click the button.
Move Left Camera Set PB. Button. Same as Set 1f, but for the playback range.
Move Left Camera Set All. Button. Same as Set 1f, but updates the left camera
for the entire shot. For example, you might want to track the right camera of
a shot by itself; if you have known stereo parameters you can use this
button to instantly generate the left camera path for the entire shot.
Move Both from Center Live/Set 1f/Set PB/Set All. Same as the Left version,
except that the position of the two cameras is averaged to find a center
point, then both cameras are offset half outwards in each direction
(including tilt, roll, etc) to form new positions.
Move Right Camera Live/Set 1f/Set PB/Set All. Same as the Left version,
except the right camera is moved based on the left position and stereo
parameters.
Write seed path. Checkbox. Controls whether or not the Move buttons affect the
seed path. You will need this on if you wish to create hard or soft camera
position locks for a later solve. You can keep it off if you wish to make
temporary fixes. If you write the solve path but not seed path, anything you
do will be erased by the next solve (except in refine mode).
Write solve path. Checkbox. Controls whether or not the Move buttons affect
the solve path. Normally should be on if the camera has already been
solved; keep off if you are generating seed paths. If both Write boxes are
off, the Move buttons will do nothing. If Write seed path is on, Write solve
path is off, and the camera is solved, the Move buttons will be updating
the seed path, but you will not be able to see anything happening; you
will be seeing the solve path unless you select View/Show seed path.

Texture Control Panel


Launched from the Window menu, or the 3-D Control panel, this dialog
controls the extraction and display of textures on meshes.

(file name). Static text field. Shows the base file name of the texture on the
selected mesh; this file is either read, if Create Texture is off, or written, if Create
Texture is on.
Set. Button. Brings up the file browser to set the file name. Important: be
sure to set Create Texture appropriately before clicking Set, so that the correct
File Open or File Save dialog can be displayed.
Clear. Button. Removes the texture file name.
Options. Button. Allows the compression options for the selected file type
to be changed.
Save. Button. All selected meshes with extracted textures are re-saved to
disk. Use after painting in the alpha channel, for example.
Create Texture. Checkbox. When set, this mesh will have a texture
computed for it on demand (via Run, Run All, or after a solve), which will then be
written to the designated file. When the checkbox is clear, the specified texture
will just be shown in the viewport.

Enable. Stoplight button. Animated control over which frames are used in
the texture calculation, ie, so you can avoid frames with explosions, objects passing in front, etc.
Right-click, shift-right, and control-right delete a key, truncate, or delete all keys,
respectively.
Orientation Control. Drop-down. Shows as "None" here. Allows a texture
on disk to be oriented differently from the SynthEyes default, for more convenient
interaction with other programs.
XRes. Editable drop-down. Shows here as 1024. Sets the X (horizontal)
resolution of the created texture. Note that you can type in any value;
it is not limited to any particular maximum size.
YRes. Editable drop-down. Shows here as 512. Sets the Y (vertical)
resolution of the created texture. Note that you can type in any value; it is not
limited to any particular maximum size.
Channel Depth. Drop-down. Shows here as 8-bit. Selects the pixel bit
depth of the created texture on disk, ie 8 bit, 16 bit, half-float, or float. Note that
textures are always computed as floats.
Filter type. Drop-down. Shows here as 2-Lanczos. Selects the
interpolation filter type for texture extraction.
Lit texture. Checkbox. When checked, the textured mesh will be lighted in
the perspective view. Unchecked, it will not. Lighting is desirable for meshes with
painted textures such as a soda can, but undesirable when the texture is
generated from the pixels of the shot's image: in that case you want the pixels
used exactly as is. Has no effect on untextured objects.
Run. Button. Runs the texture extraction process for (only) the selected
texture/mesh right now.
Run All. Button. Runs all runnable texture extractions immediately.
Run all after solve. Checkbox. When set, all texture extractions will be re-
run automatically after each solve cycle, including refines.
Show only texture alpha. Checkbox. When on, the texture's alpha
channel will be displayed as the texture, instead of the RGB texture. This can
make alpha-painting easier.
Hide mesh selection. Checkbox. When on, selected meshes are drawn
normally, without the red highlighting. This makes painting and texture panel
manipulations easier to see, though it can be confusing, so this option turns off
automatically when the Texture Control Panel is closed.
Extraction Mode. Dropdown. Best pixel: the highest-weighted pixel is
produced, ie considering tilt and distance. Average: the (weighted) average pixel
intensity is produced. Average w/alpha: the extracted texture includes the
(weighted) average pixel plus an alpha channel reflecting its degree of
consistency (very repeatable pixels will be opaque, and very variable pixels
transparent). Alpha creation is subject to the additional controls in the Alpha
Creation Controls section below.
Tilt Limit. Spinner. Texture extraction suffers from reduced accuracy as
triangles are seen edge-on, because the pixels squish together. This control
limits how edge-on a triangle can be and still be used for texture extraction:
zero is no limit at all (all triangles are used); one means only perfectly
camera-facing triangles are used. Increase this value when the camera moves
extensively around a cylinder or sphere; be sure to increase the segment count,
since the control applies on a triangle-by-triangle basis. As a literal example,
multiplying the default 0.1 by 90 degrees means that triangles within 9 degrees
of edge-on are rejected, ie only triangles within 81 degrees of head-on are used
(see the sketch below).
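A minimal sketch of that arithmetic, following the literal example above (the
function name and the exact internal test are assumptions for illustration):

    def passes_tilt_limit(angle_from_edge_on_deg, tilt_limit=0.1):
        # 0 degrees = triangle seen exactly edge-on, 90 degrees = facing the camera
        return angle_from_edge_on_deg >= tilt_limit * 90.0

    print(passes_tilt_limit(5.0))    # False: within 9 degrees of edge-on, rejected
    print(passes_tilt_limit(85.0))   # True: nearly head-on, used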
Blocking control. Drop-down. Selects whether this mesh is considered
opaque for the texture extraction operations of other meshes, ie if this mesh is
between the camera and a mesh whose texture is being extracted, the blocked
pixels will not be used, ie as this mesh passes in front. This control can be set
regardless of whether this mesh is having its own texture extracted. Note that
each mesh that is blocking imposes a non-trivial performance penalty during
extraction.
Blocked by Garbage Splines. Checkbox. When set, any garbage splines
will be masked out of the texture extraction, ie, you can set up a garbage spline
around a person moving in front of a wall, for example, so that the person will not
affect the extracted wall texture.
Alpha Creation Controls
Low. Spinner. Sets the (smaller) RMS level that corresponds to an
opaque pixel.
High. Spinner. Sets the (larger) RMS level that corresponds to a
transparent pixel.
Sharpness. Spinner. Gamma-like control that affects what happens to the
alpha for RMS levels between the low and high limits.
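A heavily hedged sketch of how these three controls could combine; the exact
shaping curve SynthEyes uses between the limits is not specified here, so the
falloff below is only an assumed illustration:

    def rms_to_alpha(rms, low, high, sharpness):
        if rms <= low:
            return 1.0                    # very repeatable pixel: opaque
        if rms >= high:
            return 0.0                    # very variable pixel: transparent
        t = (rms - low) / (high - low)    # 0..1 between the limits
        return (1.0 - t) ** sharpness     # assumed gamma-like falloff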
Shadow Maker. Button. Brings up the Shadow Map Maker script. From a
light and mesh, it creates a texture map that is the shadow cast on the mesh by
the light, from all shadow-casting meshes in the scene. The UV texture
coordinates of the mesh are also set (to coordinates "as seen by" the light). To
save the texture, turn Create Texture on (even though it is already created), then
Set (filename), then Save, and finally turn Create Texture back off (so a texture
extraction won't be run later).

Viewports Feature Reference
This section describes the mouse actions that can be performed within
various display windows. There are separate major sections for the graph editor
and the perspective view.

No Middle Mouse Button?


Most windows use the middle mouse buttonpushing on the scroll
wheelto pan. This can be difficult on trackpads (Macbooks), tablets, trackballs
or with some mouse drivers installed. There is a preferences setting, No middle-
mouse button, that you can turn on. With it, hold down ALT or Command and
then drag with the left mouse button to pan. When this option is selected, the
ALT/Command-Left-click combination, which links trackers together, is selected
using ALT/Command-Right-click instead.
OS X: clicking the middle mouse button may display the Dashboard
instead. To fix that, open the Exposé and Spaces controls in the OS X System
Preferences panel, and change the middle mouse button from Dashboard to -
(nothing). You'll still be able to access the Dashboard via F12.
If you are using a tablet, you must turn off the Enable cursor wrap
checkbox on the preferences panel.
If you have no right-mouse button either, you can use the ESCape key to
terminate some mouse modes (such as roto-spline creation) that would normally
require a right-click.
Please note that SynthEyes is designed for use with a 3-button + scroll
wheel mouse. It requires many accurate fine-positioning movements, especially
for supervised tracking, and a trackpad will not permit you to work effectively;
trackpads are not designed for this activity.
Similarly, SynthEyes is designed to take advantage of all three of the
mouse buttons. The control, shift, and ALT/command keys are frequently used
for additional useful functionality, so it is not possible to dedicate one of those
keys to replace the missing button(s).

Timing Bar
The timing bar shows valid regions and keys for trackers, roto masks, etc,
depending on what is currently selected, and the active panel. Shows hold
regions with magenta bars at the top of the frames.
There are two possible backgrounds: one showing whether frames are
cached or not, the other whether there is a reasonable number of trackers or not
(for the given solving mode). Select between them via the View/Timebar
background menu item.
Green triangle: start of replay loop. Left-drag.
Red triangle: end of replay loop. Left-drag.

Black triangle: key on the selected object.
Gray triangle: variant appearance of a tracker key, when there is a key on a
secondary channel of a tracker, but not a position key.
Green bar: The Begin frame on the solver panel. Display only.
Red bar: The End frame on the solver panel. Display only.
Left Mouse: Click or drag the current frame. Drag the start and end of the replay
loop. Shift-drag to change the overall starting or ending frame. Control-
shift-drag to change the shot length and end frame, even past the end of
the shot (useful when the shot is no longer available).
Middle Mouse: Drag to pan the time bar left and right.
Middle Scroll: Scroll the current time. Shift-scroll to zoom the time bar.
Right Mouse: Horizontal drag to pan time bar, vertical drag to zoom time bar. Or,
right click cancels an ongoing left or middle-mouse operation.

Camera Window
There can be multiple camera views; in the viewport layout manager or
the viewport select tab at top-left of any view pane you can find Camera, Camera
B, LCamera, and RCamera. These differ only in the way they initially attach to an
active stereo pair: Camera B attaches to the other camera of the pair, LCamera
to the left camera, and RCamera to the right camera. This allows you to set up
complex viewport configurations for stereo tracking that configure themselves
automatically. You can also change the object a camera view displays from the
bottom of the right-click menu. (Note that the right-click menu largely duplicates the
items on the main menu that are related to the camera view.)

Note: There is a very extensive additional set of mouse operations for
planar trackers; please see the Planar Trackers in the Camera View section of
the Planar Tracking Manual.

The camera view can be floated with the Window/Floating camera menu
item.
Left Mouse: Click to select and drag a tracker, or create a tracker if the Tracker
panel's create button is lit. Shift-drag to move faster, control-drag to move
a tracker slower (more precisely). While dragging, spot or symmetry
trackers will snap to the best nearby location (within two pixels). Hold
down ALT/Command to suppress snapping. Shift-click to include or
exclude a tracker from the existing selection set. If there is only one
selected tracker and it has an offset, shift-drag drags the tracker, leaving
the offset marker (final position) unchanged. Drag in empty space to lasso
2-D trackers, control-drag to lasso both the 2-D trackers and any 3-D
points, shift-drag to lasso additional trackers. Lasso meshes instead if
"Edit/Lasso meshes instead" is turned on. Control-drag in empty space for
RGB color readout on the status line. ALT-Left-Click (Mac: Command-
Left-Click) to link to a tracker, when the Tracker 3-D panel is displayed.
Click the marker for a tracker on a different object, to switch to that object.
Drag a Lens panel alignment line. Click on nothing to clear the selection
set. If a single tracker is selected, and the Z or apostrophe/double-quote
key is pressed, pushing the left mouse button will place the tracker at the
mouse location and allow it to be dragged to be fine-tuned (called the Z-
drop feature). Or, drag a tracker's size or search region handles.
Middle Mouse Scroll: Zoom in and out about the cursor. (See mouse
preferences discussion above.)
Right Mouse: Drag vertically to zoom. Click to bring up the right-click menu. Or,
cancel a left or middle button action in progress.

Tracker Mini-View (Tracker Panel & Planar Panel)


The mini-view shows the interior of the tracker. For offset trackers, when
the tracker is unlocked, the mini-view shows the pixels being tracked and shows
an offset marker for the final tracker location (if it is within the mini-tracker view);
when the tracker is locked, the mini-view shows the final tracker location,
including the offset.

Note: The mouse operations are different for planar trackers, please
see Planar Trackers in the Tracker Mini-View in the Planar Tracking
Manual.

Left Mouse: Drag the tracker location. The control key will reduce sensitivity for
more accurate placement. Spot or symmetry trackers will snap to the best
nearby location (within two pixels). Hold down ALT/Command to suppress
snapping. Also, drag a tracker offset marker. On offset trackers, shift-drag
to move the tracker, leaving the final (offset) position stationary.
Middle Scroll: Advance the current frame, tracking as you go.
Right Mouse: Add or remove a position key at the current frame. Or, cancel a
drag in progress.

3-D Viewport
Left Mouse. There's a variety of functionality here:
Click and Drag repeatedly to create an object, when the 3-D Panel's
Create button is lit. Shift-drag to create "square" objects.
Click something to select it.
Shift-click to multi-select or unselect trackers or meshes.
Drag a lasso to select multiple trackers, shift-drag to lasso additional
trackers, control-drag to un-lasso trackers.
Lasso etc meshes when "Edit/Lasso meshes instead" is selected.
Move, rotate, or scale an object, depending on the tool selected on the 3-
D Panel (even if it has since been closed). Use the control key when
rotating or scaling for very fine adjustment. (See additional discussion
below.)
ALT-Left-Click (Mac: Command-Left-Click) to link to a tracker, when the
Tracker 3-D panel is displayed.

Normally rotation and scaling are around the pivot point. The pivot
of meshes or GeoH objects can be moved; see Edit Pivots on the right-
click or main menu.
To rotate or scale around any position in the viewport, turn on Lock
on the 3D panel, which makes sure the object doesn't un-select when you
try to rotate or scale around some position that isn't on the object!
Middle Mouse: Drag to pan the viewport. (See mouse preferences discussion
above.)
Middle Scroll: Zoom the viewport.
Right Mouse: Drag vertically to zoom the viewport. Or, cancel an ongoing left or
middle-mouse operation.

Hierarchy View
The Hierarchy view allows you to see a representation of the parenting of
various objects within SynthEyes, especially cameras, moving objects, trackers,
and meshes. You can also make various changes in parenting, as permitted.

NOTE: the hierarchy shown is that of GeoH tracking, ie where moving


objects may be parented to one another. There's a simpler underlying
hierarchy where all moving objects are parented equally to their
camera, not to one another.

This view is also available as a floating panel or as a (secondary) control
panel, for example on the default GeoHTrack room.
Each line contains up to five elements:
a disclosure triangle, if the item has children (click to change),
the item name (double-click to change), in italic if it is a mesh,
a visibility control to show/hide the item (click to change),
a color swatch (click to change),
a lock icon for objects, meshes, and trackers (click to change).
When an item is selected, that line is drawn with a light blue background.
If the camera or moving object is the active tracker host, it is drawn with a
maroon background.
In addition to the various click options described above, the following
operations are available:
Clicking a mesh will select it immediately.
Click a camera or moving object, and it will become selected and
the Active Tracker Host.
Clicking a tracker will select it. The Active Tracker Host will be
changed, if necessary, to the tracker's owning object. (Why? Only
trackers that are children of the Active Tracker Host can be
selected.)

Clicking off the end will unselect everything.


Shift-click to multi-select contiguous trackers or meshes.
Control-click to toggle an item's selection status.
Control-clicking the visibility or lock icon will set the state of all the
GeoH children in a rig, in addition to the clicked object.
Shift-clicking the color swatch of a mesh or tracker will additionally
select all meshes or trackers (on the current object) with the same
color.
Control-clicking the color swatch of a mesh or tracker will toggle the
selection of all meshes or trackers (on the current object) with the
same color.
Dragging a moving or GeoH object, mesh, or tracker will reparent it
to the camera or moving object it is dropped onto. Acceptable drop
targets (new parents) will change to green before you drop.
Dragging a mesh to a GeoH object that already has one will
present the option to replace the existing mesh with the new one,
while preserving the painted vertex weights (same as Geo Mesh
Replace script).
Dragging a mesh past the left edge will remove its parenting (ie
reparent to root).
While dragging one or more trackers or meshes or a moving or
GeoH object to a new owner, holding CONTROL will cause the
trackers, meshes, or objects (including children) to be cloned.
While dragging trackers, meshes, or objects to a new owner,
holding SHIFT will cause their parent-relative position to be left
unadjusted, rather than corrected for the changed ownership to
maintain the same ultimate world-space location.
Use the middle mouse button to pan the view.
Use the scroll wheel to scroll the view vertically.
The view will also auto-scroll when dragging an object vertically
beyond the top or bottom of the view.

Error Curve Mini-View


This view shows the maximum error of all selected trackers over the
current playback frame range (small green/red triangles at top of timebar). The
maximum error in horizontal pixels of any selected tracker is shown numerically,
labeled in lower case as "hpix."

Note: "error" means the distance between the 2-D position of the
tracker's solved XYZ coordinates, projected into the image (ie the
yellow X), and the tracker's actual position in the image.
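A minimal sketch of that error computation, expressed in horizontal pixels
(hpix); the projection itself is assumed to have been done already, and the
helper names are hypothetical, not SynthEyes' internal code:

    import math

    def tracker_error_hpix(projected_uv, tracked_uv, image_width):
        # both positions in normalized image coordinates; scale both axes by the
        # image width so the result is measured in horizontal-pixel units
        dx = (projected_uv[0] - tracked_uv[0]) * image_width
        dy = (projected_uv[1] - tracked_uv[1]) * image_width
        return math.hypot(dx, dy)

    def rms_error(per_tracker_errors):
        return math.sqrt(sum(e * e for e in per_tracker_errors) / len(per_tracker_errors))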

If no trackers are selected, then the per-frame error is shown for each
frame, averaged (root-mean-square, RMS) over all trackers. The object's
overall error value (from the Solver panel) is shown, labeled in upper case as
"HPIX."
If a non-hybrid GeoH tracker is selected, an image-difference figure of
merit is shown, from 0% to 100%.
When the preference "Error View only for sole tracker" (in the User
Interface section) is on, then the error curve is shown only when exactly one
tracker is selected... However, if all trackers are selected, then the overall RMS
curve is shown, and the overall scene error value (from the summary panel) is
shown. This preference can be used for very large scenes if the error curves are
taking too long to compute.
The error curve and numeric value are colored green, yellow, or red
according to color thresholds set in the User Interface section of the preferences.

Note: in the error curve mini-view, errors are not weighted according
to the tracker weights on the tracker panel, or the transition weighting
from the solver panel, so that you can see the actual errors.

If a single unsolved tracker is selected, its tracking figure of merit (FOM)
curve is shown in blue, which is helpful during supervised tracking.
If the error curve view has nothing to display, it is not shown.
A dashed vertical line shows the current frame.
Left Mouse: Click and optionally drag to go to a different current frame within the
playback frame range (small green/red triangles at top of timebar).
Right Mouse: Click to cancel an ongoing left-drag.
Right Double-click: Resets the playback range to the entire shot.

Constrained Points Viewport


Left Mouse: Click to select a tracker. Shift-drag to add trackers to the selection
set. Control-click to invert a tracker's selection status. When you select a
tracker, it will flash in the camera and 3-D views. Clicking towards the
right, over a linked tracker, will flash that tracker instead. Drag in the gutter
area between columns (ie immediately left of the textual column headers)
to change the column widths. Change the sort order by clicking on "Name"
or "Error" on the header line.
Middle Mouse: Vertical pan.
Middle Scroll: Scroll the constrained point listing vertically.
Right Mouse: Cancel an ongoing left or middle-mouse operation.

Graph Editor Reference

The graph editor can be launched from the Graphs room on the main
toolbar, the Window/Graph Editor menu item, or the F7 key. It can also appear as
a viewport in a layout. The graph editor contains many buttons; they have
extensive tooltips to help you identify the function and features.
All graph editors share a single clipboard for key data, so you can move
keys from one channel to another, object to another, or one editor to another.
The clipboard can be modified from Sizzle scripts to achieve special effects.
The graph editor has two major modes, graphs and tracks, as these
examples show:
Tracks Mode:

Tracker 7 is unlocked and selected in the main user interface, and a
selection of keys from trackers 6, 7, and 9 are selected in the graph editor. While
the other trackers are automatic, #1,2,3 and 7 are now supervised and track in
the forward direction (note the directionality in the key markers). The current
frame # is off to the left, before frame 35.

Graphs Mode:

The capture shows a graph display of Camera01. The red, green, and
blue traces are solved camera X,Y, and Z velocities, though you would have to
expose the solved velocity node if you did not know. The magenta trace with key
marks every frame is a field-of-view curve from a zoom shot. The time area is in
scroll mode, the graph shows frames 62 to 143, and we are on frame 117.

Hint. This panel does a lot of different stuff. If you only read this, you
will probably not understand exactly what everything does or why.
We could go on and on trying to describe everything exactly,
to no purpose. Keep alert for what SynthEyes can do, and give it a try
inside SynthEyes; you will understand a lot better.

Shared Features in All Modes


Main Buttons
Buttons are shown below in their unselected state; they have a green rim
when they are selected.

Tracks Mode. Switch the graph editor to the tracks mode.


Graphs Mode. Switch to graphs (curves) mode.

Alpha, Error, Time, Lifetime Sort. Sort trackers in a modified
alphabetical order, by the error after solving, by time, or by tracker lifetime.
The button sequences through these four modes in order; right-click to go
in reverse order.
Selected Only. . When on, only the selected trackers appear in the Active
Trackers node of the graph editor; it changes to read Selected
Trackers instead.
Reset Time. . The time slider is adjusted to display the whole shot within the
visible width of the graph editor.
Toolbar Display Mode. . Clicking this button will show or hide the toolbar,
leaving only the time slider area shown at the bottom. Right-clicking this
button will close both the tool and time areas; a minimal view for when
the graph editor is embedded as a viewport, instead of floating. Click in
the small gutter area at bottom to re-display the time and tools, or right-
click at bottom to re-display only the time area.

Show Status Background. . When on, a colored background is shown that


indicates whether the number of trackers visible on that frame is
satisfactory. The count is different for translating cameras, tripod shots,
and within hold regions. The safe count configured on the preferences
panel is taken into account; above that, the background is white/gray.
Below the safe count, it turns a shade of green. With fewer trackers, it turns
yellowish at marginal levels, or reddish for unacceptable tracker counts.
See also the #Normal and #Far data channels of the Active Trackers
node. Can reduce performance when there are many frames and trackers.
Tip: you can get this same colored background display on the main
timebar, by turning on the View/Timebar background/Tracker count menu
item.

Squish Tracks. . [Only in tracks mode.] When on, all the tracks are
squished vertically to fit into the visible canvas area. This is a great way to
quickly see an overview. The display has three states: off, with keys, and
without keys. Clicking the button sequences among the three modes;
right-clicking sequences in the reverse direction.
Draw Selected. . [Only in graphs mode.] When on (normally), the curves of
all selected nodes or open nodes are drawn. When off, only open nodes
are drawn.
Time Slider
The graph editor time slider has two modes, controlled by the selector icon
at left in the images below.

Slider mode. The slider locks up with the


canvas area above it, showing only the
displayed range of times.

Scroll mode. The slider area always shows


the entire length of the shot. The dark gray box
(scroll knob) shows the portion displayed in the
canvas.

In the time slider mode:


left-click or drag to change the current time.
Middle-drag to pan the canvas, or
right-drag to zoom/pan the time axis (same as in the canvas area
and main SynthEyes time bar).
In the time scroll mode:
left-drag inside the gray scroll knob to drag the region being
displayed (panning the canvas opposite from usual),
left-drag the blue current-time marker to change the time,
left-click outside the knob to page left or right,
double-click to center the knob at a specific frame,
middle-drag to pan the canvas in the usual way, or
right-click to expand to display the entire shot.
Left Hierarchy Scroll
This is the scroll bar along the left edge of the graph editor in both graph
and tracks modes. In the hierarchy scroll:
left-drag inside the knob to move it and pan the hierarchy vertically,
left-click outside the knob to page up or down,
right-click to HOME to the top, or
double-click to center on that location.
The interior of the entire height of the scroll bar shows where nodes are
selected or open, even though they are not currently displayed. You can rapidly
see any of those open nodes by clicking at that spot on the scroll bar.
Hierarchy/Canvas Gutters
A small gutter area between the hierarchy and canvas area lets you
expand the hierarchy area to show longer tracker names, or even to compress it
down so that it can not be seen at all to save space if the graph editor is
embedded in a complex layout.
Note that the gutter can not be seen directly; it starts at the right edge of
the white border behind selected hierarchy nodes, and the cursor will change
shape to a left/right drag cursor.

There is a second gutter between the channel name/icon area of the


graph-mode hierarchy and the Number Zone, which displays the numeric value
of the channels (when enabled from the right-click menu). This gutter is just to
the left of the colons separating the areas. The cursor will change when you are
over the gutter; you can adjust the two gutters to adjust the widths of each area.
Double-clicking a displayed value in the number zone will bring up a dialog
to enable you to change the value.
Tracks Mode
Hierarchy Area
The middle-mouse scroll wheel scrolls the hierarchy area vertically.
Disclosure Triangle. . Click to expose or hide the node/nodes/tracks under
this node.
Visibility. . Show or do not show the node (tracker or mesh) in the viewports.
Color. . Has the following modes for trackers; only the last applies to other
node types:
shift-click to add trackers with this color to the selection set,
control-click on the color square of an unselected tracker to select
all trackers of this color,
control-click on the color square of a selected tracker to unselect all
trackers of this color, or
double-click to set the color of the node (tracker or mesh).
Lock. . Lock or unlock the tracker.
Enable. . Tracker or spline enable or disable.
Node/Channel Name. Selected nodes have a white background. Only some
types of nodes can be selected, corresponding to what can be selected in
SynthEyes's viewports. The channel name is underlined when it is keyed on the
current frame.
In the following list, keep in mind that only one of most objects can be selected at
a time; only trackers can be multi-selected.
click or drag to select one node (updating all the other views),
control-click or drag to toggle the selection,
control-shift-drag to clear a range of selections,
shift-click to select an additional tracker,
shift-click an already-selected tracker to select the range of trackers
from this one to the nearest selected one, or
double-click to change the name of a node (if allowed).
Include in Composite. . When on (as shown), keys on this track are included
in the composite track of its parent (and possibly in the grandparent, great-
grandparent, etc). The off key of an enable track is never included on a
composite track.
Mouse Modes
The mouse mode buttons at the bottom center control what the mouse buttons
do in the canvas area. Common operations shared by all modes:
Middle-mouse pan,
Middle-scroll to change the current frame and pan if needed.
Shift-middle-scroll to zoom the time axis
Right-drag to zoom or pan the time axis (like the main timebar)
Right-click to bring up the track mode's canvas menu.

Select Keys. . The shared operations plus:


Left-click a key to select it,
Left-drag a box to select all the keys in the box,
Shift-left-click or drag to add to the selected key set,
Control-left-click or drag to remove from the selected key set.

Re-time Keys. . The shared operations plus:


Left-click a key to select it,
Left-drag a box to select all the keys in the box,
Left-drag selected keys to re-time them (shift them in time),
Control-left-drag to clone the selected keys and drag them to a new
frame,
Alt-left-drag to include keys on all tracks sharing keys.
Double-click keys to bring up the Set Key Values dialog.

Add Keys. . The shared operations plus:


Left-click a key to select it,
Left-click a location where there is no key to add one.
Left-drag a box to add keys at all possible key locations within the
box. The value will be determined by interpolating the existing curve at the
time the key is added.
Shift-left-click to add to the selected key set,
Double-click keys to bring up the Set Key Values dialog.

Delete Keys. . The shared operations plus:


Left-click a key to delete it,
Left-drag a region; all keys inside it that can be deleted will be
deleted.

Squish Mode. This mode activates automatically when you select Squish mode

with the keys not shown (see Shared Features, above). With no keys
shown, the key manipulation modes do not make sense. Instead the following
mode, modified from the hierarchy's name area, is in effect:
click or drag to select and flash one node,
control-click or drag to toggle the selection,
control-shift-drag to clear a range of selections,
shift-click to select an additional tracker,
shift-click an already-selected tracker to select the range of trackers
from this one to the nearest selected one.
Hierarchy Menu (Tracks mode)
This menu appears when you right-click in the hierarchy area. Note that
some menu items pay attention to the mouse location when you right-click.
Home. Scrolls the hierarchy up to the top.
End. Scrolls the hierarchy to the end.
Close except this. Closes all the other nodes except the right-clicked one.
Close all. Closes all nodes except the top-level Scene.
Expose recursive. Exposes the clicked-on node, and all its children.
Close recursive. Closes the clicked-on node, and all its children.
Expose selected. Exposes all selected nodes.
Close selected. Closes all selected nodes.
Copy Selected Keys. Copy the selected keys onto the shared graph-editor
clipboard.
Cut Selected Keys. Copy the selected keys onto the shared graph-editor
clipboard, then delete them.
Paste Keys into this. Paste keys from the shared graph-editor clipboard into the
node or channel you right-clicked on. Keys may be moved between
channels of the same underlying type: for example, Y position keys can be
moved to the X position channel, but not to the Pan rotation angle. It is at
least slightly clever about figuring out what you are trying to do when moving
different kinds of keys into different places; try first, ask questions later.
The menu item will be grayed out if the transfer can not be made. Note
that pasting into a locked tracker will not have any effect.
Delete clicked. Deletes the node you right-clicked on. Note: the delete key (on
the keyboard) deletes keys, not nodes, in both the canvas and hierarchy
areas.
View Controls. The following items appear in the View Controls submenu,
abbreviated as v.c. Note that most have equivalent buttons, but these are
useful when the buttons are hidden.
v.c./To Graph Mode. Change the graph editor to graphs mode.
v.c./Sort Alphabetic. Sort trackers alphabetically (modified).
v.c./Sort By Error. Sort trackers by average error.

v.c./Sort By Time. Sort trackers by their start and end times (or end and start
times, if the playback direction is set to backwards).
v.c./Sort By Lifetime. Sort trackers by their lifetime, from shortest-lived to
longest-lived.
v.c./List only selected trackers. List only the selected trackers; the Active
Trackers node changes to Selected Trackers.
v.c./Lock time to main. The time bar is made to synchronize with the main
timebar, for when the graph editor is embedded in a viewport. Not
recommended, likely to be substantially changed in the future.
v.c./Colorful background. Show the colorful background indicating whether or
not enough trackers are present.
v.c./Remove menu ghosts. Some OpenGL cards do not redraw correctly after a
pop-up menu has appeared; this control forces a delayed redraw to
remove the ghost. On by default and harmless, but this lets you disable it.
This setting is shared throughout SynthEyes and saved as a preference.
Canvas Menu (Tracks mode)
The canvas menu is obtained by right-clicking (without a drag) within the
canvas area. Many of the functions have icons in the main user interface, but the
menu can be handy when the toolbars are closed, and it also allows keyboard
commands to be set up. There are two submenus, Mode (abbreviated m.) and
View Controls (abbreviated v.c.).
m./Select. Go to select-keys mouse mode.
m./Time. Go to re-time keys mouse mode.
m./Add Keys. Go to add keys mouse mode.
m./Delete Keys. Go to delete keys mouse mode.
m./To Graph Mode. Change to graph mode.
Reset time axis. Reset the time axis so the entire length of the shot is shown.
Squish vertically. Squish the tracks vertically so they all can be seen. The keys
will still be shown and can be selected, though if there are many tracks this may
be hard.
Squish with no keys. Squish the tracks vertically, and do not show the keys.
Use the simplified hierarchy-type mouse mode to select trackers.
Squish off. Turn off squish mode.
Copy Selected Keys. Copy the selected keys onto the shared graph-editor
clipboard. Note: you can only paste in the hierarchy area, because that is
where you can and must specify what node/channel the keys should be
pasted into.
Cut Selected Keys. Copy the selected keys onto the shared graph-editor
clipboard, then delete them.
Delete Selected Keys(all). Deletes selected keys outright, including in shared-
key channel groups. Deleting a camera X key will delete keys on Y and Z also.
See the graph editor right-click menu for different versions.
Delete Selected Trackers. Deletes selected trackers.
Approximate Keys. Replaces the selected keys with a smaller number of keys that
approximate the original curve.

Exactify trackers. Replaces selected tracker position keys with new values
based on the solved 3-D position of the tracker, the same as the Exact button on
the Tracker Panel.
v.c./Lock time to main. The time bar is made to synchronize with the main
timebar, for when the graph editor is embedded in a viewport. Not
recommended, likely to be substantially changed in the future.
v.c./Colorful background. Show the colorful background indicating whether or
not enough trackers are present.
v.c./Remove menu ghosts. Some OpenGL cards do not redraw correctly after a
pop-up menu has appeared; this control forces a delayed redraw to
remove the ghost. On by default and harmless, but this lets you disable it.
This setting is shared throughout SynthEyes and saved as a preference.
Graphs Mode
Hierarchy Area
Disclosure Triangle. . Click to expose or hide the node/nodes/tracks under
this node.
Visibility. . Show or do not show the node (tracker or mesh) in the viewports.
Color (node). . Has the following modes for trackers; only the last applies to
other node types:
shift-click to add trackers with this color to the selection set,
control-click on the color square of an unselected tracker to select
all trackers of this color,
control-click on the color square of a selected tracker to unselect all
trackers of this color, or
double-click to set the color of the node (tracker or mesh).
Lock. . Lock or unlock the tracker.
Enable. . Tracker or spline enable or disable.
Node/Channel Name. Selected nodes have a white background. Only some
types of nodes can be selected, corresponding to what can be selected in
SynthEyes's viewports. The channel name is underlined when it is keyed on the
current frame.
In the following list, keep in mind that only one of most objects can be selected
at a time; only trackers can be multi-selected.
click or drag to select one node (updating all the other views),
control-click or drag to toggle the selection,
control-shift-drag to clear a range of selections,
shift-click to select an additional tracker,
shift-click an already-selected tracker to select the range of trackers
from this one to the nearest selected one, or
double-click to change the name of a node (if allowed).

Show Channel(s). . When on (as shown), the channel's graph is drawn in the
canvas area. On a node, controls all the channels of the node, and the control
may have the on state shown, a partially-shown state (fainter with no middle dot),
or may be off (hollow, no green or dot).
Zoom Channel. . Controls the vertical zoom of this channel, and all others of
the same type: they are always zoomed the same to keep the values
comparable.
Left-click to see all related channels (their zoom icons will light up)
and see the zero level of the channel in the canvas area, and see the
range of values displayed on the status line.
Left-drag to change the scale. It will change the offset to keep the
data visible; hold the ALT key to keep the data visible over the entire
length of the shot.
Right-click to reset the zoom and offsets to their initial values.
Double-click to auto-zoom each channel in the same group so that
they have the same scale and same offsets. Compare to double-clicking
the pan icon.
Shift-double-click auto-zooms all displayed channels, not just this
group.
Alt-double-click auto-zooms over the entire length of the shot, not
just the currently-displayed portion. Can be combined with shift.
Pan Channel. . Pans all channels of this type vertically.
Left-click to see the zero level of the channel in the canvas, and to
show the minimum/maximum values displayed on the status line.
Left-drag to pan the channels vertically.
Right-click to reset the offset to zero.
Double-click to auto-zoom each channel in the same group so that
they have the same scale but different offsets. Compare to double-
clicking the zoom icon.
Shift-double-click auto-zooms all displayed channels, not just this
group.
Alt-double-click auto-zooms over the entire length of the shot, not
just the currently-displayed portion. Can be combined with shift.
Color (channel). . Controls the color of this channel, as drawn in the canvas:
double-click to change the color for this exact node and channel
only, for example, only for Tracker23,
shift-double-click to change the preference for all channels of this
type, or
right-click to change the color back to its preference setting.

Mouse Modes
The mouse mode buttons at the bottom center control what the mouse buttons
do in the canvas area. Common operations shared by all modes:
Middle-mouse pan,
Middle-scroll to change the current frame and pan if needed.
Shift-middle-scroll to zoom the time axis
Right-drag to zoom or pan the time axis (like the main timebar)
Right-click to bring up the canvas menu.
Double-left-click a number in the Number Zone to bring up the
dialog to change the value, if it is changeable.

Select Keys. . The shared operations at top plus:


Left-click a key to select it,
Left-drag a box to select all the keys in the box,
Shift-left-click or drag to add to the selected key set,
Control-left-click or drag to remove from the selected key set.

Set Value. . The shared operations at top plus:


Left-click a key to select it,
Left-drag a box (starting in empty space) to select all the keys in the
box,
Shift-left-click or drag to add to the selected key set,
Control-left-click or drag to remove from the selected key set.
Left-drag a key or selected keys vertically to change their values.
Double-click a key or selected keys to bring up the Set Key Values
dialog and set or offset their values numerically.

Re-time Keys. . The shared operations at top plus:


Left-click a key to select it,
Left-drag a box to select all the keys in the box,
Left-drag selected keys to re-time them (shift them in time),
Control-left-drag to clone the selected keys and drag them to a new
frame,
Alt-left-drag to include keys on all tracks sharing keys with the
selected ones.
Double-click keys to bring up the Set Key Values dialog.

Add Keys. . The shared operations at top plus:


Left-click a key to select it,
Shift-left-click on a key to add to the selected key set.
Control-left-click on a key to remove it from the selected key set.

Left-click on a curve to add a key at that location.


Left-drag a box in empty-space to add keys at all possible key
locations within the box. The value will be determined by interpolating the
existing curve at the time the key is added.
Double-click keys to bring up the Set Key Values dialog.

Delete Keys. . The shared operations at top plus:


Left-click a key to delete it,
Left-drag a region; all keys inside it that can be deleted will be
deleted.

Deglitch. . The shared operations at top plus:


Left-click a curve or key to fix a glitch by averaging, or by truncating
if it is the beginning or end of the curve. Warning: do not try to deglitch the
first frame of a velocity curveit is the second frame of the actual data.
Turn on the position curve instead.
Control-left-drag to isolate on the curve under the mouse cursor.
(Temporarily enters isolate mode.)

Isolate. . Intended to be used when all trackers are selected and displayed.
The shared operations at top plus:
Left-click or -drag on a curve or key to isolate only that tracker, by
selecting it and unselecting all the others. Keep the left mouse button
down and roam around to quickly look at different tracker curves.
Right-clicking the isolate button at any time selects all the
trackers, even if isolate mode is not active.

Zoom. . The shared operations at top (except as noted) plus:


Left-drag an area then release; then channel zooms and offsets are
changed to display only the dragged region. Simulates zooming the
canvas, but it is the zoom and pan of the individual channels that is
changing.
Right-clicking the zoom button resets the pans and zooms, even
if the zoom button is not active.
Hierarchy Menu (Graph mode)
This menu appears when you right-click in the hierarchy area. Note that
some menu items pay attention to the mouse location when you right-click.
Home. Scrolls the hierarchy up to the top.
End. Scrolls the hierarchy to the end.
Hide these curves. Turns off the display of all data channels of the node that
was right-clicked.

Close except this. Closes all the other nodes except the right-clicked one.
Close all. Closes all nodes except the top-level Scene.
Expose recursive. Exposes the clicked-on node, and all its children.
Close recursive. Closes the clicked-on node, and all its children.
Expose selected. Exposes all selected nodes.
Close selected. Closes all selected nodes.
Copy Selected Keys. Copy the selected keys onto the shared graph-editor
clipboard.
Cut Selected Keys. Copy the selected keys onto the shared graph-editor
clipboard, then delete them.
Paste Keys into this. Paste keys from the shared graph-editor clipboard into the
node or channel you right-clicked on. Keys may be moved between
channels of the same underlying type: for example, Y position keys can be
moved to the X position channel, but not to the Pan rotation angle. It is at
least slightly clever about figuring out what you are trying to do when moving
different kinds of keys into different places; try first, ask questions later.
The menu item will be grayed out if the transfer can not be made. Note
that pasting into a locked tracker will not have any effect.
Delete clicked. Deletes the node you right-clicked on. Note: the delete key (on
the keyboard) deletes keys, not nodes, in both the canvas and hierarchy
areas.
View Controls. The following items appear in the View Controls submenu,
abbreviated as v.c. Note that most have equivalent buttons, but these are
useful when the buttons are hidden.
v.c./To Tracks Mode. Change the graph editor to tracks mode.
v.c./Show Number Zone. Turns on and off the display of the numeric values of
channel data.
v.c./Sort Alphabetic. Sort trackers alphabetically (modified).
v.c./Sort By Error. Sort trackers by average error.
v.c./Sort By Time. Sort trackers by their start and end times (or end and start
times, if the playback direction is set to backwards).
v.c./Sort By Lifetime. Sort trackers by their lifetime, from shortest-lived to
longest-lived.
v.c./List only selected trackers. List only the selected trackers; the Active
Trackers node changes to Selected Trackers.
v.c./Draw all selected nodes. Controls whether or not selected nodes are
drawn, equivalent to the button on the user interface.
v.c./Snap channels to grid. Controls whether or not channels being panned
have their origin (zero value) snapped onto one of the horizontal grid lines.
v.c./Lock time to main. The time bar is made to synchronize with the main
timebar, for when the graph editor is embedded in a viewport. Not
recommended, likely to be substantially changed in the future.
v.c./Colorful background. Show the colorful background indicating whether or
not enough trackers are present.
v.c./Remove menu ghosts. Some OpenGL cards do not redraw correctly after a
pop-up menu has appeared, this control forces a delayed redraw to

remove the ghost. On by default and harmless, but this lets you disable it.
This setting is shared throughout SynthEyes and saved as a preference.
Canvas Menu (Graph mode)
The canvas menu is obtained by right-clicking (without a drag) within the
canvas area. Many of the functions have icons in the main user interface, but the
menu can be handy when the toolbars are closed, and it also allows keyboard
commands to be set up. There are two submenus, Mode (abbreviated m.) and
View Controls (abbreviated v.c.).
m./Select. Go to select-keys mouse mode.
m./Value. Go to set-value mouse mode.
m./Time. Go to re-time keys mouse mode.
m./Add Keys. Go to add keys mouse mode.
m./Delete Keys. Go to delete keys mouse mode.
m./Deglitch. Go to deglitch mouse mode.
m./Isolate On. Go to isolate mouse mode.
m./Zoom. Go to zoom mouse mode.
m./To Tracks Mode. Change to tracks mode.
Reset time axis. Reset the time axis so the entire length of the shot is shown.
Reset all channel zooms. Resets all channels to their nominal unzoomed
range.
Set to Linear Key. Sets all selected keys to be linear (corners).
Set to Smooth Key. Sets all selected keys to be smooth (spline).
Copy Selected Keys. Copy the selected keys onto the shared graph-editor
clipboard. Note: you can only paste in the hierarchy area, because that is
where you can and must specify what node/channel the keys should be
pasted into.
Cut Selected Keys. Copy the selected keys onto the shared graph-editor
clipboard, then delete them.
Delete Selected Keys-only. Delete only the selected keys, which may require
replacing a value instead of deleting the key. For example, suppose you delete the
X key of a camera path on a frame where Y and Z also have keys. Since the channels
share key positions, X must still have a key there, so a new value is computed for X
(what X would be if there were no key there) and that computed value is used. See
the sketch following this menu for an illustration.
Delete Selected Keys-all. Deletes selected keys outright, including in shared-
key channel groups. Deleting a camera X key will delete keys on Y and Z also.
See the graph editor right-click menu for different versions.
Delete Selected Trackers. Deletes selected trackers.
Approximate Keys. Replaces the selected keys with a smaller number that
approximate the original curve.
Exactify trackers. Replaces selected tracker position keys with new values
based on the solved 3-D position of the tracker; same as the Exact button on
the Tracker Panel.
v.c./Draw all selected nodes. Controls whether or not selected nodes are
drawn, equivalent to the button on the user interface.

v.c./Lock time to main. The time bar is made to synchronize with the main
timebar, for when the graph editor is embedded in a viewport. Not
recommended, likely to be substantially changed in the future.
v.c./Snap channels to grid. Controls whether or not channels being panned
have their origin (zero value) snapped onto one of the horizontal grid lines.
v.c./Colorful background. Show the colorful background indicating whether or
not enough trackers are present.
v.c./Remove menu ghosts. Some OpenGL cards do not redraw correctly after a
pop-up menu has appeared, this control forces a delayed redraw to
remove the ghost. On by default and harmless, but this lets you disable it.
This setting is shared throughout SynthEyes and saved as a preference.
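
To make the Delete Selected Keys-only behavior described above concrete, here is
an illustrative sketch (generic Python with made-up names, not SynthEyes internals):
"deleting" a key from one channel of a shared-key group replaces its value with what
the curve would be without it, since the shared key position must remain.

    # Sketch of "Delete Selected Keys-only" for one channel of a shared-key group.
    # Keys are a {frame: value} dict; the frame must keep a key (the other channels
    # have one there), so its value is recomputed as if the key were absent.
    # Linear interpolation stands in for the real curve evaluation.
    def delete_key_value_only(keys, frame):
        others = sorted(f for f in keys if f != frame)
        before = max((f for f in others if f < frame), default=None)
        after = min((f for f in others if f > frame), default=None)
        if before is None or after is None:
            return keys                      # endpoint: nothing sensible to interpolate
        t = (frame - before) / (after - before)
        keys[frame] = keys[before] + t * (keys[after] - keys[before])
        return keys

    x_keys = {0: 0.0, 10: 5.0, 20: 0.0}
    print(delete_key_value_only(x_keys, 10))   # {0: 0.0, 10: 0.0, 20: 0.0}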

Set Key Values Dialog

Activated by double-clicking a key from the graph or tracks views, or the
number itself in the Number Zone, to change one or more keys to new values,
specified numerically.
If multiple keys are selected when the dialog is activated, the values can
all be set to the same value, or they can all be offset by the same amount, as
selected by the radio buttons at the bottom of the panel.
The value is controlled by the spinner, but also by up and down buttons for
each digit. You can add 0.1 to the value by clicking the + button immediately to
the right and below the decimal point. The buttons add or subtract from the
overall value, not from only a specific digit.
Right-clicking an up or down button clears that digit and all lower digits to
zero, rounding the overall value.
The values update into the rest of the scene as you adjust them. When
you are finished, click OK, or click Cancel to cancel the change.
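
As a small worked example of the digit buttons described above (an illustrative
sketch with invented function names, not SynthEyes code), a digit's + or - button
steps the whole value by that digit's place, while right-clicking truncates that
digit and everything below it:

    # Illustrative sketch of the digit-button arithmetic described above.
    def bump(value, place, up=True):
        # Step the overall value by one unit of the given decimal place;
        # place=-1 is the tenths button, so the step is 0.1.
        step = 10.0 ** place
        return value + step if up else value - step

    def clear_at_and_below(value, place):
        # Clear the digit at 'place' and all lower digits to zero
        # (truncation toward zero; an assumed reading of "rounding").
        scale = 10.0 ** (place + 1)
        return int(value / scale) * scale

    print(bump(3.14159, -1))                 # about 3.24159 -- the whole value changes
    print(clear_at_and_below(3.14159, -2))   # about 3.1 -- hundredths and below cleared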

Approximate Keys Dialog


This dialog is launched by right-clicking in the canvas area of the graph
editor, when it is in graphs mode, then selecting the Approximate Keys menu
item.

Approximate Keys does what the name suggests, examining the collection
of selected keys, and replacing them with a smaller number that produces a
curve approximating the original. This feature is typically used on camera or
moving object paths, and zooming field of view curves.

Fine Print: SynthEyes approximates all keys between the first-selected
and the last-selected, including any in the middle even if they
are not selected. All channels in the shared-key channel group will be
approximated: if you have selected keys on the X channel of the
camera, the Y and Z channels and rotation angles will all be
approximated because they all share key positions.

You can select the maximum number of keys permitted in the
approximated curve, and the desired error. SynthEyes will keep adding keys until
it reaches the allowed number, or the error becomes less than specified,
whichever comes first.
The error value is per mil (‰), meaning a part in a thousand of the
nominal range for the value, as displayed in the SynthEyes status line when you
left-click the zoom control for a channel. For example, the nominal range of field
of view is 0 to 90, so 1 per mil is 0.09 degrees. In practice the exact value should
rarely matter much.
At the bottom of the display, the error and number of keys will be listed.
You can dynamically change the number of keys and error values, and watch the
curves in the viewport and the approximation report to decide how to set the
approximation controls.
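
The stopping rule above can be sketched as a simple greedy loop. This is
illustrative Python only, with linear interpolation standing in for the real spline
evaluation; the function name, the greedy placement, and the nominal_range
parameter are assumptions, not SynthEyes' actual algorithm or API.

    # Keep adding keys at the worst-error frame until the key budget is used
    # or the worst error drops below the per-mil tolerance, whichever comes first.
    def approximate(samples, max_keys, error_permil, nominal_range=90.0):
        # samples: ordered list of (frame, value); returns the frames chosen as keys.
        tolerance = (error_permil / 1000.0) * nominal_range   # 1 per mil of 0..90 deg = 0.09
        values = dict(samples)
        keys = [samples[0][0], samples[-1][0]]                # endpoints are always kept
        def interp(f):
            lo = max(k for k in keys if k <= f)
            hi = min(k for k in keys if k >= f)
            if lo == hi:
                return values[lo]
            return values[lo] + (values[hi] - values[lo]) * (f - lo) / (hi - lo)
        while len(keys) < max_keys:
            frame, err = max(((f, abs(v - interp(f))) for f, v in samples),
                             key=lambda fe: fe[1])
            if err < tolerance:        # error target reached first
                break
            keys.append(frame)         # otherwise spend another key
        return sorted(keys)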

Perspective Window Reference
The perspective window defines quite a few different mouse modes, which
are selected by right-clicking in the perspective window. The menu modes and
mouse modes are described below.
The perspective window has four entries in the viewport layout manager:
Perspective, Perspective B, Perspective C, and Perspective D. The status of
each of these flavors is maintained separately, so that you can put a perspective
window in several different viewport configurations and have it maintain its view,
and you can have up to four different versions, each preserving its own view.
There is a basic mouse handler (Navigate) operating all the time in the
perspective window. You can always left-drag a handle of a mesh object to move
it, or control-left-drag it to rotate around that handle. If you left-click a tracker, you
can select it, shift-select to add it to the selected trackers, add it to a ray for a
light, or ALT-click it to set it as the target of a selected tracker. While you are
dragging as part of a mouse operation, you can right-click to cancel it.
The middle mouse button navigates in 3-D. Middle-drag pans the camera,
ALT-middle-drag orbits, Control-ALT dollies it in or out. (Use command for ALT
on the Mac.) Control-middle makes the camera look around in different directions
(tripod-style pan and tilt). Doing any of the above with the shift key down slows
the motion for increased accuracy. The camera will orbit around selected vertices
or an object, if available. The text area of the perspective window shows the
navigation mode continuously.

Tip: You can turn on the Maya-style navigation preference; when
enabled, ALT-left will Orbit, ALT-middle will Pan, ALT-right will Dolly.
On Macs, you can use either the Maya-equivalent Opt key, or use the
command key. These are available at all times within the perspective
view, independent of the mode, and you can switch between them
while dragging. On Linux, you may have to adjust your modifier key so
that ALT is not taken by your window manager, as you do for Maya
itself.

Tip: For increased Maya compatibility, you can turn on the preference
"In Maya-style mode, tumble not orbit" though we believe that lowers
typical productivity.

The middle-mouse scroll wheel moves forward and back through time if
the view is locked to the camera (may result in GeoH tracking), zooms in and out
in 2D if the view is already zoomed or panned in 2D, and dollies in and out in 3D
when the camera is not locked.
The N key will switch to Navigate mode from any other mode.
If you hold down the Z key or apostrophe/double-quote when you click the
left mouse button in any mode, the perspective window will switch temporarily to

the Navigate mode, allowing you to use the left button to navigate. The original
mode will be restored when you release the mouse button.
Right-click Menu Items
No Change. Does nothing, makes it easier to take a quick look at the menu.
Frame on Selected. If the camera is unlocked, it swings around and moves in or
out to bring selected object(s) to occupy most of the frame. If the camera
is locked, the view is zoomed and panned to bring the selected object(s)
full frame, without moving the camera or changing its field of view.
View. Submenu, see details below.
Toolbars. Controls for opening or closing the perspective-view toolbar overlays.
Toolbars/Save as Defaults. Saves the current visibility settings and positions as
the defaults next time SynthEyes is opened.
Navigate. When this mode is selected, the mouse navigation actions are
activated by the left mouse button, not just the middle mouse. Keyboard:
N key.
Other Modes. Submenu of other left-mouse modes (other than Navigate), see
below for details.
Set as Edit Mesh. Open the currently-selected mesh for editing, exposing its
vertices. If no object is selected, any edit mesh is closed. Keyboard: M
key.
Clear Edit Mesh. The perspective-window state is changed so that there is no
edit mesh. Note that any existing edit mesh is unchanged: it is not deleted;
the only effect is that it will no longer be the edit mesh.
Create Mesh Object. Sets the mouse mode to create a mesh object on the
current grid. The type of object created is controlled by the 3-D control
panel, as it is for the other viewports. Similar to, but separate from, the
create button on the 3-D panel, if it is open.
Creation Object. Submenu selecting the object to be created.
Mesh Operations. Submenu for mesh operations. See below.
Texturing. Submenu for texture mapping. See below.
Linking. Submenu for tracker/mesh linking. See below.
Grid. Submenu for the grid. See below.
Preview Movie. Renders the perspective view for the entire frame range to
create a movie for playback. See the preview control panel reference
below.
Lock to Current Camera. The perspective window becomes locked to look
through the camera selected as the Active Tracker Host on the main
toolbar. The camera's imagery appears as the background for the
perspective view. You can no longer move the perspective view around. If
already locked, the camera is unlocked: the background disappears, the
camera is made upright (roll=0), and the view can be changed. Keyboard:
L key.
Stay Locked to Host. When checked, the perspective view will continue to be
locked to the Active Tracker Host, even when the Active Tracker Host

changes. When off, you can lock the perspective view to a particular
camera or object and it will stay locked there.
Camera01, Object01, Camera02, etc. All cameras and objects are listed. Select
one of these items to set the perspective view's camera specifically, ie as
if you'd done a Lock to Current Camera when that camera/object was the
Active Tracker Host. Doing so clears Stay Locked to Host. Note that
these items continue to show the camera/object for the perspective view,
even while the view is not locked, showing the object the view becomes
locked to when you re-lock the view. When Stay Locked to Host is on and
the view is not locked, no camera/object will be checked, because it will be
determined by the Active Tracker Host at the time that it becomes locked.
View Submenu
Local coordinate handles. The handles on cameras, objects, or meshes can be
oriented along either the global coordinate axes, or the axes of the item
itself; this menu check item controls which is displayed.
Path-relative handles. The handles are positioned using the camera path: slide
the camera along the path, inwards with respect to curvature, or
upwards from the curvature. This option applies only for cameras and
objects.
Stereo Display. If in a stereo shot, selects a stereo display from both cameras.
See Perspective View Settings to configure.
Treat wireframes as solid. Affects hit-testing of the mouse on wireframe
meshes. When off, the mouse must be over one of the wires to hit;
when on, the mouse can be anywhere inside a facet (triangle), as if
the mesh was being drawn solid. Controls the preference of the same
name as well.
Affect whole path. Moves a camera or object and its trackers simultaneously.
See 3-D Control Panel.
Whole affects meshes. Controls whether or not the Whole button affects
meshes as it moves a scene. Keep it on if you have already placed the
meshes; turn it off if you are moving the scene relative to the meshes to align
it.
Reset 2D zoom. Any 2-D zoom into the perspective viewport image is removed,
so that the entire field of view and image are visible.
Reset FOV. Reset the field of view to 45 degrees.
Lock position only. When on, modifies the normal Lock mode so that the
perspective view follows only the position of the camera, not the
orientation and field of view as it normally would. This permits the
perspective view to be used as a 360 VR viewer, in conjunction with the
Create Spherical Screen script. Don't leave this on for general use, as it
may adversely affect operations that expect the regular lock.
Perspective View Settings. Brings up the Scene Settings dialog, which has
many sizing controls for the perspective view: clip planes, tracker size, etc.
Show selection handles. The selection handles are shown in the viewport; you
can use this control to hide them to declutter when painting etc.

Show bones. The GeoH bone objects are shown in the viewport; you can use
this control to hide them to declutter when painting etc.
Isolate object layer. When a single GeoH object is selected and this mode is on,
the weight map of this object only is shown, rather than the composite
map normally shown.
Lock Selection. Prevents the selection from being changed when clicking in the
viewport, good for dense work areas.
Freeze on this frame. Locks this perspective view at the current frame; you can
use it to look at the scene from a certain view or frame while you work on
it on a different frame in other viewports. Handy for working with
reference shots. Keyboard commands A, s, d, F, ., , allow you to
quickly change the frozen frame (with the default keyboard map).
Unfreeze. Releases a freeze, so the perspective view tracks the main UI time.
Show Only Locked. When the perspective view window is locked to a particular
object (and image), only the trackers for that particular object will be
shown.
Show as Dots. Trackers are shown as fixed-size dots instead of 3-D triangle
markers. This reduces clutter at the expense of less ability to assess
depth.
Solid Meshes. Shows meshes as solids; otherwise, wire frames. This control is
independent of the main-menu setting, which is used for the camera view.
The solid mesh mode can be set separately for each perspective window.
Outline Meshes. When solid meshes are shown, outline meshes causes the
wire frame to be overlaid on top as well, making the triangulation visible.
Cartoon Wireframe Meshes. A special wireframe mode where only the outer
boundary and any internal creases are visible, intended for helping align
set and object models.
Lit wireframes. When on, wireframes are lit in the perspective view (for this
perspective view). When off, they are the flat solid color. A preference in
the Appearance area is used as a default for new perspective views.
Horizon Line. Shows the (infinitely far away) horizon line in the perspective
window. Sticky preference-type item.
Camera Frustum. Toggles the display of camera viewing frustums: the visible
area of the camera, which depends on field of view, aspect, and world size.
Show Object Paths Submenu:
Show no paths. Paths aren't shown for any cameras or moving objects.
Show all paths. Paths are shown for all cameras and moving objects.
Show selected object. The path is shown for the selected camera or moving
object, if any.
Show selected and children. The paths are shown for the selected camera or
moving object, plus its GeoH children.
View/Reload mesh. Reloads the selected (imported) mesh, if any, from its file on
disk. If the original file is no longer accessible, allows a new location to be
selected.

Other show controls in the View menu are described under the main
window's View menu.
Other Modes Submenu
Place on mesh. Slide a tracker's seed position, an extra helper point, or a mesh
around on the surface of a mesh, or place onto a tracker, seed position, or
extra point, or the vertex of a lidar mesh. Use to place seed points on
reference head meshes, for example, or a mesh onto a tracker. With the
control key pushed, the position snaps only onto vertices, not anywhere on
the mesh. Shift-click to select a different object to move, or use
control/command-D to unselect everything, then click the different object.
Field of View. Adjust the perspective view's field of view (zoom). Normally you
should drive forward to get a closer view.
Lasso Trackers. Lasso-select trackers. Shift-select trackers to add to the
selection, and control-select to unselect them.
Lasso Vertices. Lasso-select vertices of the current edit mesh. Or click directly
on the vertices. Shift-click/lasso to add to the current set, control-
click/lasso to remove. Pays attention to faces, treating the object as solid.
SynthEyes planes ("clipping planes") can be positioned to prevent any
vertices behind them from being selected, which is useful during triangulation.
Lasso Entire Meshes. Lasso-select entire meshes, which become selected or
not.
Add Card. Create a 3D plane by fitting to the trackers within the lasso region, or
using the existing trackers. See Creating the Card. Use control-drag to use the
pre-selected trackers, and control+shift to add to that set. (Note: this previously
used ALT/Command-drag etc, not control.) The bounding box of the swept
region defines the plane size/position in any case.
Add Vertices. Add vertices to the edit mesh, placing them on the current grid.
Use the shift key to move up or down normal to the grid. If control is down,
build a facet out of this vertex and the two previously added.
Move Vertices. Move the selected vertices around parallel to the current grid, or
if shift is down, perpendicular to it. Use control to slow the movement. If
clicking on a vertex, shift will add it to the selection set, control-shift will
remove it from the selection set.
Scrub Timebar. Scrub through the shot quickly by dragging in the perspective
view.
Zoom 2D. Zoom the perspective viewport in and out around the clicked-on point
in the view. Use control and shift to speed up or slow down the zooming.
Paint Alpha. Paint on the alpha channel of an extracted mesh texture to adjust
its coverage. Use the Paint toolbar overlay to control the paintbrush
parameters.
Paint Loop. Paint a filled loop on the alpha channel.
Pen Z Alpha. Click repeatedly to create a zig-zag-type non-splined painted alpha
path, for example to soften a straight edge in the extracted texture.
Pen S Alpha. Click repeatedly to create a smooth splined painted alpha path, for
example to soften a circular edge in the texture.

Add Stereo 2nd. Creates a matching stereo tracker in the perspective viewport
for the selected tracker (ie just created in the camera view). See
Supervised Setup in Camera+Perspective Views.
Lightsaber Deletion. Lasso selects in the edit mesh, or if there is none, the
single selected mesh, deleting facets and vertices completely through the
object (not just on the visible surface). This is handy for carving up
primitives into convenient shapes. Shift-click a different mesh to select that
mesh instead.
Mesh Operations Submenu
Assemble Mesh. (Mode) Use to quickly build triangular meshes from trackers.
As you click on each additional tracker, it is converted to a vertex and a
new triangle made, extended from the previous triangle (selected
vertices). Click a selected vertex to deselect or re-select it, to specifically
control which vertices will be used to build a triangle for the next converted
tracker. Hold down control as you click a selected vertex to deselect all, to
begin working in a different area. New vertices and triangles are added to
the current edit mesh; if there is none, one is created.
Convert to Mesh. Converts the selected trackers, or all of them, and adds them
to the edit mesh as vertices, with no facets. If there is no current edit
mesh, a new one is created.
Triangulate. Adds facets to the selected vertices of the edit mesh. Position the
view to observe the collection from above, not from the side, before
triangulating. Use the clipping plane feature of Lasso Vertices to quickly
isolate vertices to triangulate.
Punch in Trackers. The selected trackers must fall inside the edit mesh, as
seen from the camera. Each triangle containing a tracker is removed, then
the hole filled with new triangles that connect to the new tracker. Allows
higher-resolution trackers to be brought into an existing lower-resolution
tracker mesh.
Remove and Repair. The selected vertices are removed from the mesh, and the
resulting hole triangulated to paper it over without those vertices.
Subdivide Edges. The selected edges are bisected by new vertices, and
selected facets replaced with four new ones. Creates non-watertight
meshes without skinny triangles; this is most useful. A watertight version is
available through Sizzle and Synthia, though it can contain some skinny
triangles around the edges.
Subdivide Facets. Selected facets have a new vertex added at their center, and
each facet replaced with three new ones surrounding the new vertex.
Delete selected faces. Selected facets are deleted from the edit mesh. Vertices
are left in place for later deletion or so new facets can be added.
Delete unused vertices. Deletes any vertices of the edit mesh that are not part
of any facet.
Add Many Trackers. Brings up the "Add Many Trackers" dialog from the main
menu, for convenience.

GeoHTracking Submenu
Contains items for geometric hierarchy tracking. See the Geometric
Hierarchy Tracking manual for more information. Contains an Edit Pivots item,
though the one on the main menu bar is likely more useful; see the main Edit
menu description.
Texturing Submenu
Frozen Front Projection. The current frame is frozen to form a texture map for
every other frame in the shot. The object disappears in this frame; in other
frames you can see geometric distortion as the mesh (with this image
applied) is viewed from other directions.
Rolling Front Projection. The edit mesh will have the shot applied to it as a
texture, but the image applied will always be the current one.
Remove Front Projection. Texture-mapping front projection is removed from
the edit mesh.
Assign Texture Coordinates. Assigns UV texture coordinates using camera
mapping, then crops them to use the entire range.
Crop Texture Coords. Adjust the UV coordinates of the edit mesh so that they
use the entire 0..1 range. Use this after a camera map or heavy edit of a
mesh, to utilize more of the possible texture map's pixels. (See the sketch
at the end of this submenu.)
Clear Texture Coords. Any UV texture coordinates are cleared from the edit
mesh, whether they are due to front projection or importing.
Create Smooth Normals. Creates a normal vector at each vertex of the edit
mesh, averaging over the attached facets. The smooth normals are used
to provide a smooth perspective display of the mesh.
Clear Normals. The per-vertex normals are cleared, so face normals will be
used subsequently.
Open Texture Panel. Opens the texture control panel, so you can apply an
existing texture to a mesh, or calculate a new one.
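
As a rough idea of what cropping UV coordinates to the full 0..1 range involves
(an illustrative sketch, not SynthEyes' implementation), each UV can be rescaled
by the bounding box of all the mesh's UVs:

    # Illustrative sketch of normalizing UVs so they span the whole 0..1 range.
    def crop_uvs(uvs):
        # uvs: list of (u, v) pairs; returns rescaled UVs.
        us = [u for u, _ in uvs]
        vs = [v for _, v in uvs]
        du = (max(us) - min(us)) or 1.0    # avoid dividing by zero on degenerate meshes
        dv = (max(vs) - min(vs)) or 1.0
        return [((u - min(us)) / du, (v - min(vs)) / dv) for u, v in uvs]

    print(crop_uvs([(0.2, 0.3), (0.4, 0.3), (0.4, 0.7)]))
    # roughly [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]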
Linking Submenu
Align via Links dialog. Brings up a dialog that uses existing links to either align
a mesh to the location of the trackers it is linked to, or align the entire
world (shot) to match the mesh. This latter option is useful when you have
a mesh model and want the matchmove to match your existing model; it is
a form of Coordinate System Alignment.
Update mesh using links. Using the links for the shot to which the perspective
view is linked, update the 3-D coordinates of each linked vertex to exactly
match the current solved 3-D location of the tracker to which the vertex is
linked. Do this for the current edit mesh, each selected mesh if there is no
edit mesh, or all meshes if there is no edit mesh or selected meshes.
Show trackers with links. Trackers are flashed that have links, either those on
the edit mesh, or on all selected meshes, or on all meshes if none are
selected. On the edit mesh (only), if there are vertices selected on the edit
mesh, then only the trackers that are linked to selected vertices are
flashed, allowing you to locate the trackers linked to specific vertices.

Add link and align mesh. Small tool to help align vertices on a mesh to
trackers, typically to align a plane mesh to some trackers. Each time you
select this item, you should have one tracker and one vertex selected. The
first time you select this item, a link will be created, and the mesh will be
translated so that the vertex matches the tracker position. The second
time you select this item, a link will be created, and the mesh will be
translated, scaled, and rotated so that both links are satisfied. The third
time you select this item, the mesh will be spun around the axis of the two
prior links so that the vertex and tracker fall in the same plane (usually
they will not be able to be matched exactly), and a special kind of link will
be created.
Add links to selected. Add links in the first of three possible situations. #1: If
there is one tracker and one or more vertices selected, set up link(s). #2: if
there is one or more selected vertices, for each, create a link to any
tracker that contains the vertex, as seen in 2-D from the camera viewpoint,
and update the vertex location to match the solved tracker location.
#3: for each selected tracker, create a link to any vertex at the same 2-D
location.
Remove links from selected. Deletes links to selected vertices on the edit mesh
to the current shot, if the view is locked, or to all shots, if the view is not
locked. If there is no edit mesh, then all links are deleted from any
selected trackers.
Remove all links from mesh. Delete all tracker/vertex links for the edit mesh, if
any, all selected meshes, if any, or all meshes, if none. The links are
deleted for the shot to which the perspective view is linked, if any, or for all
shots, if the view is not locked to any shot.
Grid Submenu
Show Grid. Toggle. Turns grid display on and off in this perspective window.
Keyboard: G key.
Move Grid. Mouse mode. Left-dragging will slide the grid along its normal, for
example, allowing you to raise or lower a floor grid.
Floor Grid, Back Grid, Left Side Grid, Ceiling Grid, Front Grid, Right Side
Grid. Puts the grid on the corresponding wall of a virtual room (stage),
normally viewed from the front. The grids are described this way so that
they are not affected by the current coordinate system selection.
To Facet/Verts/Trkrs. Aligns the grid using an edit-mesh facet, 1 to 3 edit-mesh
vertices, if a mesh is open for editing, or 1 to 3 trackers otherwise. This is
a very important operation for detail work. With 3 points selected, the grid
is the plane that contains those 3 points, centered between them, aligned
to preserve the global upwards direction. With 2 points selected, the
current grid is spun to make its sideways axis aligned with the two points
(in Z up mode, the X axis is made parallel to the two points). With 1 point
selected, the grid is moved to put its center at that point. Often it will be
useful to use this item 3 times in a row, first with 3 then with 2 and finally 1
vertex or tracker selected.

Return to custom grid. Use a custom grid set up earlier by To
Facet/Verts/Trkrs. The custom grid is shared between perspective
windows, so you can define it in one window, and use it in one or more
others as well.
Object-Mode Grids. Submenu. Contains forward-facing object, backward-facing
object, etc, selections. Requires that the SynthEyes main user interface be
set to a moving object, not a camera. Each of these modes creates a grid
through the origin of the objects coordinate system, facing in the direction
indicated. An upward-facing grid means that creating an object on it will go
on the plus-object-Z side in Z-up mode. Downward-facing will go in nearly
the same spot, but on the flip side.
Preview Movie Settings Dialog
Launched from the Perspective view's right-click menu with Preview
Movie. Controls where the movie goes, what format it is, and what is shown. The
preview is made at the resolution of the original footage, not of the perspective
viewport. In the case of left-over-right stereo display, the height will be double
that of the original.
Use "Preview Movie" for previewing tracking insertionsif you need to
produce modified versions of original plates, ie with lens distortion removed, use
the Output tab of the image preprocessor instead.

File name. Select the output file name to which the movie should be written:
ASF(Win), AVI(Win), BMP, Cineon, DPX, JPEG, MP4(Win), MOV
(Quicktime), OpenEXR, PNG, SGI, Targa, TIFF, or WMV(Win). Only
image sequences are available on Linux. For image sequences, the file
name given is that of the first frame; this is your chance to specify how
many digits are needed and the starting value, for example, prev1.bmp or
prevu0030.exr. (See the file-naming sketch at the end of this section.)
Clear. Clears the file name. In some circumstances when a file is moved from
one operating system to another, your current operating system may not
be able to display its file picker at all if the existing file is from a different
OS. If that happens, use the Clear button, and then the ... picker.
Compression Settings. Set the compression settings for Quicktime and various
image formats. Since the compression settings depend on the type of file
being produced, you must set the file name first. Note that different codecs
can have their own quirks!
Show All Viewport Items. When checked, produces a literal rendition of the
perspective view. Includes all the trackers, handles, etc, shown in the
viewport as part of the preview movie.
Show Grid. When checked, includes the main perspective-view grid, ie typically
the ground plane. This checkbox is irrelevant if the Show All Viewport
Items checkbox is checked, as in that case whether the grid is shown or
not depends on the normal grid settings in the perspective view's right-
click menu.
Square-Pixel Output. When off, the preview movie will be produced at the same
resolution as the input shot. When on, the resolution will be adjusted so
that the pixel aspect ratio is 1.0, for undistorted display on computer
monitors by standard playback programs.
RGB Included. Must be on to see the normal RGB images. See below.
Depth Included. Output a monochrome depth map. See below.
Anti-aliasing and motion blur. Select None, Low, Medium, High, or Moblur
Low, Moblur Medium, Moblur High, or Moblur Max to determine the
amount of antialiasing and optionally motion blur.

The allowable output channels depend on the output format. Quicktime
accepts only RGB. Bitmap can take RGB or depth, but not both at once.
OpenEXR can have either or both.
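
As an illustration of the image-sequence naming convention mentioned under File
name above (a sketch of the assumed behavior, not SynthEyes' exact code), the
trailing digits of the first-frame name set both the starting number and the
zero padding:

    import re

    # Sketch: expand a first-frame name like "prevu0030.exr" into per-frame names,
    # assuming the trailing digits define the start number and the padding width.
    def sequence_names(first_frame_name, count):
        stem, ext = first_frame_name.rsplit(".", 1)
        m = re.search(r"(\d+)$", stem)
        if m is None:
            raise ValueError("no trailing frame number in the file name")
        base, digits = stem[:m.start()], m.group(1)
        start, width = int(digits), len(digits)
        return [f"{base}{start + i:0{width}d}.{ext}" for i in range(count)]

    print(sequence_names("prevu0030.exr", 3))  # ['prevu0030.exr', 'prevu0031.exr', 'prevu0032.exr']
    print(sequence_names("prev1.bmp", 3))      # ['prev1.bmp', 'prev2.bmp', 'prev3.bmp']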

Phase View (Pro version, not on Intro)


The phase view is a generic canvas on which to create and connect
phases, which are instructions for the solver. Each phase can have one or more
inputs, and one output, each shown as a small triangle ('pin'). Inputs are normally
on the left with the output on the right, though a preference can change this so
that inputs are on top, and the output at the bottom.

You can tell if a phase has been solved yet by looking at the triangular tab
at top right corner of the phase. It is either red (unsolved) or green (solved). You
can double-click to solve the phase (and any phases its inputs require).
There can be multiple independent collections of phases in the phase
view. One of the phases is special: the 'root.' During a solve, the root phase is
the one that is queried to produce the new solve. The root has an extra-wide right
or bottom edge, ie Phase2 above. Any phase downstream of the root is ignored,
as are any independent phase collections. Phases that will be unused are
darkened and marked with a red X, ie Phase3. Phase1 is selected.
When a scene is solved that has no root, the phase subsystem, and all
your phases, are intentionally ignored. The scene is solved 'as-is.' This is an
error only if it is not what you had in mind, otherwise, it is a feature!
New phases are added using the right-click menu; the phases are
grouped into categories alphabetically at the bottom of the menu.
When you add a new phase, it is placed at the location you (right) clicked,
and it is wired in after the currently-selected phase, ie the selected phase's
output is wired to the new phase's input, and the selected phase's outputs are
connected to the new phase instead. If the selected phase was the root, the new
phase becomes the root instead. Note that you cannot create circular loops in the
wiring.
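
As an aside on the no-circular-loops rule (a generic sketch with invented names,
written in Python rather than SynthEyes' Sizzle, and not its internals), a wiring
request can be validated by checking whether the destination already feeds the
source:

    # A phase is modeled as a node with input connections; a wire from 'src' to
    # 'dst' is legal only if 'dst' is not already upstream of 'src'.
    class Phase:
        def __init__(self, name):
            self.name = name
            self.inputs = []     # phases wired into this one

    def upstream_of(phase, target):
        # True if 'target' feeds (directly or indirectly) into 'phase'.
        return any(p is target or upstream_of(p, target) for p in phase.inputs)

    def wire(src, dst):
        if src is dst or upstream_of(src, dst):
            raise ValueError("refused: this wire would create a circular loop")
        dst.inputs.append(src)

    a, b, c = Phase("Phase1"), Phase("Phase2"), Phase("Phase3")
    wire(a, b)       # Phase1 -> Phase2
    wire(b, c)       # Phase2 -> Phase3
    try:
        wire(c, a)   # would make Phase1 depend on itself
    except ValueError as e:
        print(e)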
Selected phases can be copied to the clipboard, producing textual XML.
They can be pasted back into SynthEyes, pre-wired and pre-configured. The
clipboard text can be 'seen' by other apps.
The phase configuration can be written to a file on disk as well, and later
retrieved, typically for insertion into a different shot. To support the creation of
libraries of phases, there is a Folder Preference (Phases) set up so you can
easily save and reopen phase configurations. In enterprises, you can move that
folder preference to a shared location.
Mouse Operations
Left-mouse operations:
click on a phase to select it. Shift-click to toggle its selection state, for
adding or removing phases from an existing set of selected phases.
drag in empty space to sweep out a rectangle to select phases.
drag the lower-right corner of a phase to resize it.
drag a phase to reposition it (and other also-selected phases)

drag from an output pin to an input pin to create a wire


left-double-click the solved/unsolved marker at the top right of a phase to
cause a solve to be started on it immediately.
Middle-mouse operations:
drag to pan the workspace
scroll wheel to zoom the workspace
With the no-middle-mouse preference on, use control/command-left-drag
to pan
Right-mouse operations:
click to bring up the menu
drag for smooth zoom
(cancels in-progress left or right mouse operations)
Right-Click Menu
There are two slightly different right-click menus, one that appears when
you right-click on a phase, and one that appears when you right-click on empty
space. The first portion is the same in either case.
Common menu items
Reset View. The phase view is reset to its fixed default zoom setting and
positioning.
Fill View. The phase view is reset, sufficiently zoomed out so that all phases are
visible.
Select All. All phases are selected.
Invert Selection. Selected phases become unselected and vice versa.
Align at same height. The selected phases are aligned at the same vertical
position.
Align at same width. The selected phases are aligned at the same horizontal
position.
Library/Copy Phases. The selected phases are copied onto the clipboard,
producing XML that can be pasted into other apps or back into SynthEyes.
Library/Paste Phases. The clipboard's contents (from an earlier Copy Phases)
are pasted into the SynthEyes scene and the new phases selected, so
that you can easily move them.
Library/Delete Phases. The selected phases are deleted.
Library/Read Phase File. You can select a file containing phases (previously
written by Write Phase File), and the phases will be reloaded into the
current scene.
Library/Write Phase File. All phases are written to a file on disk for later reuse.
Jog selected phases/Jog Left or Right or Up or Down. (4 items) The selected
phases are moved slightly in the specified direction. These items are here
mainly so you can easily find the key accelerators for them.

If the click was on a phase


Set color. Brings up the color selection for the right-clicked phase, as if it was
selected and the color swatch on the phase view clicked.
Set as root. The right-clicked phase is set to be the root.
Connect to selected. A wire is created from the right-clicked phase to the
selected phases, either their first unconnected input, or if they have only
one input, to that one.
Disconnect from selected. Any wire from the right-clicked phase's inputs or
outputs to any selected phase is removed.
Run this. The right-clicked phase (and any it needs) are solved immediately.
Run all. Runs the scene, starting at the root.
Retrieve from this. The solve results of the right-clicked phase are loaded into
the overall scene for examination (as if it was the root). The right-clicked
phase should already be solved. If not, you will be asked whether you
wish to solve it or cancel.
Un-solve this. The right-clicked phase is marked as not solved.
Un-solve. All selected phases, or all phases if none, are marked as not solved.
If the click was NOT on a phase
Clear root. The scene is reset to have NO root. If a solve is run, the phase
subsystem will be ignored.
Run all. Runs the scene, starting at the root.
Un-solve. All phases are marked as not solved.

SimulTrack Reference
The SimulTrack view shows the interior of multiple trackers at multiple
frames throughout the shot simultaneously, allowing you to get a quick overview
of a track and modify it quickly. It can be used not only for checking up on
trackers, but for additional supervised-tracking workflows.
There are four flavors of SimulTrack in the viewport layout manager:
(plain)SimulTrack, Other SimulTrack, LSimulTrack, and RSimulTrack. These
versions differ in what object's trackers are displayed when the active object is
part of a stereo pair. The plain version displays the active object's, the Other
version displays the other camera in the pair, LSimulTrack displays the left
camera's trackers, and RSimulTrack displays the right camera's trackers. These
variants are used in more-complex stereo-tracking viewport configurations where
both cameras are displayed and tracked simultaneously.
Floating SimulTracks can also be launched.
Basics
The SimulTrack view contains any number of tiles laid out in a grid
pattern. Tiles are shown for selected trackers on their keyed frames, and on the
current frame.

This tile corresponds to frame 142 of Tracker88. Clicking on the frame
number will send the SynthEyes user interface to frame 142.
The tracker name is listed at the bottom of the pane; clicking the name will
select (only) this tracker. Shift-clicking the name will un-select the tracker,
removing it from the SimulTrack display (useful when many are selected). Either
way, clicking on the tracker name will also flash the tracker in the other
viewports, to make it easier to find elsewhere.
The parentheses "()" around the tracker name indicate that it is locked;
use the right-click menu to change that. The underline below the tracker name
shows the specific color that has been assigned to this tracker, if any.
The wide rim indicates that there is a tracker position key on this frame,
and the blue color means that this frame (142) is the current active frame in the
main user interface.
Normally only frames with keys are shown in the SimulTrack view (this
can be a lot for auto-tracked trackers), so that the user-created keys can quickly
be examined and modified during supervised tracking. The space between keys
can be expanded to show intervening unkeyed frames by clicking on the gutter,
or by using various right-menu commands.

Tip: clicking in the gutter or using a right-click expand menu operation
makes a difference only on keyed tiles.

The light and dark blue curves overlaid on the tile show the figure-of-merit
(FOM) and 3-D error curves of the tracker between this key and the next. The
curves can be enabled or disabled from the right-click menu.
Dragging the interior of a key, or dragging the offset marker, has the same
effect as it does within the tracker mini-view of the Tracker panel, setting a
position or offset key. Use control to slow down the movement of the tracker for
more accurate repositioning. Spot or symmetry trackers will snap to the best
nearby location (within two pixels). Hold down ALT/Command to suppress
snapping.
Similar to, but not identical to, the tracker mini-view, shift-right-click within
a tile to add or remove a position key on that frame. Right-clicking brings up the
right-click menu, so shift-right is needed in SimulTrack, whereas plain right-click
is used in the tracker mini-view, which has no right-click menu.
Clicking on the "S" at the upper-right of a tile will turn on or off the strobe
setting for that frame. When strobing is enabled, the tile will sequence rapidly
between the image of the prior key, the current frame, and the following key.
Hovering over a tile will bring up a tooltips with statistics on the tracker.
Display Modes
The overall SimulTrack window shows many tiles simultaneously, in one
of three different configurations, depending on the number of trackers selected in
the SynthEyes user interface. Or, use the "Force row mode" option to stay in row
mode the entire time.
Use the middle-mouse button or the scroll bar to pan the entire
SimulTrack view.
Use ALT-left (Command-Left) to scrub through the shot from inside the
SimulTrack view.
The middle-mouse scroll wheel will step through the frames of the shot.
Use shift-scroll (command-scroll on Macs) to scroll the tiles instead.

Grid Mode

The SimulTrack view is in Grid mode whenever there are more selected
trackers than can be shown in Rows mode (unless the Force row mode menu
item is on). Only a single tile is shown for each tracker, the tile for the current
frame.
In the view above, notice that trackers 27R, 11R, 23R, 26R, 39R, and 44R
are disabled on the current frame. All the other trackers are valid and keyed (they
are auto-trackers and keyed on each frame).
The sort order of the trackers in the SimulTrack view is determined by the
Sort settings on the main View menu.
There are 6 pink and 4 blue trackers at the beginning of the list, grouped
together as the overall View/Group by Color option is turned on.

Row Mode

Here, the SimulTrack view is in Rows mode, with exactly 5 trackers
selected. Row mode is used when there is more than one tracker selected, but
when there are few enough that all rows can be shown without scrolling. Or, use
the Force row mode setting to stay in row mode at all times (potentially with
scrolling).
The light-blue background shows where the current tile is being displayed;
panning the view can move that blue region.
Trackers 41 and 88 are valid on the current frame, which is frame 142 as
indicated by the blue rim on Tracker88 at center. Trackers 129, 159, and 198
have tiles in the middle section but they end or begin after the current frame.
These trackers have been fine-tuned with keys every 8 frames, as can be seen.

Single Mode

The SimulTrack view is showing a single selected tracker, in this case a fine-tuned
one. All keys of the tracker are shown. Single mode is used when a single
tracker is selected, unless the Force row mode option is selected.
This image was captured as a new file was being opened, so that you can
see the waiting graphic, which appears while the relevant shot image is being
fetched (it takes 31 different images to display this single SimulTrack view).
Normally, with adequate RAM on the machine, the wait graphic disappears
rapidly as the frames are fetched. However that may not happen if the machine
does not have enough RAM; you may not be able to see all images
simultaneously.
Right-Click Menu
Important: in several cases, the results of a menu operation depend on
which tile is right-clicked to open the menu.
Home. Scrolls the tiled frame display to the top of the page.
To End. Scrolls the tiled frame display to the bottom of the page.
Show FOM. Shows the figure of merit curve for match-type supervised trackers
within each tile: how well the reference pattern matches the image on a given
frame. Larger values indicate problems.
Show Error. Shows the error curve within each tile: how far the 2-D tracker
location is from the 3-D tracker location. Larger values indicate problems.

Select only valid. Starting from the set of selected and displayed trackers,
unselect those that are not valid on the current frame, to reduce clutter.
Select same color. Select all other trackers on the same camera/object that
have the same color as the clicked-on tracker.
Stereo spouses. Instead of showing the selected trackers, the SimulTrack view
shows the matching tracker on the other camera of the stereo pair. Open
two SimulTrack views simultaneously, and see both sides at once.
Stereo lefts. The SimulTrack view shows the selected trackers on the left
camera of the stereo pair. (If the left camera is not the active object, the
matching trackers of selected trackers in the right camera are shown.)
Stereo rights. The SimulTrack view shows the selected trackers on the right
camera of the stereo pair. (If the right camera is not the active object, the
matching trackers of selected trackers in the left camera are shown.)
(Locked). Shows whether the clicked-on tracker is locked or not (though you can
already tell if its name is enclosed in parentheses, ie "(Tracker1)"), and
allows you to unlock or relock it.
Lock All. Locks all currently-selected trackers.
Unlock All. Unlocks all currently-selected trackers.
Is ZWT. Shows whether the clicked-on tracker is a zero-weighted tracker, and
toggles that status.
Exactify. Sets a key on this frame of the clicked-on tracker, exactly at its solved
3-D location (as seen in the image).
Generate autokeys. Fills out additional keys at a spacing determined by the Key
(every) setting of the tracker panel, based on the 3-D location, computed
as if the tracker was a zero-weighted tracker (which it may be). Adjust
these keys to refine the track.
Strobing this. Show and toggle whether or not the clicked-on tracker is strobing
at this frame.
Unstrobe all. Stop all trackers from strobing, on all frames.
Expanded this. Shows and toggles whether or not the clicked-on tracker is
expanded (showing all the intervening non-keyed, tracked, frames
between this key and the next).
Close all. Closes (un-expands) all key frames on all trackers.
Force row mode. When checked, always uses the row-style display, regardless
of the number of trackers selected.
Remove menu ghosts. Some OpenGL cards do not redraw correctly after a
pop-up menu has appeared, this control forces a delayed redraw to
remove the ghost. On by default and harmless, but this lets you disable it.
This setting is shared throughout SynthEyes and saved as a preference.
Strobe Submenu
Strobe all frames. Begins strobing on all displayed frames of the clicked-on
tracker.
Unstrobe all frames. Stops strobing on all frames of the clicked-on tracker
Strobe all on this frame. Begins strobing on this frame of all selected trackers.
Unstrobe all on this frame. Stops strobing on this frame of all selected trackers.

Expand Submenu
Expand All. Expand all frames on all selected trackers
Expand all frames. Expand all frames on the clicked-on tracker.
Close all frames. Close all frames on the clicked-on tracker.
Expand all on this frame. Expand this frame on all selected trackers.
Close all on this frame. Close this frame on all selected trackers.

Solver Output View


The solver output view shows the textual output of the last solve
performed, which is useful for reviewing the results of complex multi-phase solves
after the solver dialog has been closed.
There is a vertical scroll bar. To reposition horizontally, drag with the
middle mouse button.
The Solver Output view has a small right-click menu consisting of Home,
End, and Copy All (text to the clipboard).

Overview of Standard Tool Scripts
SynthEyes includes a number of standard tool scripts, in addition to the
import and export scripts. Additional scripts are announced regularly on the web
site and via the Msg message button in SynthEyes.
Here is a quick overview of when to use the standard scripts available at
time of publication. For usage details, consult the tutorials on the web site and
the control panels that pop up when they are started.
Animate Trackers by Mesh Projection. Projects the 2D position of all
exportable trackers onto the selected mesh, creating 3D motion-capture-
style paths for each tracker. In a typical use, the mesh is a head mesh
parented to a moving object. The moving object has a set of trackers that
are on rigid features (corners of eyes, nose, etc) used to solve for the
head model track, while the trackers used here are on a disabled object
and track moving features such as corners of mouth etc to create facial
animation from a single camera shoot.
Apply/Remove Lens Distortion. If you track a shot, then discover there was
lens distortion, and want to switch to an undistorted version, but do not
want to re-track the shot, use this script to update the tracking data.
Calculate Texture Dots/Unit. Displays information about the number of pixels
per SynthEyes (world) unit in each direction, to allow extracted texture
map resolutions to be chosen intelligently. There must be an Edit Mesh
with exactly one face selected; the resolution information is for that one
triangle and may be different for different faces, depending on the mesh.
Camera to Tracker Distance. Shows the distance from the camera to the
selected tracker(s). The smaller number in parentheses is the distance
along the camera axis only.
Convert Flex to Trackers. A flex is a 3-D curve in space; this script creates a
row of trackers along it, so you can make it into a mesh or export the
coordinates.
Duplicate Mesh. Use to create copies of a mesh object, possibly shifting each
one successively to make a row of fence posts, for example.
Duplicate Mesh onto Trackers. Duplicate a mesh onto selected (or all) trackers,
for example, many pine trees onto trackers on a mountainside. Use this
script to delete them later if you need to, it is otherwise difficult!
Filter Lens FOV. Use to smooth out a lens field of view track in a zoom shot, to
eliminate zoom/dolly jitter.
Grid of Trackers. Creates a grid of supervised trackers, optionally within a
spline. Use for open-ocean tracking and creating dense supervised
meshes.
Invert Perspective. Turn a low- or no-perspective object track inside out.
Make Mesh the Ground. The entire SynthEyes scene is repositioned so that the
mesh's local coordinate system is now the world coordinate system, ie you
can move a plane around in 3D to your desired ground plane location,

then run this script and the plane (and the entire scene) are moved to the
origin.
Make Object from Tracker. Creates a moving object from a tracker, using its 2D
path to create a far-tracker-like path in 3D for the moving object, useful for
creating texture-extraction geometry on moving objects where the exact 3-D
path cannot be determined. The object can face the camera exactly, or spin
solely about its vertical axis.
Mark Seeds as Solved. You can create seed trackers at different locations,
possibly on a mesh, then make them appear to be solved at those
coordinates.
Mesh Information. Shows the number of vertices and facets of the current
selected mesh. Note that if normals or texture coordinates are present,
there is always one value for each position vertex in SynthEyes, so that
information would be redundant.
Motion Capture Camera Calibrate. See motion capture writeup.
Perspective Projection Screen Adjust. Use this script to adjust some per-
shot/camera behind-the-scenes controls for the perspective viewport's
built-in projection screen, such as the distance from the camera to the
screen and the grid resolution.
Preferences Listing. (exporter!) Creates a list of preferences available for
SynthEyes scripting, along with tooltip reference text.
Projection Screen Creator. Creates a "projection screen" mesh in the 3-D
environment, textured by the current shot. Allows you to see the shot
imagery in place, even when you are not locked to it. You can matte out
the chroma key that you have set up on the Green Screen Panel, or use
an existing alpha channel.
Rename Selected Trackers. Manipulates the names of the selected trackers.
They can be renamed with a shared basic name and new numbers
assigned, for example Fave1, Fave2, Fave3. Or, new prefixes or suffixes
can be inserted, ie Tracker1 becomes LeftTracker1 or TrackerL1 (or less
desirably, Tracker1L, if "Keep tracker# at end" is turned off). Portions of
the names can be removed as well. The script will automatically increment
the numbering so that the resulting tracker names are unique.
Reverse Shot/Sequence. Use to avoid re-tracking when you're suddenly told to
reverse a shot. Reverses tracker data but not other animated data.
Select By Type. Use to select all Far trackers, all unsolved trackers, etc.
Set Color by RMS Error. Re-colors trackers based on their RMS error after
solving for easier checking. Sets up 3 different colors, good/OK/bad aka
green/yellow/red. Changes a secondary color for each tracker. You can
adjust the colors and rms error levels. Switch back and forth between the
two sets of colors using the View/Use alternate colors menu item.
Set Plane Aspect Ratio. Adjusts the width or height of the selected 3-D plane to
a specified value, usually so it can be used to hold a texture while
maintaining the proper image and pixel aspect ratio.
Set Tracker Color from Image. Sets the primary or alternate color of the tracker
based on the average image color inside the tracker on the current frame,

or on the tracker's first valid frame. Can be made to invert the color to
enhance the tracker visibility in the camera view, or not do so, to suggest
the scene in the 3-D point cloud in the perspective view.
Shift Constraints. Especially when using GPS survey data, use this script to adjust
the data to eliminate the common offset: if X values are X=999.95,
999.975, 1000.012, you can subtract 1000 from everything to improve
accuracy. (See the sketch at the end of this list.)
Splice Paths. Sometimes a shot has several different pieces that you can track
individually; this script can glue them together for a final track. Open the
shot repeatedly within the same scene file (or merge files), and adjust the
start and end ranges on the time bar so that they overlap at a single
frame, ie shot1 goes 0..100, shot2 goes 100 to 200 (NOT 101 to 200).
Have each section be solved, then run Splice Paths.
Step by tracker auto-key. Steps the user-interface frame# forward or backward
by the auto-key setting of the single selected tracker. Use this to "rough-
out" supervised trackers, stepping into the as-yet-untracked portion and
typically using z-drop and z-drop-lock.

Preferences and Scene Settings Reference
Scene settings for the current scene are accessed through the Edit/Edit
Scene Settings menu item, while the default preference settings are accessed
through the Edit/Edit Preferences menu item. The preferences control the
defaults for the scene, taking effect only when a new scene is created, while the
scene settings affect the currently-open scene, and are stored in it.

Preferences
Preferences apply to the user interface as a whole. Some preferences that
are also found on the scene settings dialog, such as the coordinate axis setting,
take effect only as a new scene is created; subsequently the setting can be
adjusted for that scene alone with the scene settings panel. Other preferences,
especially those having to do with layout or sizing, take effect only when the
SynthEyes window is resized, or SynthEyes restarted. Other preferences are set
directly from the dialog that uses them, for example, the spline editing
preferences.
The Edit/Reset Preferences item resets the preferences to the factory
values.

Tip: If your machine crashes at an inopportune time, or suffers from
some other problem, you might manage to corrupt the preferences file,
resulting in SynthEyes crashing at startup. To manually clear the
preferences, delete the following file:

Windows: C:\Users\YourNameHere\Application Data\SynthEyes\prefs14.dat or
C:\Documents and Settings\YourNameHere\Application Data\SynthEyes\prefs14.dat
OS X: /Users/YourNameHere/Library/Application Support/SynthEyes/prefs14.dat
Linux: ~/.SynthEyes/prefs14.dat
Apologies in advance: We concede that there are too many preferences!
We have listed some of them below in alphabetic order for your reference. The
left portion of the panel is a very long list of preferences. The main dropdown
(Appearance below) lets you jump through the list to different sections; the PgUp
and PgDn buttons similarly page through them. The right-hand side of the panel
has some others with special requirements that do not fit into the main list.

Right-Side Controls
Default Back Plate Width. Width of the camera's active image plane, such as
the film or imager.
Back Plate Units. Shows "in" for inches or "mm" for millimeters; click it to change
the display units for this panel, and the default for the shot setup panel.
Default Export Type. Selects the export file type to be created by default.
Folder Presets. Helps workflow by letting you set up default folders for various
file types: batch input files, batch output files, images, scene files, scripts,
imports, and exported files. Select the file type to adjust, then hit the Set
button. To prevent SynthEyes from automatically going to a certain directory for
a given function, hit the Clear button.
Multi-processing. Drop-down list. Enable or disable SynthEyes's use of multiple
processors, hyper-threading, or cores on your machine. The number in
parentheses for the Enable item shows the number of
processors/cores/threads on your machine. The Single item causes the
multiprocessing algorithms to be used, but only with a single thread,
mainly for testing. The Half option will use half of the available cores,
which can be helpful when you have another major task running, such as
a render on an 8-core machine.
Post-solve sound [hurrah]. Button. Shows the name of the sound to be played
after long calculations.
UI Language. Drop-down. Selects one of the available XML language
translation files from the user and system script folders. Takes effect only
when SynthEyes starts. When blank (by default), no modifications are applied.
UI Colors. (Drop-down and color swatch) Change the color of many user-
interface elements. Select an element with the drop-down menu, see the
current color on the swatch, and click the swatch to bring up a dialog box
that lets you change the color.

Main List (Partial)


16 bit/channel (if available). Store all 16 bits per channel from a file, producing
a more accurate image, but consuming more storage.
After min. Spinner. The calculation-complete sound will be played if the
calculation takes longer than this number of minutes.
Anti-alias curves. Checkbox. Enables anti-aliasing and thicker lines for curves
displayed by the graph editor. Easier to read, but turn off if it is too slow for
less-powerful OpenGL cards.
Auto-switch to quad. Controls whether SynthEyes switches automatically to the
quad viewport configuration after solving. Switching is handy for beginners
but can be cumbersome in some situations for experts, so you can turn it
off.
Axis Setting. Selects the coordinate system to be used.
Bits/channel: 8/16/Half/Float. Radio buttons. Sets the default processing and
storage bit depth.
Click-on/Click-off. Checkbox. When turned on, the camera view, tracker mini-
view, 3-D viewports, perspective view, and spinners are affected as
follows: clicking the left or middle mouse button turns the mouse button
on, clicking again turns it off. Instead of dragging, you will click, move, and
click. This might help reduce strain on your hand and wrist.
Compress .sni files. When turned on, SynthEyes scene files are compressed as
they are written. Compressed files occupy about half the disk space, but
take substantially longer to write, and somewhat longer to read.
Constrain by default (else align). If enabled, constraints are applied rigorously;
otherwise, they are applied by rotating/translating/scaling the scene
without modifying individual points. This is the default for the checkbox on
the solver panel, used when a new scene is created.
Enable cursor wrap. When the cursor reaches the edge of the screen, it is
wrapped back around onto the opposite edge, allowing continuous mouse
motion. Disable if using a tablet, or under Virtual PC. Enabled by default,
except under Virtual PC.
Enhanced Tablet Response. Some tablet drivers, such as Wacom, delay
sending tablet and keyboard commands when SynthEyes is playing shots.
Turning on this checkbox slows playback slightly to cause the tablet driver
to forward data more frequently.
Export Units. Selects the units (inches, meters, etc) in the exported files. Some
units may be unavailable in some file types, and some file types may not
support units at all.
Exposure Adjustment. Increases or decreases the shot exposure by this many
f-stops as it is read in. The main window updates as you change this.
Supported only for certain image formats, such as Cineon and DPX.
First Frame is 1 (otherwise 0). Turn on to cause frame numbers to start at 1 on
the first frame.
Maximum frames added per pass. During solving, limiting the number of
frames added prevents new tentative frames from overwhelming an
existing solution. You can reduce this value if the track is marginal, or
expand it for long, reliable tracks.
Maya Axis Ordering. Selects the axis ordering for Maya file exports.
Match image-sequence frame #s. See Frame Numbering (Advanced).
Minutes per auto-save. Spinner. If non-zero, SynthEyes will automatically re-
save the file every few minutes, as set by this spinner. The value defaults
to one, which means auto-save is on by default. When auto-save is on,
SynthEyes will always save, rather than asking you if you want to save or
discard a changed file. To turn auto-save off, set the value to zero.
No middle-mouse button. For use with 2-button mice, trackballs, or Microsoft
Intellipoint software on Mac OSX. When turned on, ALT/Command-Left
pans the viewports and ALT/Command-Right links trackers.
Nudge size. Controls the size of the number-pad nudge operations. This value is
in pixels. Note that control-nudge selects a smaller nudge size; you should
not have to make this value too small; use a convenient value, then
control-nudge for the most exacting tweaks.
Place after auto-solve. When checked, the Auto-place algorithm (see the
Summary panel) will run automatically to set up a coordinate system
after, and only when, you use the large green AUTO button on the solver
panel.
Playbar on toolbar. When checked, the playbar (rewind, end, play, frame
forward etc) is moved from the command panel to a horizontal
configuration along the main toolbar. Usable only on wider monitors.
Prefetch enable. The default setting for whether or not image prefetch is
enabled. Disable if image prefetch overloads your processor, especially if
shot imagery is located on a slow network drive.
Put export filenames on clipboard. When checked (by default), whenever
SynthEyes exports, it puts the name of the output file onto the clipboard,
to make it easier to open in the target application.
Safe #trackers. Spinner. Used to configure a user-controlled desired number of
trackers in the lifetimes panel. If the number is above this limit, the lifetime
color will be white or gray, which is best. Below this limit, but at a still-
acceptable value, the background is the Safe color, by default a shade of
green: the number of trackers is safe, but not your desired level.
Shadow Level. Spinner. The shadow is dead black; this is an alpha value that ranges
from 0 to 1. At 1, the shadow has been mixed all the way to black.
Stay Alive. Spinner. Sets the number of frames the search box for a supervised
tracker is displayed after the tracker becomes lost or disabled. If the
control is set to zero, then the search box will never be removed. At larger
values, the screen may become cluttered with disabled trackers.
Start with OpenGL Camera View. When on, SynthEyes uses OpenGL
rendering for the camera view, which is faster on a Mac and when large
meshes are loaded in the scene. When off, SynthEyes uses simpler
graphics that are often faster on Windows, as long as there aren't any
complex meshes. This preference is examined when you open SynthEyes
or change scenes. You can change the current setting from the View
menu. When you change the preference, the current setting is also
changed.
Start with OpenGL 3-D Viewport. Same as for the camera view, but applies to
the 3-D viewports.
Thicker trackers. When checked, trackers will be 2 pixels wide (instead of 1) in the
camera, perspective, and 3-D views. Turned on by default for, and
intended for use with, higher-resolution displays.
Trails. The number of frames in each direction (earlier and later) shown in the
camera view for trackers and blips.
Undo Levels. The number of operations that are buffered and can be undone. If
some of the operations consume much memory (especially auto-tracking),
the actual limit may be much smaller.
Use software mesh render. This control makes a difference only for camera
and 3D viewports that are not using OpenGL. When on, 3D meshes are
rendered using a SynthEyes-specific internal software renderer. For
contemporary multi-core machines, this will be much faster than the
operating system's drawing routines, and can be faster than OpenGL.
Takes effect at startup; after that, see the Software mesh render item on
the View menu.
Wider tracker-panel view. Checkbox. Selects which tracker panel layout is
used. The wider view makes it easier to see the interior contents of a
tracker, especially on high-resolution displays. The smaller view is more
compact, especially for laptops.
Write .IFL files for sequences. When set, SynthEyes will write an industry- and
3ds MAX-standard image file list (IFL) file whenever it opens an image
sequence. Subsequently it will refer to that IFL file instead of re-scanning
the entire set of images in order to open the shot. Saves time especially
when the sequence is on a network drive.

Scene Settings
The scene settings, accessed through Edit/Edit Scene Settings, apply to
the current scene (file).
The perspective-window sizing controls are found here. Normally,
SynthEyes bases the perspective-window sizes on the world size of the active
camera or object. The resulting actual value of the size will be shown in the
spinner, and no key will be indicated (a key is shown as a red frame around the spinner).
If you change the spinner, a key frame will be indicated (though it does not
animate). After you change a value, and the key frame marker appears, it will no
longer change with the world size. You can reset an individual control to the
factory default by right-clicking the spinner.
There are several buttons that transfer the sizing controls back and forth
to the preferences: there is no separate user interface for these controls on the
Preferences panel. If a value has not been changed, that value will be saved in
the preferences, so that when the preferences are applied (to a new scene, or
recalled to the current scene), unchanged values will be the default factory
values, computed from the current world size.
Important Note: the default sizes are dynamically computed from the
current world size. If you think you need to change the size controls here,
especially tracker size and far clip, this probably indicates you need to adjust
your world size instead.

Axis Setting. Selects the coordinate system to be used.
Mesh De-Duplication. Drop-down. Selects the mesh de-duplication mode, which
can be used to reduce SNI file sizes, so that they don't all have to contain
the same repetitive mesh data.
Ambient Color. Left Swatch. Ambient illumination for perspective views. Set
initially from the perspective ambient preference.
Shadow Color. Right Swatch. Color of shadows in the perspective views, if fully
blended. Set initially from the Shadow Color color preference.
Camera Size. 3-D size of the camera icon in the perspective view.
Far Clip. Far clip distance in the perspective view.
Inter-ocular. Spinner. Sets the inter-ocular distance (in the unitless numbers
used in SynthEyes). Used when the perspective view is not locked to the
camera pair.
Key Mark Size. Size of the key marks on camera/object seed paths.
Light Size. Size of the light icon in the perspective view.
Load from Prefs. Loads the settings from the preferences (this is the same as
what happens when a new scene is created).
Mesh Vertex Size. Size of the vertex markers in the perspective view, in pixels,
unlike the other controls here.
Near Clip. Near clipping plane distance.
Object Size. Size of the moving-object icon in the perspective view.
Orbit Distance. The distance out in front of the camera about which the camera
orbits during a camera rotation when no object or mesh is selected.
Reset to defaults. The perspective window settings are set to the factory
defaults (which vary with world size). The preferences are not affected.
Save to prefs. The current perspective-view settings are saved to the
preferences, where they will be used for new scenes. Note that
unchanged values are flagged, so that they continue to vary with world
size in the new scene.
Stereo. Selector. Sets the desired color for each eye for anaglyph or interlaced
stereo display (as enabled by View/Stereo Display on the perspective
view's right-click menu.) The Luma versions of the anaglyph display
produces a black/white (gray-scale) version of the image, before colorizing
it for anaglyph presentation; you may prefer that for scrutinizing depths,
though it is substantially slower to produce. Note that the mouse is still
sensitive based on the currently-active camera/object, so you need to
select the appropriate item in the display. Left-over-right display is
intended for display and preview movies, not interactive operations, as
you will not be able to click on anything in the right spot.
Tracker Size. Size of the tracker icon (triangle) in the perspective view.
Vergence Dist. Spinner. Sets the vergence distance for the stereo camera pair,
when it is not locked to any actual cameras.

Keyboard Reference
SynthEyes has a user-assignable keyboard map, accessed through the
Edit/Edit Keyboard Map menu item. (Preview: use the Listing button to see
them all.) The keyboard manager lets you set up assignments of keys to menu
items, various button operations, and Sizzle scripts such as tools, importers, and
exporters.

The first list box shows a context (see the next section), the second a key,
and the third shows the action assigned to that key (there is a NONE entry also).
The Shift, Control, and Alt (Mac: Command) checkboxes are checked if the
corresponding key must also be down; the panel shown here shows that a Select All
operation will result from Control-A in the Main context.
Because several keys can be mapped to the same action, if you want to
change Select All from Control-A to Control-T, say, you should set Control-A
back to NONE, and when configuring the Control-T, select the T, then the Control
checkbox, and finally change the action to Select All.

Time-Saving Hint: after opening any of the drop-down lists (for
context, key, or action), hit a key to move to that part of the list quickly.

The Change to button sets the current key combination to the action
shown, which is the last significant action performed before opening the
keyboard manager. In the example, it would be Edit Scene Settings.
Change to makes it easy to set up a key code: perform the action, open
the keyboard manager, select the desired key combination, then hit Change to.
The Change to button may not always pick up a desired action, especially if it is a
button; use the equivalent menu operation instead.
You can quickly remove the action for a key combination using the NONE
button.
Changes are temporary for this run of SynthEyes unless the Save button
is clicked. The Factory button resets the keyboard assignments to their factory
defaults. Listing shows the current key assignments; see the Default Key
Assignments section below.

Key Contexts
SynthEyes allows keys to have different functions in different places; they
are context-dependent. The contexts include:
The main window/menu
The camera view
Any perspective view
Any 3-D viewport
Any command panel
There is a separate context for each command panel.
In each context, there is a different set of applicable operations, for
example, the perspective window has different navigation modes, whereas
trackers can only be created in the camera window. When you select a context
on the keyboard manager panel, only the available operations in that context will
be listed.
Here comes the tricky part: when you hit any key, several different
contexts might apply. SynthEyes checks the different contexts in a particular
order, and the first context that provides an action for that key is the context and
action that is applied. In order, SynthEyes checks
The selected command panel context
The context of the window in which the key was struck
The main window/menu context
The context of the camera window for the active tracker host, if it is visible,
even if the cursor was not in the camera window.
This is a bit complex but should allow you to produce many useful effects.
Note that the 4th rule does have an "action at a distance" flavor that might
surprise you on occasion, though it is generally useful.
You may notice that some operations appear in the main context and the
camera, viewport, or perspective contexts. This is because the operation appears
on the main menu and the corresponding right-click menu. Generally you will
want the main context.
Keys in the command-panel contexts can only be executed when that
command-panel is open. You cannot access a button on the solver panel when
the tracker panel is open, say. The solver panel's context is not active, so the key
will not even be detected, the solver panel functionality is unavailable when it
isn't open, and changing settings on hidden panels makes for tricky user
interfaces (though there are some actions that basically do this).

Default Key Assignments


Rather than imprecisely trying to keep track of the key assignments here,
SynthEyes provides a Listing button, which produces and opens a text file. The
file shows the current assignments sorted by action name and by the key, so you
can find the key for a given action, or see what keys are unused.
The listing also shows the available actions, so you can see what
functions you can assign a key to. All menu actions can be assigned, as can all
buttons, check boxes, and radio boxes on the main control panels, plus a variety
of special actions.
You will see the current key assignment listed after menu items and in the
tooltips of most buttons, checkboxes, and radio buttons on command panels.
These will automatically update when you close the keyboard manager.

Fine Print
Do not assign a function to plain Z or apostrophe/double-quote. These
keys are used as an extra click-to-place shift key in the camera view, and any Z
or / keyboard operation will be performed over and over while the key is down
for click-to-place.
The Reset Zoom action does three somewhat different things: with no shift
key, it resets the camera view so the image fills the view. When the shift key is
depressed, it resets the camera view so that the image and display pixels are 1:1
in the horizontal direction, ie the image is full size. If the control key is pressed,
then the camera view is centered within the area, instead of being placed at top-left.
Consequently, you need to set up your key assignments so that the fill operation
is un-shifted, and the 1:1 operation is shifted, etc.
The same thing applies to other buttons whose functionality depends on
the mouse button. If you shift-click a button to do something, then the function
performed will still depend on the shift setting of the keyboard accelerator key.
There may be other gotchas scattered through the possible actions; you
should be sure to verify their function in testing before trying them in your big
important scene file. You can check the undo button to verify the function
performed, for example.
The My Layout action sets the viewport configuration to one named My
Layout so that you can quickly access your own favorite layout.

Key Assignment File


SynthEyes stores the keyboard map in the file keybd14.ini. If you are very
daring, you can modify the file using the SynthEyes keyboard manager, Notepad,
or any text editor. SynthEyes's exact action and key names must be used, as
shown in the keyboard map listing. There is one keybd14.ini file for each user,
located like this:
C:\Documents and Settings\YourNameHere\Application Data\SynthEyes\keybd14.ini (Windows)
/Users/YourNameHere/Library/Application Support/SynthEyes/keybd14.ini (Mac OSX)
~/.SynthEyes/keybd14.ini (Linux)
You can quickly access this folder from SynthEyes's File/User Data Folder
menu item.
The preferences data and viewport layouts are also stored in prefs14.dat
and layout14.ini files in this folder.
Note that the Application Data folder may be hidden by the Windows
Explorer; there is a Folder Option to make it visible.
Similarly, OS X now hides the Library folder; hold down ALT when clicking
on the Finder's Go menu to have the Library listed.

Viewport Layout Manager
With SynthEyes's flexible viewport layout manager, you can adjust the
viewports to match how you want to work. In the main display, you can adjust the
relative sizes of each viewport in an overall view, or create quick temporary
layouts by changing the panes in an existing layout, but with the viewport layout
manager, accessed through the Window menu, you can add whole new
configurations with different numbers and types of viewports.

To add a new viewport configuration, do the following. Open the manager,
and select an existing similar configuration in the drop-down list. Hit the Duplicate
button, and give your new configuration a name.
If you created a new Custom layout in the main user interface by
changing the panes, and you'd like to keep that layout for future use, you can
give it a name here, so that it is not overwritten by your next Custom layout
creation.

Tip: In the main user interface, the 7 key automatically selects a
layout called My Layout so you can reach it quickly if you use that
name.

Inside the view manager, you can resize the viewports as in the main
display, by dragging the borders (gutters). If you hold down shift while dragging a
border, you disconnect that section of the border from the other sections in the
same row or column. Try this on a quad viewport configuration and it will make
sense.
If you double-click a viewport, you can change its type. You can split a
viewport into two, either horizontally or vertically, by clicking in it and then the
appropriate button, or delete a viewport. After you delete a viewport, you should
usually rearrange the remaining viewports to avoid leaving a hole in your screen.
When you are done, you can hit OK to return to the main window and use
your new configuration. It will be available whenever you re-open the same
scene file.
If you wish to save a set of configurations as preferences, for use each time
you create a new SynthEyes file, reopen the Viewport Layout Manager, and click the
Save All button.
If you need to delete a configuration, you can do that. But you should not
delete the basic Camera, Perspective, etc layouts.
If you would like to return a scene file to your personal preferences, or
even back to the factory defaults, click the Reset/Reload button and you can
select which.

Script Bar Manager Reference
The Script Bar Manager manipulates small text files that describe script
bars, a type of toolbar that can quickly launch scripts or commands on the
main menu. You can start the script manager from the Scripts menu.

The selector at top left selects the script bar being edited; its file name is
shown immediately below the script name, with the list of script buttons listed
under that. Use the New button to create a new script bar; you will enter a name
for your script bar, then select a file name for it within your personal scripts folder.
You can also use the Save As button to duplicate a script with a new name, use
Chg. Name to change the human-readable name (not the file name), or you can
Delete a script bar, which will delete the script bar's file from disk (but not any of
the scripts).
Each button has a short name, shown in the list, in addition to the longer
full script name and file name, both of which are shown when an individual button
is selected in the list. You can double-click a button to change the short name,
use the Move Up and Move Down buttons to change the order, or click Remove
to remove a button from the script bar (this does NOT delete the script from disk).
To add a button to a script bar, select the script name or menu command
in the selector at bottom, then click the add button. You will be able to select or
adjust the short name as the button is added.
Once you have created a script bar, or even while you are working on it,
click Launch to open the script bar.
SynthEyes saves the position of script bars when it closes, and re-opens
all open script bars when it next starts. If you have changed monitor configurations, it
is possible for a script bar to be restored off-screen. If this should happen, click
the Find button and the script bar will appear right there.

Lens Information Files
Lens information files may be modified by hand, or written by Sizzle
scripts or other tools, using the information given here. They are XML files but have an
extension of .lni. They are automatically found by SynthEyes within the system
and user scripts folders.
Each file uses this general format:
<Lens title="Zeiss UP-10 @ Infinity" mm="1">
<Info>
<creator>c:\Viken\bin\scripts\Lens\zeiss1.szl</creator>
<date>Fri, Apr 03, 2009 10:57:30 PM</date>
<lin>0.058108</lin>
<sq>-0.128553</sq>
<cub>0.01099</cub>
<quar>-0.0002606</quar>
<maxAperture>20</maxAperture>
</Info>
<DATA>
<sample rin="0" rout="0"/>
<sample rin="0.425532" rout="0.425542"/>
<sample rin="0.851064" rout="0.850749"/>
<sample rin="1.2766" rout="1.27515"/>
<sample rin="1.70213" rout="1.69836"/>
<sample rin="2.12766" rout="2.12005"/>
</DATA>
</Lens>
The root tag must be Lens. The title attribute is the human-readable name
of the preset.

Important: XML tag and attribute names are case-sensitive, Lens is
not the same as LENS or lens. And quotes are required for attribute
values.

The Info block (like any other unrecognized block) is not read; here it is
used to record, in a standard way, details of the file's creation by a script.
After that comes a block of data with samples of the distortion table. They
must be sorted by rin and will be spline-interpolated. The last line above says
that pixels 2.128 mm from the center will be distorted to appear only 2.12 mm
from the lens center.
This is an absolute file with radii measured in millimeters from the optic
center: the optional mm attribute on the root Lens tag makes it absolute (1), by
default it is a relative file (mm=0).
Relative files measure radii in terms of a unit that goes from 0 at the
center of the image (when it is properly centered) vertically to 1.0 at the center of
the top (or bottom) edge of the image. Literally, that is the V coordinate of
trackers; it is resolution- and aspect-independent.
For both file types, there is the question of how far the table should go,
what the maximum value should be. For an absolute file, this is determined by
the maximum image size of the lens. For a relative file, the maximum value is
determined from the maximum aspect ratio: sqrt(max_aspect*max_aspect + 1).
This value is required as an input to relative-type lni generator scripts, but is
essentially arbitrary as long as it is large enough.
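For example (purely as an illustration), with a 16:9 image the maximum aspect ratio is
16/9, about 1.778, so the relative table needs to reach at least
sqrt(1.778*1.778 + 1), about 2.04; extending the table to 2.1 or so is ample.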
With some distortion profiles, if large-enough radii are fed in, the radius
will bend back on itself, so that increasing the radius decreases the distorted
radius. When this happens, SynthEyes ignores the superfluous remainder of the file
without reporting an error.
Two other tags can appear in the file at the top level with Info and DATA: a
tag BPW with value 8.1 would say that the recommended nominal back-plate
width is 8.1 mm, and a tag of FLEN with value 10.5 would say that the nominal
focal length is 10.5 mm. These values are presented for display only on the
image processor's Lens tab for the user's convenience, if the table was
generated for a specific camcorder model.
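As a purely illustrative sketch, not a file shipped with SynthEyes, a hand-written
relative-mode file following the format above might look like this. The distortion
numbers are invented (a mild barrel profile), and the value syntax shown for the
BPW and FLEN tags is an assumption, so compare against a script-generated file
before relying on it:
<Lens title="Example HD zoom (relative)" mm="0">
<Info>
<creator>hand-edited example</creator>
</Info>
<!-- BPW/FLEN value syntax below is assumed, not confirmed -->
<BPW>9.6</BPW>
<FLEN>35</FLEN>
<DATA>
<sample rin="0" rout="0"/>
<sample rin="0.5" rout="0.499"/>
<sample rin="1.0" rout="0.990"/>
<sample rin="1.5" rout="1.466"/>
<sample rin="2.1" rout="2.007"/>
</DATA>
</Lens>
The table runs just past the 2.04 maximum computed above, and each rout is slightly
smaller than its rin, as in the absolute example earlier.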

Window Layout File winlayout14.xml


The detailed layout of the SynthEyes window is controlled by the
winlayout14.xml file; it is responsible for placing each element within the
SynthEyes window, such as the time bar, control panels, top tab bar, content
windows, etc. As the file extension indicates, it is encoded as an XML file with
specified tag and attribute names. You should look at the file as an aid to
understanding this section.
Under the top Layouts tag, there are any number of potential SynthEyes
layouts, each with requirements for that layout to be used. The first layout where
all requirements are satisfied is used.
Within a layout, the available rectangular region, initially the entire interior
of the SynthEyes window, is successively subdivided into regions, and those
regions assigned to window elements.
Although the approach resembles that of HTML layout, it differs in that at
every stage of the recursive descent process, there is an immediately apparent
partitioning of the region, unlike an HTML table, where all the rows and columns
must be assessed before decisions can be made.
Here are some details of the XML elements:
<layout layoutcriteria hide-tags>...a layout node...</layout>

<window
name="toptab|toolbar|unredo|support|selector|panel|playbar|timebar|conte
nt|status"
align="fill|left|right|center"
valign="fill|top|bottom|center"
commonmargins
/>
Window fills the current region with the specified kind of SynthEyes element.

<split
side="left|right|top|bottom"
pixels="(pixels)"
frac="(0-100%)"
widthof="(window name)"
heightof="(window name)"
adder="(pixels)"
gutter="(pixels)"
swap="yes|no"
rightpan="yes|no"
commonmargins
>
Split is followed by two child nodes, a left or top, then a right or bottom. The
attributes of the split specify how the overall region is divided into two
subregions, which are then filled with the two child nodes.

<ifelse
layoutcriteria
commonmargins
>
Ifelse is followed by two nodes, one that is used if the criteria are satisfied, and one
that is used if they are not. The corresponding node is used to fill the entire region
handed to Ifelse.

<empty/>
The current region is left empty.

<tagged name="(a tag name)"/>


This region is filled from the XML tree with the given tag name. Any other XML
node can be assigned a tag name, which must be unique within the overall file.

layoutcriteria:
widerthan="(window name)"
higherthan="(window name)"
by="(margin in pixels)"
minwidth="(minimum width)" --- width of the current region
minheight="(minimum height)"
panel="one|none"
playbar="yes|no" --- whether a playbar is desired
timebar="yes|no" --- whether a time bar is desired
unredo="yes|no" --- whether an undo/redo/save bar is desired
pref="rightpan|toptime" --- whether preferences call for the panel on
the right, or time bar at the top, respectively
room="(room name)"

commonmargins:
margin="(pixels)" --- these decrease the incoming region
top_margin="(pixels)"
bottom_margin="(pixels)"
left_margin="(pixels)"
right_margin="(pixels)"
tag="(tag name)"
hide="(window name)"

hide-tags:
<hide name="(window name)"/>
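
As an aid to reading your own file, here is a minimal hypothetical layout assembled
only from the tags and attributes described above. It is not copied from a shipping
winlayout14.xml, so the root-tag spelling, the windows each layout must provide, and
the criteria actually required may differ; check it against the installed file before
editing anything.

<Layouts>
<layout minwidth="1400" pref="rightpan">
<!-- wide window: toolbar strip on top, status bar at bottom, panel on the right -->
<split side="top" heightof="toolbar">
<window name="toolbar" align="fill"/>
<split side="bottom" heightof="status">
<split side="right" widthof="panel" gutter="4">
<window name="content" align="fill" valign="fill"/>
<window name="panel" valign="top"/>
</split>
<window name="status" align="fill"/>
</split>
</split>
</layout>
<layout>
<!-- fallback with no requirements, so it always matches -->
<window name="content" align="fill" valign="fill"/>
</layout>
</Layouts>

In this sketch the first layout is used only when the window is at least 1400 pixels
wide and the preference calls for the panel on the right; otherwise the bare fallback
applies. The time bar, playbar, and other windows are omitted for brevity.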

Support
Technical support is available through support@ssontech.com. A
response should generally be received within 24 hours except on weekends.
SynthEyes is written, supported, and © 2003-2015 by Andersson
Technologies LLC.

Acknowledgements
SynthEyes is based in part on various open- and closed- source libraries
that facilitate communication between different applications. All of the
contributors' efforts are greatly appreciated.

Alembic
Alembic library Copyright (c) 2010 Sony Pictures Imageworks and
Industrial Light & Magic, All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Boost
Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or
organization obtaining a copy of the software and accompanying documentation
covered by this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
software, and to permit third-parties to whom the Software is furnished to do so,
all subject to the following:
The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer, must be
included in all copies of the Software, in whole or in part, and all derivative works
of the Software, unless such copies or derivative works are solely in the form of
machine-executable object code generated by a source language processor.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE
LIABLE FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.

DNG SDK
DNG SDK 1.4 and XMP SDK Copyright (c) 1999 - 2013, Adobe Systems
Incorporated. All rights reserved. Redistribution and use in source and binary
forms, with or without modification, are permitted provided that the following
conditions are met: * Redistributions of source code must retain the above
copyright notice, this list of conditions and the following disclaimer. *
Redistributions in binary form must reproduce the above copyright notice, this list
of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution. * Neither the name of Adobe Systems
Incorporated, nor the names of its contributors may be used to endorse or
promote products derived from this software without specific prior written
permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS
AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.

Easy EXIF
Copyright (c) 2010-2015 Mayank Lahiri. All rights reserved (BSD
License). Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer; and redistributions in binary form must
reproduce the above copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided with the
distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE FREEBSD PROJECT OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.

FBX SDK
This software contains Autodesk FBX code developed by Autodesk,
Inc. Copyright 2013 Autodesk, Inc. All rights reserved. Such code is provided as
is and Autodesk, Inc. disclaims any and all warranties, whether express or
implied, including without limitation the implied warranties of merchantability,
fitness for a particular purpose or non-infringement of third party rights. In no
event shall Autodesk, Inc. be liable for any direct, indirect, incidental, special,
exemplary, or consequential damages (including, but not limited to, procurement
of substitute goods or services; loss of use, data, or profits; or business
interruption) however caused and on any theory of liability, whether in contract,
strict liability, or tort (including negligence or otherwise) arising in any way out of
such code.

JPEG
This software is based in part on the work of the Independent JPEG
Group, http://www.ijg.org.

OpenEXR
OpenEXR library Copyright (c) 2004, Industrial Light & Magic, a division of
Lucasfilm Entertainment Company Ltd. Portions contributed and copyright held
by others as indicated. All rights reserved. Neither the name of Industrial Light &
Magic nor the names of any other contributors to this software may be used to
endorse or promote products derived from this software without specific prior
written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT
HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.

PNG
Based in part on the LibPNG library, Glenn Randers-Pehrson and various
contributing authors.

RED SDK
The R3D SDK and all included materials (including header files, libraries,
sample code & documentation) are Copyright (C) 2008-2015 RED Digital
Cinema. All rights reserved. All trademarks are the property of their respective
owners. This software was developed using KAKADU software.

TIFF
Based in part on the TIFF library, http://www.libtiff.org, Copyright 1988-1997
Sam Leffler, and Copyright 1991-1997 Silicon Graphics, Inc.

ZLIB
Copyright (C) 1995-2013 Jean-loup Gailly and Mark Adler
This software is provided 'as-is', without any express or implied warranty.
In no event will the authors be held liable for any damages arising from the use of
this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it freely, subject
to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software in a product,
an acknowledgment in the product documentation would be appreciated but is
not required.
2. Altered source versions must be plainly marked as such, and must not
be misrepresented as being the original software.
3. This notice may not be removed or altered from any source
distribution.
