
VICON 512 MANUAL

preface

Welcome to the world of Vicon, and to the manual that explains all you need to know about Vicon 512. Vicon 512 is the motion capture and analysis system designed and built for clinical and scientific use. In keeping with this blueprint, the manual has been developed for the user who wants accurate and reliable data for those purposes.


About the system


With the Vicon Wizard looking over your shoulder, we intend to bring you up to speed on using the Vicon 512 system as quickly and clearly as possible.

You will hopefully have direct access to the following pieces of equipment:

- Vicon Datastation (the big white box)
- Vicon Workstation software installed on your PC
- Up to 12 cameras with tripods or wall mounts
- Calibration equipment
- Retro-reflective markers
- Sufficient space in which to hold your capture session
- A subject who is ready to be captured

Don't despair if you don't have immediate access to the above, as you can still learn a great deal by running the Workstation software and looking at the example data we've provided with your installation CD.

About this tutorial


These User Guides have been developed to make life easy for you and to show you the enormous potential Vicon offers in capturing complex movement. The guides have been written to help you understand every aspect of Vicon motion capture and analysis. We've tried to write them in a clear and easy style - similar to the way in which we would have liked Vicon to be explained to us. We've also tried to include many of the questions we've asked the Vicon developers, engineers and other users along our path of discovery. We hope that we've pre-empted most, if not all, of your questions too. The final purpose of these guides is to demystify gait analysis and to remove some of the stuffy "white coat" feel that has existed until now. We believe that there should not be a culture of complexity in this area. We aim to present it for what it is - the most powerful tool available for generating accurate 3D human movement studies.


Acknowledgements
Big thanks go to all our enthusiastic subjects: Kate, Signe, Brent, Ruth, Mike, Jonathan, Alex, Matt, Dave, Richard, Warren, Katie, Helen and all the others who helped in the development process.

Revision History
Version 1.0 of the Vicon 512 manuals was created in January 1999.

Trademarks
Vicon, BodyBuilder, BodyLanguage and DYNACAL are registered trademarks of Oxford Metrics Ltd. Oxford Metrics and Vicon acknowledge all other trademarks.

Author
Adrian Woolard, Vicon Wizard

Editor for Biomechanics


Martin Lyster, Motion Capture and Gait Analysis Consultant

Contributors
Jonathan Attias, Nick Bolton, Warren Lester, Martin Lyster, Julian Morris, Brian Nilles, Paul Smyth, Paul Tate and Jack Daniels.

Design
Mark McClintock, BURNT (www.burnt.net)

Contact details
Oxford Metrics Ltd., 14 Minns Estate, West Way, Oxford OX2 0JB, UK. Tel +44 (0) 1865 261800, Fax +44 (0) 1865 240527

Vicon Motion Systems, 15455 Red Hill Ave. Suite C, Tustin, CA 92780, USA. Tel (714) 259-1232, Fax (714) 259-1509

www.vicon.com


contents

1.0 INTRODUCTION
    1.1 What's in the box, Wizard?
    1.2 The Layout of this Manual
    1.3 What's Not in the Manual (and where you can find it)?
    1.4 What's New in the World of Vicon?
    1.5 What Should I Know Before I Get Started?
    1.6 A Few Vicon Principles
    1.7 And Finally...

2.0 NEW USER
    2.1 What's This Guide All About?
    2.2 A Little Bit of Preparation Goes a Long Way
    2.3 Let's Unpack Your Vicon
    2.4 Setting Up Your Cameras
    2.5 Calibrating Your Capture Volume
    2.6 So Let's Go Capture
    2.7 Closing Words

3.0 ROUTINE USER
    3.1 What's This Guide All About?
    3.2 Putting Markers on a Real Person
    3.3 Capturing and Calibrating Your Subject
    3.4 Making Life Too Easy - Autolabeling
    3.5 Pipelining - Motion Capture Made Simple
    3.6 Reconstruction and AutoLabel Parameters
    3.7 After the Shoot: 3D Data Editing
    3.8 Closing Words

4.0 ADVANCED USER
    4.1 What's This Guide All About?
    4.2 Linearisation
    4.3 Let's Capture Movie Data
    4.4 How to Write a Vicon Plug-In
    4.5 Using Genlock and Timecode When Capturing With Vicon
    4.6 Capturing Using Sample Skip and the Remote Control
    4.7 How to Capture Multiple Subjects and Props
    4.8 Closing Points - is that it, Wiz?

5.0 REFERENCE
    5.1 Animation Pipeline
        5.1.1 Vicon 8
        5.1.2 BodyBuilder
        5.1.3 Mobius
        5.1.4 Exporting to your CG Package
    5.2 Calibration Theory
        5.2.1 Relative Pose
        5.2.2 3D Origin and Axes
        5.2.3 Summary
    5.3 Reconstruction Theory
    5.4 Autolabeling Theory
        5.4.1 Grouping Markers
        5.4.2 Subject Calibration
        5.4.3 Trial Statistics
        5.4.4 Progressive Loosening of Criteria
    5.5 File Types
        5.5.1 c3d File
        5.5.2 car File
        5.5.3 cp File
        5.5.4 cro File
        5.5.5 ini File
        5.5.6 lp File
        5.5.7 mkr File
        5.5.8 mpg File
        5.5.9 obd File
        5.5.10 ses File
        5.5.11 sp File
        5.5.12 tvd File
        5.5.13 usr File
        5.5.14 vad File
        5.5.15 wks File
        5.5.16 Acclaim File Format
        5.5.17 asf/ast File
        5.5.18 amc File
        5.5.19 csm File
    5.6 Glossary
    5.7 Troubleshooting
    5.8 Checklists
        5.8.1 Step by step review of the main processes of Vicon 8
        5.8.2 Typical values for reconstruction and autolabel parameters

INDEX


introduction

This section introduces the Vicon system to you. It covers the basics of what is included in this manual and where you can go for other information. It also covers all the new elements of Vicon 512 and introduces various terms that will be used throughout the manual.

1.1 What's in the box, Wizard?
1.2 The Layout
1.3 What's Not in the Manual?
1.4 What's New?
1.5 What Should I Know?
1.6 A Few Vicon Principles
1.7 And Finally...


1.1 What's in the box?


By purchasing Vicon, you have taken a big step towards becoming a motion capture and analysis expert. This tutorial is filled with all kinds of interesting information, from straightforward step by step guides to every aspect of Vicon, to loads of example data for you to look at and study. We've also included plenty of diagrams, screen shots and movie files, which are all designed to help you learn how to use Vicon to its potential and, hopefully, beyond. But before you dive straight in, we would like to explain the layout of this tutorial as well as other sources of information, and introduce a few concepts that will help you later.


1.2 The Layout of this Manual


This tutorial has been developed with several different types of user in mind and hence contains user guides for the New User, the Routine User and the Advanced User. There is also a Reference Guide at the back of this manual that contains various checklists, process information and the theory of the Vicon system.

Though we hope you will take the time to work through all of the guides at least once, we appreciate that when using Vicon regularly you'll only want to look in the book when you've got a problem. To this end, we've prepared a Troubleshooting section in the Reference Guide that's designed to discuss all the potential reasons for your dilemma and direct you to the correct sections of the manual. We've also included step by step checklists for all aspects of Vicon, from the practical set-up of your equipment through to cleaning up your own captured data and displaying your results. There is also a glossary in the Reference Guide to explain a few of the potentially confusing terms.

In the tutorial parts of this manual, we assume that the new user will almost always progress from the New User Guide to reading the Routine User Guide and, for that reason, the planning of capture sessions is covered in both guides. So if you're a new user and you're tempted to skip straight to the Routine User Guide, please be patient and work through from the start. Take the time to work through both Guides and you'll be running much better capture sessions.

New User Guide


Let's say you're new to Vicon and are wondering what all the fuss is about. Well, you'll want to read this guide, as we explain how to plan your capture session, install your hardware, set up your cameras for your capture volume, use DYNACAL to calibrate your volume, and then actually capture data and visualise it in the 3D workspace. If you don't have access to the Vicon hardware at present, then you may wish to skip the practicalities of setting up the Vicon system. We do recommend that you have a look at the sections on calibration and capture, as they will give you a vital understanding of the two key elements of the process.


Routine User Guide


If you're only ever going to be post-processing the data, you are unlikely to get close to a Datastation, so save time and skip the previous guide. This guide focuses on the processing of captured trial data. It contains sections on creating subjects, pipeline processing, Autolabeling and using the various editing tools for data clean-up. We do, however, hope that at some stage you will look at the sections on preparation and capture so you can appreciate the work that has gone on to provide you with all that lovely data on your workstation.

Advanced User Guide


This guide introduces some of the more advanced tools and techniques available when using Vicon. We will discuss in more detail some of the practicalities and tools covered in the New and Routine guides. We'll describe how to do synchronous movie capture. We've also included sections on how to deal with multiple subjects, how to perform facial or hand capture, and tips on capturing in large and irregular volumes. If you have been using Vicon for some time, then this guide will bring you up to speed on the new functionality of the system and provide tips on how to use Vicon to solve some of your more taxing problems.

Reference Guide
The Reference section is your resource centre - somewhere you can go when you need more information. It contains a number of key sections detailed below and other parts covering the theory behind the system operation.

Learning About the Pipeline This chapter is of particular use to those of you who
want to get to grips with visualising your data as a 3D model. We'll explain how you use BodyBuilder to generate your model outputs.

Troubleshooting We hope that once you've worked through the tutorial guides you will
feel confident of achieving good quality data in record time. However, we have found that, because of external circumstances, things don't always go as smoothly as normal. It may be that you've moved into another studio or you're working to tight deadlines, but something isn't behaving as you expect it should and you need to get a solution quickly.


We have included a troubleshooting guide as we're well aware of the many FAQs that our customer support team have addressed during the previous incarnations of Vicon. We also know that in the future you will only pick up this tutorial when you've got a problem or have forgotten something.

Checklists You should also have a quick look through one of the relevant checklists to
ensure that you haven't missed anything. Don't worry - we've been there and it's easy to forget some part of the whole procedure.

Glossary Finally, you will find a glossary which you can refer to as we introduce various
new subjects and terms.


1.3 What's not in this manual (and where you can find it)
Beyond this tutorial, Vicon provides the following sources of information: on-line help, the web site (www.vicon.com) and user group updates and gatherings. This tutorial doesn't discuss any of the error messages you may occasionally come across, as you will find all the help you need in the on-line help.

On-Line Help
One of the key ways in which we're trying to communicate better with you, the user, is with improved on-line help. On-line help has been redesigned to answer questions as quickly and clearly as possible and to point you in the right direction to find out more about your problem. See later for more information.

Example Data
You will find a number of directories on the installation CD which contain plenty of examples that you can work with to improve your Vicon skills. This example data was recorded and processed in our own studio in Oxford and is used to highlight the wide variety of moves you are able to capture with Vicon. The data is organised into three main sections (or users): New, Routine and Advanced. These relate to the similarly named guides within this tutorial. You will find cross-references throughout the text to trials or movies which you are encouraged to look at and play with.


1.4 What's new in the world of Vicon?


As you will know, Vicon has taken great steps to develop an improved system that's capable of capturing with up to 12 cameras, for up to 24 hours. But for the user, what can we show that's new? Though the principles of Vicon motion capture remain the same, behind the scenes we have made some significant improvements in the functionality of the system and the software.

The Pipeline Process


We've developed Vicon to remove the previous complexity of motion capture and turn it into a single stage process. Once you've calibrated your volume, you can automatically convert the raw video data into fully labelled 3D trajectories of your captured motion without any interaction.

True Dynamic Calibration


Vicon developed the original Dynamic Calibration (or DYNACAL) to give the user greater flexibility in calibrating the positions and orientations of cameras prior to capturing motion. We have now developed an enhanced version of DYNACAL which allows the user to calibrate any shape or size of volume without having to worry about the position of a static reference object or the relative positions of the cameras. Vicon 512 is the only system capable of capturing enormous or complicated volumes, such as an L shape, and it does so while maintaining the accuracy required to capture the nuances of motion.

Integrated Movie Capture


Vicon now has the capacity to allow synchronous movie capture every time you capture. This means that you can store an MPEG file for every capture for future reference.

Full Timecode Support


Vicon now includes full timecode facilities, so the Datastation can now be locked to a


house reference signal and time-stamp frame-accurate captures. Vicon supports both SMPTE and EBU standard time code sources in VITC and LTC signals. The Datastation can also provide a burn-in window on the reference video signal for a time code or frame count reference.

Genlock
Vicon can now genlock its cameras to an external video signal, so that the Vicon cameras are synchronised to the scan rates of existing video sources. This includes support for the automatic "uplock" from lower speed broadcast cameras to our higher speed motion capture cameras. The Datastation will automatically lock to NTSC, PAL and SECAM video signals with easy connection including both BNC and S-VHS support.

Sample Skip
Vicon now includes the ability to reduce the amount of data being captured by not storing all the frames received from the cameras. For example, the user can specify to miss out every other frame, thus down-sampling the data by a factor of two. This means you can capture data at the most appropriate rate for a particular move and save on processing time and disk space.
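The effect of sample skip can be pictured as simple down-sampling of the frame sequence. The sketch below is purely illustrative - the function name and the `skip` convention (keep one frame, discard the next `skip` frames) are our own assumptions, not the Datastation's actual implementation:

```python
def apply_sample_skip(frames, skip):
    """Keep one frame, then discard `skip` frames, repeatedly.

    skip=0 stores every frame; skip=1 stores every other frame,
    halving both the data rate and the disk space used.
    """
    return frames[::skip + 1]

captured = list(range(10))                    # ten frames from the cameras
stored = apply_sample_skip(captured, skip=1)  # miss out every other frame
print(stored)                                 # [0, 2, 4, 6, 8] - half the data
```

Note that down-sampling after capture (as suggested in the New User Guide) achieves the same result; sample skip simply saves the processing time and disk space up front.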

Plug In API
Vicon now offers the user a full plug-in interface, allowing you to add extra functionality to the motion capture pipeline. From exporting data in your own format to filtering data, you now have access to manipulate the data in far more ways. We will provide you with a full software developer's kit, including header files, documentation and examples.

An Improved User Interface


We've updated our interface with a new time bar and directories. You will now have greater and easier control in displaying and manipulating your data with Vicon.


Improved Online Help


You will find that Vicon now provides a context based help facility to give you on the spot information. To activate this tool simply select Help | Context Mode Help and click the mouse on the object that interests you.

Better Guidance
We hope that you will find these user guides a valuable tool in helping you to get to grips with Vicon and all that it offers. In the near future we will also be providing further guides on the following: multiple subjects, hand & facial capture, and capturing large or irregular volumes.


1.5 So what should I know before I get started?


Do I Need To Know How To Use Windows?
Yes, we are going to assume that you have a basic knowledge of running applications in a Windows NT environment.

Do I Need To Know How To Do Motion Capture?

No. If you don't know anything about motion capture, don't worry, as we're here to explain it all.

Do I Need To Know How To Use The Tutorial?

Our installation engineers have had a big say in the development of this material, and it can be used as a training manual if you weren't around when Vicon was first installed. We believe that you can consider the New and Routine User Guides as the equivalent of two full-day training sessions. There is, however, a lot of information to understand in these guides, so if you have more time available then please use it. We pride ourselves on our customer support and will endeavour to solve your problems as and when they arise, but we do hope that these guides will provide a deep resource of information.



1.6 A Few Vicon Principles


You are probably keen to get on with your first project, but we would just like to introduce a number of ideas, names and principles that you will come across frequently in your future Vicon life. Learning the Vicon way will make everything so much smoother.

Datastation and Workstation


Here at Vicon, we distinguish between the purpose-built hardware which captures the camera data, and the software used to control the system. The Datastation is the box with the white front panel that performs the actual capture. It has the capacity to convert and buffer video data from up to 12 cameras for indefinite periods. All the cameras are connected via the Break Out Boxes (BOBs) to the Datastation. The Workstation is the PC which controls the Datastation, and also the software which provides a wealth of tools to process your captured data. The majority of what you learn in this tutorial is contained within the Workstation, so you need only worry about switching the Datastation on or off. All control and data communication between the two takes place via a shareable high speed Ethernet based Local Area Network (LAN).

Username and Directories


You will be handling large amounts of data when you capture. We help you organise your data by getting you to define a username (which is password protected if you wish). This creates a directory of this name where all your subsequent captures will be stored. Your entire capture history can be kept here or, alternatively, you can use different usernames for your different projects. The choice is yours. You'll find your directories automatically created in \Vicon\Userdata\Username\.. but you can define your own preference if you wish.

Sessions
Within each user directory, we create sub-directories called sessions (typically called 0000X); into each session will go your captured data. By splitting everything into sessions, the data becomes far more manageable and easier to access. Every session


contains not only the data captured but also all the information on the cameras, calibration and the specific parameters used in processing the data. We have designed the session so that everything necessary to reproduce what happened in the studio that day is in one place. This makes it more convenient for archiving, and if the Vicon support team asks you for some sample data to explain your question or problem, you can zip the contents of the session folder and e-mail or FTP it.

Trials and Frames


A trial is what we call a data capture and the results that follow it. Within each trial, you can save any of the following: raw video data, fully processed 3D data, accompanying MPEG movie data and analog data. A trial consists of a time-sequence of samples of marker motion. Each sample is called a frame (or sometimes a field). Depending on the type of cameras you are using, you will be able to capture data at 50Hz (i.e. 50 frames per second), 60Hz, 120Hz or 240Hz. If you've got 120Hz cameras and you capture for 5 seconds, then you will end up with 600 frames of data.
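The arithmetic behind that last example is simply capture rate multiplied by duration. A quick sketch (the function name is ours, not part of the Workstation):

```python
def frame_count(rate_hz, duration_s):
    """Number of frames produced by a capture at `rate_hz` lasting `duration_s` seconds."""
    return int(rate_hz * duration_s)

print(frame_count(120, 5))   # 600 frames, as in the example above
print(frame_count(50, 5))    # 250 frames with 50Hz cameras
```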

Subjects and Markers


Subject is the term we use to describe your patient, performer, model, robot or whatever it is you are measuring. We also extend this idea to refer to any captured objects or props, such as clubs or bats, as subjects. The spheres coated in retro-reflective tape are known as markers. We use them as visual reference points on different parts (or segments) of the subject's body, and Vicon is designed to track and reconstruct them in 3D space. Where necessary, we will differentiate between real and virtual (simulated within Vicon software) markers.

TVD Data, Video and Live Monitor


When a marker is seen by one of the camera/strobe units, it appears in the camera view as a set of highly illuminated pixels compared with the background. At the time of capture, the Datastation generates and then uploads the raw edge co-ordinates of all the markers in each camera view, which are stored on the Workstation in a file format known as TVD. We use the Video Monitor window in the Workstation to view this data.


Prior to capture, you will want to visualise what the cameras can see in real time. This is achieved by using the Live Monitor, which allows you to assess the quality of the data in each view and make any necessary adjustments.

Capture Volume and Calibration


When we want to capture a subject, we position the cameras around a volume of space to ensure that their actions are visible in the different camera views. This is known as the capture volume. Before we can measure any of our subject's movements, we need to calibrate this volume. Vicon uses a dynamic approach which automatically calculates the parameters of the cameras from the capture of data from the two markers on a wand. This is known as DYNACAL.

Reconstruction, Trajectories and 3D Workspace


From the 2D TVD images, and using the camera parameters calculated by DYNACAL, the Workstation can reconstruct the 3D location of each marker at each frame. Vicon then links the correct locations of each marker together to form continuous trajectories. These trajectories describe the paths that each marker has taken through the capture; hence they represent how the subject has moved over time. Vicon displays these trajectories in a 3D user interface, or workspace, which allows the user to visualise the captured data. You will become very familiar with the workspace.
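One way to picture how reconstructed 3D points are linked into trajectories is a nearest-neighbour association between consecutive frames: each point in one frame is paired with the closest point in the next, provided the jump is plausibly small. The sketch below is a deliberately naive illustration of that idea - the names, units and threshold are our own assumptions, not Vicon's actual reconstruction algorithm:

```python
import math

def link_frames(prev_points, next_points, max_jump=50.0):
    """Pair each point in `prev_points` with its nearest point in `next_points`.

    Points are (x, y, z) tuples in millimetres; candidates further away than
    `max_jump` are ignored, so a fast-moving or vanished marker is left
    unlinked and its trajectory breaks at that frame.
    """
    links = {}
    for i, p in enumerate(prev_points):
        best_j, best_d = None, max_jump
        for j, q in enumerate(next_points):
            d = math.dist(p, q)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            links[i] = best_j
    return links

# Two markers whose order happens to swap between frames:
frame_a = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
frame_b = [(102.0, 1.0, 0.0), (1.0, 0.5, 0.0)]
print(link_frames(frame_a, frame_b))   # {0: 1, 1: 0}
```

Repeating this pairing over every consecutive pair of frames chains the per-frame 3D points into the continuous trajectories described above.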

Labeling and C3D Files


The numerous trajectories generated in reconstruction are meaningless until they have been identified. This labeling process can be carried out in a number of ways. The most time-efficient way is to let Vicon do the hard work and automatically label your trajectories. The Autolabeler looks for consistent spatial distances between markers, based on measurements derived from a single initial capture of your subject(s). Where necessary, you can manually label the trajectories in the 3D workspace, which can then be used to assist or kick-start any subsequent autolabeling.
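The idea of "consistent spatial distances" can be illustrated with a toy check: two markers attached to the same rigid segment keep a near-constant separation from frame to frame, while unrelated markers do not. This is only a sketch of the principle - the function, the example data and the tolerance are our own, not Vicon's Autolabel parameters:

```python
import math

def distance_is_consistent(traj_a, traj_b, tolerance_mm=5.0):
    """Return True if two marker trajectories stay a near-constant distance apart.

    traj_a and traj_b are lists of (x, y, z) positions in millimetres,
    one entry per frame.
    """
    dists = [math.dist(p, q) for p, q in zip(traj_a, traj_b)]
    return max(dists) - min(dists) <= tolerance_mm

# Two markers on the same segment: their separation stays close to 100 mm.
knee = [(0.0, 0.0, 400.0), (10.0, 2.0, 398.0)]
ankle = [(0.0, 0.0, 300.0), (10.5, 2.0, 298.5)]
print(distance_is_consistent(knee, ankle))   # True
```

A labeler built on this principle can group trajectories into body segments by finding which pairs pass the consistency test, then match those groups against the distances measured in the initial subject capture.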


1.7 And finally...


Just a quick mention of a couple of conventions used in the Vicon literature. Occasionally, you'll come across blocks of text set off in their own box which are intended to add extra detail to the subject currently under discussion. We call them Words of Wisdom; we hope you find them useful. You will find sections like this at various points in the manual which provide tips and tricks for getting the most out of your system.

THIS IS A WARNING. YOU WILL SEE THIS TYPE OF MESSAGE WHEN THERE IS THE POTENTIAL THAT, IF YOU WERE TO TAKE AN ALTERNATIVE SET OF ACTIONS, VICON MAY FAIL TO DELIVER SATISFACTORY RESULTS.

Just remember, we're here to hold your hand. All the information you need to be a good (no, a great) motion capture wizard is close at hand, so let's begin our journey of discovery.


new user

The hands-on guide to Vicon 512 for the New User. These pages will take you from being a novice to being able to capture data. You will learn about the hardware, from the cameras to the cables. We will cover how to take all of this equipment and set it up in your studio. You will then learn how to calibrate the system with DYNACAL and, finally, how to capture great-looking motion data.

2.1 What's This Guide All About?
2.2 A Little Bit of Preparation
2.3 Let's Unpack Your Vicon
2.4 Setting Up Your Cameras
2.5 Calibrating Your Capture Volume
2.6 Let's Go Capture
2.7 Closing Words


2.1 What's this guide all about?


From the Introduction, you will now be aware of what Vicon is, why you are going to use it and where to find the answers to all your questions. You should also feel comfortable with the various Vicon principles, but don't worry, as we will explain these in more detail as we progress. Not surprisingly, this guide is aimed at the first-time user, but there is still a lot of information that the more advanced user will find of interest - so stick around. This guide is linked explicitly to you putting together your own capture session. We aim to take you through all the fundamental tasks and tools that Vicon has to enable the capture of many funky moves. We have also provided a number of example sessions related to the sections of this guide. They are found in \Vicon\Userdata\Wizard\New\..

So Where Should You Start?

Section 2.2 discusses the importance of preparation and planning. A little bit of planning will save you time and money and quite possibly spare you a lot of frustration. Follow our advice and things should go extraordinarily well.

Section 2.3 will help you unpack your Vicon system. We will hold your hand as you unpack the cases, switch things on and ensure that you've got the correct physical and software settings.

Section 2.4 will give you advice on positioning your cameras to help define your capture volume.

Section 2.5 discusses how to calibrate your cameras, and hence your capture volume, using DYNACAL. We will explain why it is so important and give you hints on getting good results.

Section 2.6 will let you go and capture moving things. Hurrah! We will also introduce
you to the 3D Workspace which youll be seeing a lot more of in the future. The final section will briefly review what you will have learned in this guide and point you in the direction of the next user guide. If you dont have access to the Vicon hardware at present we suggest you still work through this guide as it will prove very useful in the future. Finally, we know that quite a few of you will be familiar with Vicon so the majority of 18


this guide will be old news. There are, however, a few new elements worth looking at, such as the new Datastation, the enhanced DYNACAL and the improved time bar. It is also worth flicking through this guide to check out the numerous helpful hints given out by the Vicon Wizard.


2.2 A little bit of preparation goes a long way


OK, we know you don't want to plan in advance. We know you just want to get in the studio, switch the equipment on and start capturing. Yes, you can do that and, yes, you will get results, but they won't necessarily be the best you could achieve. If you want high quality data in the shortest possible time, skipping the planning may leave you with problems - the types of problems which are easily avoided if you just do a little preparation in advance.

What Questions Should You Ask?


We believe that you should be asking yourself the following questions before you start unpacking your cases and clicking buttons on your PC. But as you are a New User, we're going to take on the role of your teacher and answer them for you. We'll also discuss these questions in more detail in the Advanced User Guide.

Q1. Who / what is your subject?
Q2. What type of moves are going to be captured?
Q3. What size of capture space is required / available?
Q4. How fast does the capture rate need to be?
Q5. How is the data going to be used?

And Here Are The Answers.


The answers given here cover both the New and Routine User Guides. We have assumed that you will work through both Guides as you develop your Vicon skills.

Q1. Who / what is your subject?


A1. For the most part, we expect our subject to be human, but that doesn't mean that Vicon can only capture people. After all, Vicon is just a highly accurate 3D measurement system. To see that this is true, we want you to start off capturing loose markers and your calibration wand and L-Frame. In the following Routine User Guide, we will assume we are capturing a human subject doing some fairly standard movements.


Q2. What type of moves are going to be captured?


A2. In the first instance, we want you to capture someone throwing markers around the capture space, waving the wand in and around the volume, as well as moving the L-Frame. This will hopefully give you a real impression of Vicon as a 3D measurement system. In the Routine Guide, we will want you to capture your human subject performing a series of walks through the volume with different waves in the middle. We'll also want you to capture various runs, jumps and even some dancing. You'll find a move sheet included at the end of this tutorial and on the installation CD with a prepared list of the moves to be captured during the various guides. Feel free to add your own extra moves to the sheets.

Q3. What size of capture space is required / available?


A3. You're going to capture a couple of steps, a wave and then another couple of steps, so let's allocate a floor space about 3m or 4m long by 1.5m to 2m wide, with a height of 2.5m (i.e. the volume). None of the actions will be particularly low to the floor or high in the air, so we can use a set of default camera locations. We'll discuss later where exactly to place your cameras, which also depends very much on the space available.

Q4. How fast does the capture rate need to be?


A4. You're not going to be capturing anything too fast, but we suggest you leave your capture rate at the maximum speed of your cameras. For some cameras this will be 120Hz; for others it will be 50Hz or 60Hz (check your camera documentation for more information or call Vicon). Note that you can always throw out some of this data after it's been processed - it is always better to have captured it even if you don't use it.

Q5. How is the data going to be used?


A5. Depending on which of the Vicon applications you have available, you will be able to see the data at any of the various stages described in the pipeline. There are no specific requirements if you're only interested in Vicon. Our intention is to take this data all the way through the pipeline from Vicon 512 to BodyBuilder or Polygon. You could, of course, also import the data into 3DSMax,

new user

2.2

a little bit of preparation

Maya, Softimage and others. Check out the pipeline section of this manual for more information.


2.3 Let's unpack your Vicon


From the last section, you will hopefully appreciate that preparation and planning are important in making life a little easier. So, let's get things set up. If you are lucky and you have your Vicon sitting there ready and waiting, with a link established between the Datastation and the Vicon Workstation, then you can skip this section. Remember, you can get a quick guide to setting up your Vicon in the checklists section at the end of this tutorial. If, however, you have three or four imposing cases in front of you, fear not, as it is reasonably straightforward and we'll have you up and running in no time. We're going to take you through a description of the pieces of equipment and how to connect them together, making certain that the various software settings are correct. Let's consider the equipment list and set-up mentioned in the Introduction. If we look at the following diagram, it is fairly obvious how everything fits together. In this section we'll briefly discuss the main elements: Datastation, cameras, cables, network and Workstation software. The calibration apparatus and markers will be discussed in later sections. [refer to sections 2.5 and 3.2 respectively].

[Block diagram of the Vicon system - Workstation to network hub to Datastation to breakout box to cameras around a volume defined by markers with L-Frame in centre. Labels: WORKSTATION PC, HUB, VICON 8 DATASTATION, CAMERA, TRIPOD, POWER, BREAK OUT BOX (BOB), CAPTURE VOLUME, DYNACAL CALIBRATION L-FRAME, MARKERS.]


The Datastation.
The Datastation is the name given to the white case with the stylish front panel shown in the following pictures.

[Back and front views of the Datastation. Labels: TWO-LINE DISPLAY WINDOW, POWER SWITCH, POWER SUPPLY, CAMERA CONNECTORS, ETHERNET CONNECTOR, ANALOG CONNECTOR.]

The Datastation contains purpose-built hardware, designed and developed by Vicon, which automatically converts and buffers raw video data from up to 12 cameras for indefinite periods. To switch on the Datastation, simply press the ON button in the bottom right corner. You will have to wait a couple of minutes while the system's internal software loads, so please be patient. By now you should have your Datastation powered up and running with AWAITING CONNECTION on the display. If your Datastation doesn't say this, then have a look at the Troubleshooting section to find out more.

The Cameras.
It is likely that you are working with one of two types of camera. These are either standard interlaced 50Hz or 60Hz cameras (V490512 or V493512) or the 120Hz progressive scan/240Hz camera (V4240512). "Progressive scan" means that the lines are not interlaced - this is a camera design feature which has no effect on the way you use the system. For those of you who don't have a camera that matches this specification, please refer to the relevant section of the reference manual to identify your camera, or simply call Vicon.

Given our interest in capturing at the maximum resolution, we will restrict our discussion to a camera configured as 120Hz progressive scan, or a 50/60Hz interlaced camera with no speed adjustment. The camera unit comprises a strobe and a camera held together on the same mounting and synchronized to the same frequency. Each unit is shown in the following diagram. The 50/60Hz interlaced camera type uses infra-red (invisible) strobes, and the 120Hz/240Hz camera type uses visible red strobes.
[View of V8 240 progressive scan camera. Labels: CAMERA POWER FROM STROBE, VIDEO OUT, STROBE SYNC/POWER.]
At this stage it is not necessary to go into any detail about the reasons behind the physical settings of the cameras. For those of you interested in knowing more about cameras, please refer to the reference manual. You should, however, have the same settings as those shown in the picture above, and the cables should be connected to the same sockets. The green LED at the rear of the strobe will light when you have power connected from the Break Out Box (BOB). The two settings worth checking on your camera prior to positioning are: the focus is set at infinity (∞) - this is very important as it can accidentally be changed during camera handling - and the aperture, or f-number, is set to a typical average value of f4.

Setting the f-number at f4 should allow you to see something in the live monitor window, even if it is not the best setting for your lab.

The Break Out Box and Cables.


In the re-design of the Vicon cabling system, we now bring you a far more robust solution that will remove a lot of the old frustrations. The new cables now deliver power to the camera and strobe units, as well as transmitting the sync signals and returning the raw video data to the Datastation, all within one cable.
The Camera Interface Unit or Break Out Box (BOB).

The Network.
Vicon 512 uses a 10/100Mbit Ethernet card and the TCP/IP protocols. Detailed instructions for obtaining a reliable network connection are given below. If you fail to establish a link between the Datastation and the Workstation, the best bet is to refer to the Troubleshooting Guide in the Reference section of the manual, or to call your network manager.

The Vicon Workstation.


As stated in the Introduction, the word "Workstation" refers both to the computer itself and to the user interface software installed on your PC, which runs under Microsoft Windows NT. Normally, the software will be installed at the time the system is set up, but if you have to do this yourself, put the CD into the drive and double-click the program "Setup". An automatic installation utility guides you through the procedure, which takes about five minutes. By default, Workstation installation creates a directory C:\VICON into which all our sub-directories are placed. You have the opportunity to choose a different directory during the installation routine, if you wish. By default, a directory C:\VICON\USERDATA or C:\VICON\UD will be created, in which the data you create will be placed. Some users prefer to move this directory up one level in order to keep path lengths short, or to put the data on a different disk - this does not cause any problems.

REMEMBER: YOUR WORKSTATION SOFTWARE WILL NOT RUN IF YOU DO NOT HAVE A VALID DONGLE (HARDWARE KEY) PLUGGED INTO THE PARALLEL PORT OF YOUR PC. NEED ANOTHER DONGLE? CONTACT VICON TO SORT IT OUT. THERE ARE TWO TYPES OF DONGLE: WHITE AND BLUE. WHITE ONES ARE PERMANENT KEYS WITH NO EXPIRY DATE. BLUE ONES HAVE AN EXPIRY DATE. YOU CAN FIND OUT HOW MUCH TIME YOU HAVE LEFT ON A BLUE DONGLE BY STARTING THE SOFTWARE AND CLICKING HELP|ABOUT. IF YOU NEED MORE TIME, CALL VICON.

Start the Workstation by double clicking on the desktop icon or starting Vicon from the Start-Programs-Vicon menu. A dialog box will now appear which is headed Vicon Login. We're keen to offer you the option of a level of security if you need it. Hence, you need to enter a user name and, optionally, a password before you can have access to the Workstation and your data. Please note that the password protection is not completely secure, and if you need proper data security, you should use the security features of Windows NT to protect sensitive files or directories. For most users, it's simpler to leave the password blank.


For the purposes of this user guide, you should log on with the following user name: NEW, with no password. In doing so, Vicon will access the existing sessions captured in our studio and automatically display the session directory window within the main window. We'll discuss this window in the next sub-section.
[Vicon Workstation Display. Labels: MENUS, TOOLBAR, WORKSPACE, CURRENT SUBJECT, VIDEO MONITOR, ATTACHED MARKER SET, MOVIE WINDOW, DIRECTORY WINDOW, TIMEBAR, STATUS BAR, PLAYBACK CONTROLS.]

You now need to establish the network link between the Datastation and the Workstation. You can do this in one of the following three ways from the Workstation menus: System|Start Link, System|Live Monitors, or Trial|Capture.

In this instance, select System|Live Monitors. After a second or two you will hear a ping sound, the message Link established with Datastation will appear in the status bar at the bottom left of the screen, and a Live Monitor window will appear. Again, we'll discuss this window shortly. The Workstation will also start a link if you decide to Linearise or Calibrate. We will explain what these terms mean later on.


It's the first time my PC has been used with Vicon, how do I configure it to link up?
This is a task normally carried out at first installation, but if you need to set up a new PC, you will have to follow the instructions for networking in the reference section of the manual. Generally, you should only attempt network configuration if you are confident doing so, or at least have access to help from someone who is confident in this area. If in doubt, call Vicon support.

How Do You Know The System Is Connected Correctly?


We will briefly mention some of the system indicators which let you know that everything is connected correctly. There is a set of LED indicators located on each element of the system which you can check to ensure that everything is cool. On the Datastation, you will see the message AWAITING CONNECTION until the link is established with the Workstation, whereupon the message will change to CONNECTED. On the Break Out Box, there is a single LED which, if lit, indicates that the box is receiving sufficient power from the Datastation. If it is not lit, then check the cable between the box and the Datastation. On each strobe unit, a green LED at the rear of the unit will be illuminated if it is receiving power from the Break Out Box. The strobes themselves will not illuminate (and display the red ring) until you have opened the Live Monitor or you have set the system to capture using Trial|Capture. Switch the camera units on for at least 15 minutes prior to capture to allow the cameras to warm up. You can do this by opening the live monitor, which ensures that the strobes are illuminated.

Remember to put out some markers in front of your cameras so you can actually see something in the Live Monitor window.

With the link established and the cameras warming up, let's have a look at some of the data we've supplied on the installation CD and get an idea of what you're going to capture. We want you to feel comfortable with the different elements and tools available in the Workstation software, as you're going to be seeing a lot of them in the near future.


Let's Introduce The Directory Window.


On starting, Workstation will automatically open the Data Directory window. If the window isn't open, then select Window|New Directory. By double clicking on a session name, number or its description, you will set that particular session as the current one. This means that your calibration and subsequent capture data will be stored within this session. You can also enter a description of your session, say My first capture, in the dialog box at the bottom of the window. You may also wish to add comments about the session, like the name of the subject, the number of cameras, or which project it's going to be used on. You can add and edit these notes at any point. The current session is indicated by a message in the bottom right corner, as well as by the session details being highlighted in gray and a circular symbol displayed within the directory icon. For user NEW, you should see the following view on your screen.

Data Directory Window.

The directory window displays all the different data trials that have been captured and processed by the Workstation. There are four basic types of data stored by Vicon.


Video icon - TVD file containing raw video data from the cameras.
Analog icon - VAD file containing analog data.
Movie icon - MPEG movie data captured.
C3D icon - C3D file containing the processed 3D data.
By clicking on any of the above icons or the trial number or description, you will load the captured trial into the Workstation. If you, say, double click on the TVD icon, Vicon will open the Video Monitor window and display the captured data.

Let's Talk About The Video Window.


When the Video window opens, it should look like the following image.
Video Monitor Window.

If you are looking at the 3D Workspace (a black screen with a grid in the middle), then select Window|New Video Monitor from the menu.

What is this view? You are looking at the raw video data that has been captured by the cameras, in the form of marker centroids. You can select the camera view of your choice by either clicking on the specific camera number or by browsing up and down the list using the arrows in the bottom left corner. The Video window can display either recorded TVD data (Video Monitor) or provide a real time display of the camera views (Live Monitor). By holding <SHIFT> and clicking on a different camera number to the one currently selected, you can display all the camera views between the two selected at the same time. By holding <CTRL> and clicking on a different camera number to the one currently selected, you can display just the two selected camera views at the same time.

At the bottom of the screen you can see the new, improved time bar. We'll discuss this in the next section, but for now you should know that by hitting the PLAY arrow you will start playing the data you've captured. To you, this will appear as a set of blobs moving around and in/out of the monitor views. The camera identifier and its usable area (the square shown in each camera view) are defined by the camera's linearisation and video type. They are displayed by selecting View|Usable Area. We will discuss these, and the use of diagnostic mode, calibration marker pairs and detected and recognized markers, in the next section on setting up your cameras. [refer to section 2.4].

Moving In The 3D Workspace.


From the raw video data, Vicon will reconstruct the 3D trajectories of the markers. [refer to section 2.6]. For the time being, have a look at some example trials we prepared earlier. We can visualize our processed data by opening the 3D workspace window, as shown below. You can open the workspace either by clicking on the C3D icon in the directory window or by selecting Window|New Workspace. There are a number of key elements we'd quickly like to mention here to make your life easier when moving around the space. Again, we'll discuss the workspace in more detail later. [refer to section 3.7].

The 3D Workspace Window.

You can see the floor represented as a 2D grid with a set of axes indicating the orientation of your measurement space. This has been defined by the L-Frame in the calibration for this studio session. You should also see the captured markers represented by white spheres, with short lines passing through their centers representing their trajectory path. You can navigate around the workspace using your mouse. If you hold the left button down, the viewpoint will rotate at a fixed distance about the center as you move the mouse. The rotation center is shown by a purple diamond in the window. Holding the right button down and moving the mouse up and down lets you zoom in and out of the space. Holding both buttons down lets you re-position the center of your view in the current plane of view. Have a go at moving around your space and become familiar with the different ways of altering your viewpoint. When the workspace is opened, the default location of the center is the origin defined by the L-Frame in calibration.

When you select a trajectory, you can set that location to be the center by going to Workspace|Select Centre or by using the keyboard shortcut <C>. This is of great use in editing trajectories, which we'll discuss later. [refer to section 3.7].


Your visible reconstruction volume is also a useful visual aid as it displays the volume you have set to reconstruct within. The default settings should be sufficient for your first captured trials in this guide. You can show it by selecting either View|Reconstruction Volume or the icon on the toolbar.

What's The New Time Bar?


Vicon Workstation has an improved playback time bar to give you greater flexibility when viewing your data. The following list highlights the key features of the new time bar.
New Timebar

Playback at variable rates. You can set your playback rate by adjusting the slider to any rate up to +/- 4.0. To return to the default rate of 1.0, double click on the slider. Playback will automatically loop your captured sequence. IF VIEW|REAL TIME PLAYBACK IS SWITCHED OFF, THEN YOU WILL FIND THE VARIABLE PLAYBACK DISABLED.

Directly manipulating the slider. If you click and hold the left button of your mouse whilst the cursor is over the play slider, then you can directly manipulate the slider. Users upgrading from earlier versions will be pleased to know that you no longer have to move the cursor along the time bar - you can move your mouse anywhere.

Go to Start/End. This will automatically jump you to the first/last frames of playback data. There are quick keys on <home>/<end>.

Step Through Frame by Frame. This lets you move slowly through your captured data, which is useful when editing your data. With the workspace selected, the <left/right arrow> quick keys will step through 1 frame at a time, the <up/down arrow> keys step through 4 frames at a time, and <page up/down> will step by 10 at a time. You can also click and release the left mouse button at any frame on the time bar to move instantly to that specific frame.

Shuttle Mode. Select the shuttle key, then click and hold the shuttle slider. It works in the same way as your video player: by moving the slider away from 0.0 you will play at that rate. Note that it will not loop back to the start when you reach the end. This is particularly useful in long trials.

Zooming In on the Time Bar. Click and hold with the right button and move right to zoom in and left to zoom out. If you click and hold with the left button you can move the timebar up and down. Remember that clicking and releasing the left button will select the current frame.

Setting Play Range. What you'll find is that you may well capture data at the start and end of your trial which is of no interest to you. Workstation now allows you to set variable limits on your play range. This is also of use when editing data and you wish to replay only a small section of your captured data. Double clicking on the play range cursors will toggle the ranges between the first and last frames and the user defined frames.

Setting Save Range. This allows you to graphically define the region of frames that you want to save for subsequent use. Again, you may well have data at the start and end of your trial which is of no interest to you. The regions which you want to discard are indicated by the purple region on the time bar. Double clicking on the save range cursors will toggle the range between the first and last frames and the user defined frames. This is the same as selecting Trial|Trajectory Save Range... and then manually entering the frame numbers.

Have a play with these different tools on the data we've included, and make sure you feel comfortable using the workspace to visualize captured and reconstructed data. We'll discuss trajectories in a little more detail later on, when you've captured a few of your own trials. [refer to section 2.6].

Didn't You Do Well?


You have now seen how everything fits together in your Vicon system and are getting closer to performing some captures. You should feel more comfortable using the Workstation software and should have an idea of how to spot when something is amiss, which, we are pleased to say, is a rare occurrence.


You should know how to open the Live Monitor and select different camera views with different options. You should also be capable of moving around the 3D workspace to visualize the reconstructed data and know how to manipulate the data in different ways using the time bar. The next section is going to involve a little bit of exertion as you are going to move tripods around to arrange your cameras around your capture volume.


2.4 Setting up your cameras


In this section we will discuss creating an arrangement of cameras to capture moves within the desired volume, and get you to the point where calibration and capture are possible. As we stated in section 2.2, the amount of space you have available in your capture studio will dictate the size of your capture volume. We've defined a default volume of 3m to 4m long by 1.5m to 2m wide, with a height of 2.5m, and we'll assume that you have 9mm lenses attached to your cameras. Finally, we'll assume that you have a minimum of 5 cameras connected to your Datastation. The placement of the cameras is vital in obtaining accurate reconstruction over the whole of your volume. The main factors in selecting a suitable location for each camera are marker visibility and the kind of motion you plan to capture. We will assume a fairly general kind of motion is to be captured, requiring all-round visibility. The cameras are best placed on high tripods or fixed to the walls above the capture volume. There is no problem with camera strobes being visible in other views, as Vicon can and will calibrate successfully. What can happen is that as the markers pass in front of the visible strobes, they will appear to be swamped in the camera image. This is not a disaster, but if you can avoid this occurrence you'll have fewer problems in subsequent stages. This is the reason that we suggest, to begin with, putting your cameras up at head height or above.

Cameras arranged around the default volume (marked out with tape) in our Oxford studio.


Each camera will have a particular angular field of view, which depends on the focal length of the lens fitted. Normally a suitable combination of lenses is fitted at the time of delivery, taking into account the size of the room. A short focal length (e.g. 6mm) has a wider angular field of view than a longer focal length (e.g. 12.5mm). We regard 9mm lenses as standard issue. A camera with a shorter focal length lens will have to be placed closer to the capture volume than one with a longer lens. The effective range at which a camera with a shorter focal length lens can see a marker is less than that of a longer lens. The ability of a camera to see markers in the capture volume is also affected by the power of the strobe, the aperture setting of the lens, the user defined threshold value and the reflectivity of the marker. Given these issues, let's have a look at the series of steps we always undertake when setting up a volume.
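The trade-off between focal length and field of view is simple lens geometry, which can be sketched as follows. The 4.8mm sensor width used here is an assumed illustrative value for a typical small CCD, not a Vicon camera specification - check your camera documentation for the real figures.

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=4.8):
    """Approximate horizontal angular field of view of a lens.
    sensor_width_mm is an assumed illustrative value, not a
    Vicon specification."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Shorter focal length -> wider angular field of view:
for f in (6.0, 9.0, 12.5):
    print(f, "mm lens ->", round(horizontal_fov_deg(f), 1), "degrees")
```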

Measuring Out The Volume


Fairly obviously, it's a good idea to try to keep your volume as central as possible within your actual lab space when considering where the cameras will have to be positioned. When you're happy, you should place markers on the floor around the edge of the volume, as shown in the following diagram. It is best to use the same size of markers that you are going to attach to your subject, typically 25mm for full body capture. The reason for this will become clear in subsequent sections. You may also wish to mark the volume out using tape so that you can visualise it easily.

It is advantageous to place two markers close together at intervals around the edge. Ideally, they should be positioned as close together as the closest markers will be attached on the body. Typically the shortest distances are between heel and ankle or hand and wrist. Under normal circumstances, it should be possible for a Vicon system to see and measure two 25mm markers separated by their own diameter, throughout the capture volume.


The capture volume marked out with 25mm markers.

Position And Orient Your Cameras


The principle to remember when positioning your cameras is to try and ensure that a marker held at any location within the volume can be seen by at least two cameras. You should also try and consider how markers are going to be occluded by the subject's body. For our default volume, you can set your cameras evenly around the volume, all at roughly head height or above and directed downwards. If you have 5 cameras, you will probably place one anterior to the walkway and two on each side. With a sixth camera, place it posterior to the walkway. With more than six, place them (initially) more or less evenly spaced around the capture space. Once you have established the overall position of each tripod, we need to alter the orientation of each camera to ensure that it sees as much of the capture volume as possible with minimal dead space. To do this you really need the help of a colleague to physically move each camera in turn as you watch the live monitor. The idea is to have all the outline markers visible within the usable area of each camera. An example is shown below. When you are happy with the floor area, you should ask your colleague to walk around the volume with the wand held above their head to check that you can capture up to the desired height. Don't be afraid to rotate (roll) the camera through +/- 90 degrees (from landscape to portrait or anywhere in between) if it lets you see more of the volume. By selecting the correct angle for the respective live monitor window in the bottom right corner, you will be able to see the data in the correct orientation.


Live monitor view with the capture volume identified by static markers on the floor.

It is important that you can visualize the usable area for each camera view when establishing the correct location and orientation. The usable area is derived when each camera is linearised, and is stored in the linearisation parameter (LP) file. To do this you need to allocate the camera identifier to its correct camera channel. We'll discuss this in detail in the next section on calibration, but for now follow the steps to perform a calibration and don't worry about capturing anything useful. By accepting a failed calibration you will identify and store each camera and allow access to the usable area. If you want to see how we set up our cameras for a similar volume, check out the movie in the NEW sessions.

Adjusting The Camera Sensitivities To Always See The Markers


You should now have positioned and orientated your cameras and be happy that, as your subject moves through the volume, all markers will be seen by at least two cameras. The final step you need to take is the adjustment of the image sensitivity of each camera. This is to ensure that a marker will be seen throughout the visible volume of that camera. In a practical session, we would now spend some time adjusting the apertures and sensitivities of each camera in turn. However, that is for later; for now we would like you to ensure that the aperture for each camera is set at f4.0 and the Live Monitor sensitivity is set between 5 and 6.

Linearisation Of Your Cameras.


Before we start calibrating our volume, we should mention linearisation. Linearisation is a procedure that corrects the distortions present in your camera lens, and also the small variations which may exist in the internal mounting of the CCD image sensor. This lens correction is essential for the accurate 3D reconstruction you are about to undertake. The linearisation parameters used by Vicon are stored in read-only LP files which are found in the \\Vicon\System\ directory. Each camera will have been accurately linearised prior to the shipment and installation of your new Vicon. It's worth noting that you have different linearisation files for different camera capture rates, as they are captured at different resolutions. We also have a standard way of naming linearisation files, which includes the camera's serial number followed by the frequency setting (e.g. 0007_120 and 0007_240 will be distributed with camera 0007). We won't go into any detail of how to linearise a camera at this stage, as we are assuming that most of you are using a new Vicon with recently linearised cameras. Cameras will require linearisation if you change a lens, and occasionally (every few months) on a routine basis. You can learn how to linearise your cameras in the Advanced User Guide. PLEASE HANDLE YOUR CAMERAS WITH A DEGREE OF RESPECT. THEY ARE QUITE EXPENSIVE, AND YOU MAY AFFECT THEIR SETTINGS, WHICH WILL REQUIRE A NEW LINEARISATION - A TIME CONSUMING EXERCISE.
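The LP file naming convention described above can be sketched as a tiny helper. This is only an illustration of the convention; the serial number is the example from the text, and the function itself is ours, not part of the Workstation software.

```python
def lp_filename(serial, rate_hz):
    """Build a linearisation parameter (LP) file name following the
    convention described above: camera serial number, underscore,
    then the capture frequency setting."""
    return "%s_%d" % (serial, rate_hz)

# Camera 0007 ships with a file for each capture rate:
print(lp_filename("0007", 120))  # 0007_120
print(lp_filename("0007", 240))  # 0007_240
```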

Let's Check That Vicon Is Ready For Calibration.


You should now be at the stage where all the equipment is out of the box, connected together and switched on. You should be logged on as NEW or under your own username. You should have linked the Workstation to the Datastation and have the Live Monitor window open, which means all your cameras are displaying their red strobe rings.


You should have marked out your volume using reflective markers and tape, and then positioned and oriented your cameras to see the volume with minimal dead space in each camera view. You should have checked that the sensitivity is set to 5 and the aperture of each camera is set to f4, to ensure that the markers are always going to be visible in the camera views. You should now create a new session by selecting File|New Session or the New Session icon. You should also select it (by double-clicking on it) as the current session, to ensure that your calibration and capture data are stored in the correct session. You are now at the stage where it's time to calibrate your cameras and hence the capture volume. By calibrating, Vicon will calculate the position and orientation of each camera relative to the others and to the volume's origin. This information will subsequently be used in the reconstruction of the markers' 3D trajectories. So onward to the next section if you're ready. Remember to pick up the outline markers which defined your volume prior to calibration.


2.5 Calibrating your capture volume


Now we're going to assume that you're in one of two positions. Either you've worked through our guidelines in the previous sections or, if you're lucky, someone else has set up the equipment and you've been spared this donkey work. Either way, you're hopefully sitting with a Vicon that is up and running and ready to calibrate. DYNACAL is our proprietary calibration technique, which refers to the automatic calculation of camera positions and orientations relative to each other and to an origin and set of axes. All you have to do is wave a wand through your volume and DYNACAL will do the rest. It's that easy. ONCE YOU'VE CALIBRATED SUCCESSFULLY, YOU WON'T NEED TO RE-CALIBRATE UNTIL THE END OF THE SESSION. IF, HOWEVER, ANY ONE OF YOUR CAMERAS IS KNOCKED OR MOVED, THEN YOU MUST RE-CALIBRATE. THE ACCURACY OF YOUR RECONSTRUCTION AND YOUR FINAL RESULTS WILL SUFFER IF YOU IGNORE THIS WARNING.

Upon completion of a successful calibration, Vicon will automatically create and store a .CP file in your current session containing the parameters of all the cameras used in that calibration. For more information on this file format and the parameters have a look at the Reference Guide.

What Apparatus Is Required?


The two key elements for calibration are your DYNACAL wand and L-Frame. These can come in a variety of shapes and sizes as shown in the images below. Vicon uses the description of each calibration object stored in the file \\Vicon\System\example.cro. If you want to know more about this file and how to edit it, please refer to the Reference Guide.

Calibration objects. L-Frame, wand and flatcal


The majority of you will be using the Clinical L-Frame and a 500mm wand, so we'll consider these as the default apparatus. Briefly, the wand is used to calculate the camera positions and orientations, and the L-Frame is used only to define the origin and direction of the orthogonal axes of your capture volume. Vicon will set Z as UP by default. This convention is built into our Workstation software, although there are ways to change the output later.
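To make the axis convention concrete, here is a minimal sketch of how an origin and a right-handed set of axes with Z up could be derived from three L-Frame marker positions. This is purely illustrative - the function name, the marker layout and the arm lengths are our assumptions, not the real L-Frame geometry or any Vicon code.

```python
import numpy as np

def lframe_axes(origin_marker, x_arm_marker, in_plane_marker):
    """Illustrative: build (origin, 3x3 rotation) with Z up from three markers.

    origin_marker sits at the L-Frame corner; the other two lie along the
    arms, defining the ground plane. All names here are hypothetical.
    """
    origin = np.asarray(origin_marker, dtype=float)
    x_axis = np.asarray(x_arm_marker, dtype=float) - origin
    x_axis /= np.linalg.norm(x_axis)
    # The second arm lies in the ground plane; Z is the normal to that plane.
    in_plane = np.asarray(in_plane_marker, dtype=float) - origin
    z_axis = np.cross(x_axis, in_plane)
    z_axis /= np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)  # completes a right-handed frame
    return origin, np.column_stack([x_axis, y_axis, z_axis])

# Ideal flat placement: the rotation comes out as the identity.
origin, R = lframe_axes([0, 0, 0], [400, 0, 0], [0, 600, 0])
```

Note that only the L-Frame placement decides where this origin and these axes end up; as the text says, it has no bearing on the camera parameters themselves.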

Let's Get That Sensitivity Set Correctly.


In DYNACAL, the calibration apparatus tends to use larger markers, typically 50mm in diameter, and therefore you may have to re-adjust the sensitivity for each camera to a value around 5 prior to calibration. With your L-Frame placed in the center of the volume, you should have an image on your screen similar to the one shown below.

DON'T PHYSICALLY ADJUST THE APERTURE ON THE CAMERAS. YOU SHOULD HAVE SET THEM WHEN YOU WERE PREPARING YOUR CAPTURE VOLUME. [REFER TO SECTION 2.4].

L-Frame as seen in the Live Monitor window.

When you're looking at the live monitor, select View|Calibration Marker Pairs. You want to make sure that the centers of the visible markers on the L-Frame are visible and stable (no flickering) and that there are no false centers. You also require the two co-linear markers to be paired together.
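The marker-merging situation described next is at heart a collinearity problem: three marker centres line up in one camera's view and blur together. Purely as an illustration (this is not a Vicon API or setting), a degenerate arrangement of three 2D centres can be detected with the 2D cross product; the tolerance value is an arbitrary assumption.

```python
def nearly_collinear(p1, p2, p3, tol=1.0):
    """Illustrative: True if three (x, y) centres are nearly collinear.

    tol is an area threshold in squared pixel units - an arbitrary
    example value, not a Vicon parameter.
    """
    # Twice the signed area of the triangle via the 2D cross product;
    # it shrinks to zero as the three points approach a straight line.
    area2 = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    return abs(area2) < 2.0 * tol

print(nearly_collinear((0, 0), (10, 0.01), (20, 0.02)))  # → True
print(nearly_collinear((0, 0), (10, 5), (20, 0)))        # → False
```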

IF THE L-FRAME ARM WITH THREE MARKERS IS POSITIONED TO LOOK DIRECTLY AT A CAMERA, THEN YOU MAY GET MARKER MERGING WHICH CAN CAUSE ERRORS. TRY MOVING THE L-FRAME SLIGHTLY WHILE OBSERVING IT THROUGH THE LIVE MONITOR OR, ALTERNATIVELY, DESELECT THAT CAMERA FOR THE STATIC PORTION OF THE CAPTURE.

What's The Procedure ?


When you are happy with the images in each of the camera views, then you are ready to calibrate. Briefly, you have to take the following steps to calibrate your volume:
Set the correct camera identifiers for the corresponding camera channels.
Select which cameras you wish to calibrate.
Select the correct calibration object.
Capture the static object.
Capture the dynamic wand.
Wait for a minute or so while DYNACAL automatically calculates the parameters of each camera.
Examine the results, accept and apply the calibration to the current session.

OK, you should now select System|Calibrate or, alternatively, the calibration button. The calibration dialog box will appear on the screen. This will allow you to make all the necessary settings. Let's set the camera identifiers. As we mentioned earlier, each camera has been linearised to correct for lens imperfections. What we want to do is ensure that the correct .LP file is allocated to the same channel that the camera is actually attached to. It is probably easiest when the name of the .LP file references a serial number or camera number. Your Vicon installation team will brief you on the files created from linearisation performed in the factory.

YOU MAY WELL FIND INACCURACIES IN YOUR SUBSEQUENT CAPTURES AND RECONSTRUCTIONS IF YOU FAIL TO LABEL EACH CAMERA WITH ITS CORRECT IDENTIFIER. FOR MORE INFORMATION, CHECK OUT THE TROUBLESHOOTING SECTION IN THE REFERENCE PART OF THE MANUAL.

You do this by selecting each identifier box in turn and then browsing through the list of identifiers until you find the appropriate name for that specific camera. You can browse by clicking directly on the arrows or by using the keyboard arrows.
Calibration dialog box.

Once you have selected the first identifier box, you can move up and down the camera channels by using the quick keys: <TAB> for down and <SHIFT + TAB> for up. Within the list of identifiers, you can search the list based on the first letters / digits of the name. Also, <Home> will send you to the top of the list and Not Present.

In the event you decide not to have a particular camera used in this session, the identifier should be set as Not Present. Once calibrated, you can also set any camera not to capture using Trial|Capture|Cameras..|Enable Cameras.. - more of which later.

As this is your first calibration, you should select ALL of your cameras. Those that are to be calibrated will display a tick in the corresponding box. If you ever need to select / de-select a camera, click directly on its box. There are very few occasions when you will want to calibrate anything less than ALL cameras. In some experiments you may want to use only a few cameras, but on most occasions you will use them all.

As this is a new calibration, we should capture All New Data. If you need to do a subsequent calibration, you can decide to capture either static or dynamic data only. But more of that later; for now we're ready to calibrate.

Let's Capture The Static L-Frame


You should place your L-Frame in your capture volume so that it is visible to at least two of the cameras. With your current volume, nearly all the cameras can see it. The L-Frame can be positioned in any orientation but, if you're capturing a sequence of walks in one particular direction, then set one of your axes along that path. This may make things easier for you and your animators in the later stages of the pipeline. Remember, the L-Frame only defines the location of the origin and the direction of the axes. It has no effect on the calculation of the camera parameters. If you have force plates, use the flanges of the L-Frame to align the frame accurately parallel to the sides of the plate. Adjust the screws so that the L-Frame is level.

When you click on calibrate, the Datastation will automatically Arm itself ready for capture. On screen you'll see the following dialog box.

BEFORE YOU HIT CAPTURE, MAKE CERTAIN THAT THE WAND IS HIDDEN FROM VIEW OF ALL THE CAMERAS. ALSO, POLITELY ASK YOUR SUBJECT, IF THEY ARE ALREADY COVERED IN MARKERS, TO LEAVE THE ROOM WHILE YOU CALIBRATE, AND DON'T LET THEM BACK IN UNTIL YOU'VE FINISHED. ALSO REMOVE ANY OTHER MARKERS LEFT LYING AROUND THE STUDIO.

Static capture dialog.


When you hit capture, Vicon will then capture 20 frames of data of the L-Frame. It will also extract information on any other visible light sources (such as cameras visible in opposing cameras) and remove them from the subsequent dynamic stage. Though Vicon is robust enough to deal with the presence of other markers or light sources, we recommend removing them at this time unless absolutely necessary.

Let's Capture The Wand


When Static capture is complete, Vicon will automatically set the system up to capture the dynamic data. You will see the following dialog box.

Dynamic capture dialog.

Before you hit start capture, remove and hide the L-Frame. Go and stand in the volume with the wand. When you are ready, get your helper to hit start and begin waving the wand through the volume. If you don't have a helper and are working alone, then leave the wand in the capture volume when you restart the data capture so that you are acquiring useful data throughout the capture.

DYNACAL is based on extracting the locations of the two markers on the wand in every camera view as the wand is moved throughout the whole volume. It is important that you sweep through the whole space (both high and low) to ensure that you provide an even distribution of marker locations which DYNACAL can then process. When you are satisfied that you have captured sufficient wand data, hit STOP and Vicon will automatically begin to process the captured data, passing it through various stages until it reaches the final calibration.


For all existing users, can we just clarify that the wand does not have to be visible in the first frame of the dynamic capture. It can save time if you start capturing useful data straight away but it is not essential.

What's The Best Style To Wave Your Wand?


For some years, we have had many queries on a suitable technique for wand waving. Why not check out the little movie we captured highlighting the style we at Vicon find gives successful calibration. [refer to \new\new25\calib MPEG file]. But don't be put off - go out and find your own style. Just remember the following:
Cover as much of the volume as possible - high and low, near and far.
Allow as many cameras as possible to see the wand at any instant.
Present the wand in a variety of attitudes, but try to minimize the number of occasions the wand is pointed directly into a camera.
Try not to wave the wand excessively fast.
Finally, though it is sometimes difficult to avoid, try not to capture for too long. A typical duration for a volume of this size is about 15 to 20 seconds.
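Vicon does not provide a coverage meter, but the "cover the whole volume" advice can be pictured with a small sketch of our own: bin wand-marker positions into a coarse voxel grid over the nominal volume and report the fraction of voxels the sweep visited. The volume bounds, grid size and data here are all illustrative assumptions.

```python
import numpy as np

def coverage_fraction(positions, lo, hi, bins=4):
    """Illustrative: fraction of a bins^3 voxel grid visited by positions.

    positions is an (N, 3) array of wand-marker 3D locations; lo/hi are the
    nominal volume bounds. None of this corresponds to a real Vicon tool.
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    idx = np.floor((np.asarray(positions, float) - lo) / (hi - lo) * bins)
    # Discard points outside the nominal volume, then count distinct voxels.
    idx = idx[((idx >= 0) & (idx < bins)).all(axis=1)].astype(int)
    visited = {tuple(i) for i in idx}
    return len(visited) / bins ** 3

# A thorough, even sweep of a 2m cube visits every voxel.
rng = np.random.default_rng(0)
sweep = rng.uniform([0, 0, 0], [2000, 2000, 2000], size=(5000, 3))
print(coverage_fraction(sweep, [0, 0, 0], [2000, 2000, 2000]))  # → 1.0
```

A low fraction, or empty voxels clustered high or low, is exactly the uneven coverage the list above warns against.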

What Makes A Good Calibration ?


Once you've stopped capturing, DYNACAL will automatically process the data. Depending on the number of cameras you are using, this will typically take a couple of minutes to complete. Vicon will display a number of messages as the calibration progresses, indicating its current status. At this point, we won't bother going into the meaning of these messages. When calibration is complete, the calibration dialog box is shown again with an assessment of the precision of your calibration for each camera. The calibration residual indicates how well the data fitted in the calculations - a smaller number means a better fit. What are we looking for ? Typically, the calibration residuals for this size volume will have values of less than 2.0. We look for consistent values (similar for all cameras) as well as the absolute value. After doing a few calibrations, you soon get to know what values to expect. If you are happy with the results returned by DYNACAL, then you should accept the calibration. You will then be asked to apply the calibration to the current session. Check that this is going to be applied to your current session by noting the session number in the bottom right of the status bar.

YOU SHOULD ALWAYS AVOID APPLYING YOUR NEW CALIBRATION TO AN OLDER SESSION, AS ALL SUBSEQUENT RECONSTRUCTION WILL USE THIS NEW AND POTENTIALLY FALSE CALIBRATION. THIS WILL LEAD TO INCORRECT RESULTS.
Vicon automatically saves your calibration TVD file into the session directory when you apply the calibration to the current session. It is named session name.tvd.
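As a rough illustration of the residual rule of thumb described above (under 2.0 for a volume of this size, and consistent across cameras), here is a sketch of the two checks applied to a list of per-camera residuals. The residual values and the spread tolerance are made up for the example; the real numbers come from the DYNACAL results dialog.

```python
def assess_residuals(residuals, limit=2.0, spread=0.5):
    """Illustrative: apply the manual's rule of thumb to per-camera residuals.

    limit mirrors the 'less than 2.0' guideline; spread is an arbitrary
    example tolerance for the 'similar for all cameras' check.
    """
    worst = max(residuals)
    mean = sum(residuals) / len(residuals)
    # Consistent means no camera strays far from the group average.
    consistent = all(abs(r - mean) <= spread for r in residuals)
    return worst < limit and consistent

print(assess_residuals([0.9, 1.1, 1.0, 1.2, 0.8, 1.0]))  # → True
print(assess_residuals([0.9, 1.1, 3.5, 1.2, 0.8, 1.0]))  # → False (one camera poor)
```

A single outlying camera, as in the second case, is the pattern that often points to a nudged camera or a stale linearisation file rather than a bad wand wave.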

And What Can Cause A Poor Calibration?


There are a number of reasons why you may get a poor calibration for one or all of your cameras. Have a look at the following possibilities:
Flickering visible light sources. Calibration will tolerate some video noise, but is most likely to have problems if there are flickering sources of light such as reflections.
Poor coverage of the volume by cameras (i.e. cameras are positioned in such a way that they fail to propagate the calibration). When calibrating irregularly shaped volumes, there must be a continuous volume seen by at least three cameras in order for the calibration to be propagated all the way along it. If at any point there is a break in this, then propagation will stop some way along it, and some cameras will not be calibrated.
Insufficient wand coverage in the overlapping regions of cameras. DYNACAL needs only a small number of points to actually calibrate but, for accurate calibration, you should cover the volume fully with the wand.
Cameras oriented in such a way as to have insufficient angles between them, which causes errors in reconstruction.
Cameras moved / nudged during calibration.
Cameras given the incorrect identifier and hence incorrect linearisation file. Similarly, if the linearisation of a camera is out of date or at an incorrect capture rate. This is not an obvious one to spot but, if you have a good overall calibration and one or two cameras are poor, then they may need to be re-linearised. [refer to section 4.4 on linearisation].

AS A SPECIAL POINT FOR EXISTING USERS: GIVEN THAT THE NEW DYNACAL NOW CALIBRATES CAMERAS RELATIVE TO EACH OTHER RATHER THAN THE L-FRAME, YOU CANNOT RE-PROCESS A SELECTION OF CAMERAS USING THE EXISTING DATA. THIS WILL NOW CAUSE ERRORS IN SUBSEQUENT RECONSTRUCTIONS.

If you need to re-calibrate, you do not need to capture both dynamic and static data if the problem was only related to one part of the data. Simply select new static data only or new dynamic data only in the calibration dialog box. You may wish to capture dynamic data only if you feel that you haven't covered the volume sufficiently, or if you've failed to cover the overlapping regions between specific cameras, or if you've spotted deficiencies in the quality of the TVD data (e.g. marker merging or blooming). You may wish to capture static data only if you wish to re-orient the axes of your world or change the position of the origin. If you want to know more about the technical aspects of calibration, then have a look in the reference manual. Check also the section on calibrating irregular volumes in the Advanced User guide.

What Have We Learned About Calibrating Your Volume?


As you make your way through these guides, we hope you'll come to appreciate the importance of calibration in achieving good capture data. Let's briefly review the key elements of DYNACAL:
Make sure your sensitivity settings are adjusted to deal with the larger markers of the reference objects.
Make sure that the cameras are correctly identified with their appropriate LP file.
Make sure that all cameras are selected for calibration and also that the correct reference object is selected.
Make sure that your subject is hidden away from view for the duration of the calibration.
Capture the static object while ensuring the wand is hidden from view.
Remove the static object from view and capture the wand, remembering to cover as much of the volume as possible.
Check the results of calibration, accept them if satisfied and apply them to your new or current session.
Do one or two captures of the wand to get an idea of the size of your volume and also as an indicator of the quality of the reconstruction. We'll discuss this in the next section.

We cannot stress too highly that the key to the success of all the following stages is a good calibration. The extra little bit of effort here will bring greater rewards and a lot less stress when you start capturing and processing all your lovely trials. And remember, please be careful around those cameras. If you knock, nudge or move a camera, you have got to re-calibrate. If there are visitors, children and patients in the lab, watch out for them touching tripods without telling you - this can spoil the calibration half way through a session. If it happens, just ask them to stand aside for a minute while you recalibrate.


2.6 So Let's Capture


We're now at the stage of being able to capture trials. This is the fun bit. In this section, we shall briefly describe how to do your first captures, reconstruct them and visualize them in the 3D Workspace. We're not going to do anything difficult, and you won't produce anything very useful, but we know you must be champing at the bit to see something interesting. To re-cap, you should have a Vicon system installed with a set of cameras positioned around a suitable capture volume. Your Workstation should be linked to the Datastation, and you should have just successfully calibrated your current session.

Let's Do A Basic Capture.


By selecting Trial|Capture or the capture icon, the following dialog box will be displayed.
Capture dialog box with general capture displayed

Vicon has a feature called Trial Typing. These are user-defined collections of settings which will be used when you go to capture. These settings include duration, the type of data to capture (video, analog and/or movie) and those functions which are called upon after the capture is over. Each type can also have its own pipeline which is started automatically when data uploading is complete. Using these defined types, Vicon automatically applies a template of settings to your data capture to make life that little bit easier. Trial Types are discussed in more detail in the Advanced User guide. For the moment, you don't have to worry about the various types as we've provided a number of useful ones. If it isn't already selected, browse through the trial types and select General Capture, which is an all-purpose trial type. The trial type details, along with any variations in the optional parameters, are stored in the program INI file. Any changes made will be available to all users on that machine.

Just prior to capture, you can enter a trial reference (or accept the default) and, if you wish, a description and notes on what you're about to capture. You can also set a specific duration of any number of seconds rather than an unlimited period. You can always edit the description and notes after the capture in the Directory Window.

For these captures, we'd like you to do a general capture of video data only, for an unlimited duration, with no selected subjects. Ignore the pipeline button at present. We'll deal with that later [refer to section 3.4]. When you're happy with the settings, hit Capture to initiate data capture and you will now see the following dialog box.
Arm/Start dialog box.

Vicon will automatically start the link between the Datastation and Workstation if you've forgotten to.

For this capture, walk around the edge of the volume while waving the wand. When this data is reconstructed, you will get an impression of the size and shape of your capture volume and the quality of reconstruction. When you are in position, get your little helper to count you in and hit Start. From this instant Vicon is capturing video data from all the enabled cameras. When you feel that you have all the data you require, hit Stop in the same dialog box. Vicon has been re-designed to give you a far faster upload, and you will not have to wait for the data. You will now see one of three things: either the session directory with your newly captured data listed, or a Video Monitor view, or an empty 3D workspace. You should be able to spot the two markers of the wand in the Video Monitor for the different cameras, but really it's difficult to make much sense of their positions relative to the different cameras, isn't it ? That's where reconstruction will help you out.

Let's Reconstruct The Video Data


We now need to convert the raw 2D TVD data into continuous 3D trajectories using the process of reconstruction. Reconstruction is the term used to describe the process of deriving the virtual 3D locations of the markers in each frame and, more importantly, linking these 3D positions together from frame to frame to form trajectories. In simple terms, Vicon derives the 3D location of each marker by firstly calculating the 2D center of the marker in each camera view. From DYNACAL, we know the exact location and orientation of each camera with respect to the origin. Using this information, Vicon projects rays through the marker centres and out into the virtual space. A virtual marker is formed where two or more rays intersect. Having derived these 3D virtual markers, Vicon now attempts to thread continuous trajectories through the positions of the same marker in each frame. It uses the position of the marker in previous frames to predict its location in the current frame and distinguish the correct 3D position from various candidates. This means that Vicon can track the markers throughout the trial. For a more detailed description of reconstruction, please have a look at the reference section of the manual.

So, for our captured wand waving, select Trial|Reconstruction from the menus or select the Reconstruction icon from the toolbar. This can be done with either the Video Monitor or the 3D workspace open. When you start reconstructing, you'll see a progress message in the status bar which typically says something like Reconstructing frame 210 of 600 (2 active segments of 3). This tells you that at this instant in reconstruction, Vicon is processing the 210th frame from a total of 600 and it has reconstructed 2 segments of trajectories in this frame. Also, a total of 3 trajectories have been reconstructed so far. At the completion of reconstruction, you'll see a message which may say Reconstruction complete. 4 segments created.

If you're reconstructing and you want to stop it at the current frame rather than continue to the final frame, go to Trial|Stop Reconstruction. You may want to do this if the number of segments is large and your capture is very long. It means you can check what's happening quickly without having to process every frame. If you actually want to cancel the process, select Trial|Cancel Reconstruction. This often happens when you've forgotten to alter your reconstruction parameters or you've selected reconstruct by mistake.

Ideally, the number of segments created in the frame should be equal to the number of markers being captured. If it is a greater number, then you have either a number of occlusions causing breaks in the trajectories, or you have reconstructed a number of ghost or false markers. Don't despair, as we'll discuss trajectory breaks and false markers later. A useful indicator of the success or failure of your reconstruction is the number of trajectory segments created. If it is far lower than the number of markers, your performer is either outside the capture volume or, more likely, your reconstruction parameters are incorrect. If it is far higher than the number of markers, then you have either had a large number of occlusions resulting in broken trajectories, or again your parameters will need tweaking or, more disturbingly, your calibration is not acceptable.
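The ray-intersection idea behind reconstruction can be sketched in a few lines: each camera contributes a ray (its optical centre plus a direction through the 2D marker centre), and the 3D marker is the least-squares point closest to all the rays. This is an illustrative toy, not Vicon's actual algorithm, and the camera positions in the example are invented.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Illustrative: least-squares 3D point closest to a set of rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        # Projector onto the plane perpendicular to this ray's direction:
        # it measures each candidate point's offset from the ray.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Two cameras on the X and Y axes both sighting a marker at (1, 1, 1).
p = intersect_rays(origins=[[5, 0, 0], [0, 5, 0]],
                   directions=[[-4, 1, 1], [1, -4, 1]])
print(np.round(p, 6))  # → [1. 1. 1.]
```

With noisy real rays, the residual distance from the solved point to each ray plays the same role as the calibration residual discussed earlier: small and consistent is good.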

If you've captured your wand and you have more than two trajectory segments, you may need to vary your reconstruction parameters, which we'll discuss in the next guide. Why don't you repeat the capture and reconstruction of your wand waving around the volume a few times to become familiar with the different processes? Use other trial types such as Pipeline Capture if you wish.
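The frame-to-frame "threading" described earlier - predicting a marker's next position from its recent motion and picking the nearest reconstructed candidate - can be sketched as below. Vicon's real tracker is more sophisticated; the constant-velocity prediction, the function name and the jump threshold are all illustrative assumptions.

```python
def link_next(trajectory, candidates, max_jump=50.0):
    """Illustrative: extend a trajectory with the best candidate, or break it.

    trajectory is a list of (x, y, z) positions from previous frames;
    candidates are reconstructed points in the new frame; max_jump (mm) is
    an arbitrary example threshold, not a Vicon parameter.
    """
    last = trajectory[-1]
    if len(trajectory) >= 2:
        prev = trajectory[-2]
        # Constant-velocity prediction from the last two frames.
        predicted = tuple(2 * l - p for l, p in zip(last, prev))
    else:
        predicted = last

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    best = min(candidates, key=lambda c: dist2(c, predicted))
    # Too large a jump means occlusion or a ghost marker: break the segment.
    return best if dist2(best, predicted) <= max_jump ** 2 else None

traj = [(0.0, 0.0, 1000.0), (10.0, 0.0, 1000.0)]
print(link_next(traj, [(19.5, 0.2, 1000.0), (300.0, 0.0, 1000.0)]))
# → (19.5, 0.2, 1000.0)
```

When no candidate is close enough, the sketch returns None: that is, in miniature, how an occlusion turns one trajectory into two segments.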


Why Are Trajectories of Use?


These trajectories provide the unique information which describes your captured move. By processing them through the Vicon animation pipeline, you will be able to create your desired 3D character animation. At the moment, they aren't really of much use as it's only your wand waving around. It does, however, allow you to visualize the size of your capture volume.

How Do You See The Trajectories?


Firstly, make certain that Workspace|Show All Trajectories is selected (this is done by default whenever you open the trial). If you now zoom in on a marker, you will see a line vector passing through the marker. This is your trajectory, indicating the marker's position in the previous, present and next frames. By selecting the trajectory cursors on the bottom of your time bar ruler and moving them left or right, you can show up to +/- 300 frames of trajectory data. This is of great use when you need to clean up trajectory data, which will be dealt with in the section on edit tools. [refer to section 3.8].

By double-clicking on the trajectory cursors, they will toggle between the range set by the user and the default range of +/- 1. It can be useful to set your trajectory display range to its maximum and then select your current frame to be one at the mid point of the trial. For the capture of your wand wave, you will now get a better representation of your volume by considering the paths the markers took.

How Do I Select A Trajectory?


Vicon allows you to select between one and twenty-four trajectories. You select a trajectory by moving the mouse pointer over it and then clicking the left button. When the tip of the mouse pointer is close enough to a trajectory to select it, the mouse cursor will change to a box containing the trajectory's label. If the trajectory is unlabelled, you will see a dash in the label box.

IF THE CURSOR ISN'T CHANGING, CHECK THE VIEW MENU TO ENSURE THAT MOUSE HELPER IS ENABLED. THIS IS DONE BY DEFAULT.


When a trajectory is selected, it will change colour from white (unlabelled) or light blue (labelled) to either yellow, red, purple or blue. As further trajectories are selected, they change to the next colour, repeating in cyclic order. These colours and their order will become very familiar to you as you use Vicon. A white cross is also displayed at the selected location (or frame) on the trajectory.

YOU WILL BE UNABLE TO SELECT A TRAJECTORY IF YOU HAVE SET THE TRAJECTORY LENGTHS TO ZERO. TO INCREASE THESE DISPLAYED LENGTHS, USE THE LITTLE TRIANGLES BESIDE THE TIME CURSOR AS DESCRIBED EARLIER.

To de-select a trajectory, click on the right mouse button anywhere in the workspace window. Multiple trajectories are de-selected in reverse order.

Saving Your Data.


When you have completed the processing of your data, remember to save it by either selecting File|Save or the save icon. Vicon will also automatically offer you a dialog to save your data if you close the window. The data is stored in the C3D file format.

Before We Move On
So there you have it: you have just performed your first Vicon motion capture. OK, it was only a wand wave but, hopefully, you now regard Vicon as an accurate 3D measurement system. Before we finish this section and the New User Guide, we'd like you to complete the following exercises. We have provided examples of our efforts in the sample session \\Vicon\Userdata\Wizard\New\new26.

Here are Some Exercises To Try...


1. Repeat your wand wave around the edge of the volume.
2. Write your name in space with the wand. You can visualise this in the Workspace by reconstructing and opening out the trajectory display range.
3. Take your L-Frame and wave it through the volume, spinning it, etc.
4. Take a handful of markers and throw them up in the air in the middle of the volume.
5. Throw and roll individual markers across the volume.
6. Perform timed captures for different periods.

You should also take some time to feel comfortable with the Workspace window and become familiar with the various display tools available. Remember that Vicon has a comprehensive on-screen context help mode which you should use. So, let's quickly review what you've just learned. You will now be comfortable with actually capturing and reconstructing data, as well as how and where to best visualize your captured data. You should be aware of the various windows in the Workstation, such as the data directory, video monitor and 3D workspace windows. You should also know how to play your captured moves, display and select trajectories and save your data. You can easily move between your different display windows by holding <CTRL> down and pressing <TAB>.


2.7 Closing Words. Let's review what we've just learned.


Well, how does it feel to be a successful motion capture user ? It wasn't too painful, was it ? You should be sitting there with a big grin as you view your data of a wand moving around in 3D. Feels good and remarkably easy, doesn't it ? Before we let you start really using Vicon to capture some more interesting data of your obliging subject, we will briefly review what we've worked through in this guide and where you should look for more information. You should always consider the following issues discussed in this guide whenever you are running a capture session.

Preparation. The time taken, prior to capture, to think about the space available, the types of move to be captured and how the data is going to be used in the end will make life so much easier.

Setting Up Your Equipment. Follow the practical steps we discussed to ensure you capture exactly what you want with Vicon.

Calibrating Your Volume. DYNACAL is a powerful tool for calibrating your cameras accurately, and a good calibration is the key to a good capture session.

Capturing And Reconstructing Your Data. You know how to press Start. You know how to press Stop, and by hitting the Reconstruct button you have captured your first 3D data.

With this guide by your side, you are now at a stage where you are able to deal with the basics of optical motion capture and the tools that Vicon provides to ensure highly accurate capture in most volumes.

Where Do You Go Now ?


Some of you may now want to stop reading this tutorial and get more hands-on experience. That's fine, as you already know more than enough to capture successfully. You should already be aware that Vicon has a comprehensive on-line help system. Remember that there are also a number of relevant Checklists and the Troubleshooting guide at the end of this tutorial. However, most of you will actually want to capture something more than a wand or a handful of markers. For this reason, we have developed the next part of this tutorial: the Routine User Guide.


Within the Routine User guide, you will learn about how and where to place markers on your subject. We will explain the once-only stage of subject calibration that creates a unique measurement and description of your subject. You will see how the Autolabeler takes this description and automatically removes all the hassle of manually labeling your marker trajectories. Time will be taken to explain how to use the Pipeline to automate the process even further, turning motion capture into a single-stage process from data capture to fully labeled 3D data. Finally, we will provide an overview of Vicon editing tools to clean up your data when it's not as good as you want. Those of you who want to know more detail about some of the areas covered in this guide may wish to skip ahead to the Advanced User Guide. That guide will go into detail about using Vicon for more specific tasks. It will also discuss how to linearise your cameras, work with multiple subjects and capture specific parts of the body such as the face or hands. There will also be discussion on the practicalities of capturing in large or irregular volumes. For those of you interested in understanding more about the specifics of Vicon and its functionality, we suggest you have a look at the accompanying reference manual.


routine user

So now you know how to plug the Vicon in and fit the system together; you even know how to capture data. But now it's time to find out how to use the system for production. In this section you will learn about the capture pipeline - from putting on the markers to editing the final data.

3.1 Overview
3.2 Attaching Markers
3.3 Subject Calibration
3.4 Autolabeler
3.5 Pipeline
3.6 Parameters
3.7 Clean-Up
3.8 Review


3.1 What's This Guide All About? - an Overview.


From the previous guide, you will now be aware of how to set up the Vicon system and calibrate your capture volume successfully. You should also be happy with capturing and reconstructing data and visualizing it in the 3D workspace. You should also know where the performer can move, as you have marked the volume out on the floor using the tape we have thoughtfully provided. So what's stopping us from going out and capturing hundreds of wild and wonderful moves? Well, nothing really. At this instant, the system is set up to accurately capture lots of markers and turn them into a similar number of 3D trajectories. But Vicon is more powerful than that. These unlabelled 3D trajectories are currently meaningless to the rest of the world. By working through this guide, you will gain the vital pieces of knowledge to ensure that the Pipeline process can turn the data into something of use, i.e. known, labeled and unbroken trajectories. Again, the majority of this chapter is linked explicitly to you putting together your own capture session. We aim to take you through all the tools that Vicon has to allow the capture of many funky moves and performers. We have also provided a large number of example sessions related to the sections of this guide. They are found in \\Vicon\Userdata\Prodn\. You can log on with Username: PRODN (no password).

What's in the Various Sections of This Guide?

Section 3.2 will give you suitable advice on the positioning and attachment of markers to your subject prior to capture.

Section 3.3 sets you up to capture a few moves of your subject and appreciate the ease of using Vicon. It will also explain the principle of establishing parameters for your subject for all your subsequent captures. This once-only operation is the key to implementing automatic marker labeling.

Section 3.4 will concentrate on how to automatically identify marker trajectories using the powerful Autolabeler. How and when to use assisted labeling and manual labeling of trajectories will also be discussed.

Section 3.5 will show you how to capture your subject using the Vicon pipeline. We will explain all the different functions available and highlight the great advantages of the pipeline. We will show you how flexible it is to use and why it makes motion capture fast and easy.

Section 3.6 takes a moment to explain the various parameters that you can adjust to improve the performance of Vicon. We discuss the reasons why you may want to alter reconstruction and autolabel parameters and what indicators Vicon gives you to assess the quality of your captures.

Section 3.7 explains the various tools available in the Vicon edit suite to remove any potential problems after you've captured your moves. It will show you how to spot any discrepancies and introduces the various editing tools.

The final section will briefly review what you will have learned in this guide.

So What Should You Look At?


If you are the type of user who will be primarily interested in post-capture processing, then you may wish to skip the first section on attaching markers. However, the remaining sections are essential reading, as they explain how to produce useful data. Again, we know that quite a few of you will be familiar with Vicon systems, so the majority of the sections in this guide will be old news. It is still worth browsing through this guide to refresh yourself with the Vicon way and check out the new helpful hints.



3.2 Putting Markers on a Real Person


At the end of the New User Guide, you captured the wand and/or the L-Frame and maybe a few random markers. Good fun, but not really much use in the long term. This section will show you how to put these markers to real use by identifying key points on your subject which are related to the underlying skeleton. We are going to describe the reflective markers themselves, how you attach them to the subject, and why you position them at those particular locations. This will lead on to the next section on calibrating your subject.

What Are These Reflective Balls?


The markers come in various shapes and sizes, from 4mm markers suitable for facial and hand capture, to 50mm markers suitable for capturing gross movements in a very large volume. However, for most captures of a full-body human subject we recommend using the standard 20 or 25mm markers. These are supplied as either hard spheres (25mm) or extremely flexible soft markers (20mm). The advantage of the soft marker is that your subject can perform moves involving falling over while avoiding damage from the markers themselves.

THE USE OF VERY LARGE (50MM) MARKERS FOR HUMAN BODY CAPTURE IS NOT RECOMMENDED AS THEY WILL MERGE TOGETHER IN THE VIDEO WINDOW.

If you haven't got any markers, then contact Vicon to order another set.

What Should The Subject Wear?


The clothes that your subject wears can affect the captured motion. Vicon is designed to capture the movement of the markers, and if they are attached to loose clothing then Vicon is going to capture the motion of that clothing. Occasionally this is what you want, but usually you want the underlying body motion and, for this reason, we recommend that your performer wears some form of body suit or, better still, that markers are applied directly to the skin. For gait analysis it is often possible to keep a variety of swimming trunks available, or to ask the subject to walk in their underwear. For general biomechanics work, if clothing is worn, something made of lycra, like a leotard, is very good.

A human will move differently when wearing shoes compared with bare feet. In gait analysis it is usual to ask the subject to walk in bare feet, but sometimes it may be easier for the subject if they wear their normal shoes.


How Do I Attach The Markers?


You should take care to attach markers securely to the subject. The correct approach depends largely on the clothing worn by the subject and the types of moves you are capturing. A number of different techniques have been used successfully. The following list is merely a starting point from which to develop the best approach for you.

Double-sided wig tape. The most common way of attaching markers. The tape is very good at adhering to the skin and is recommended for hands, the face, legs and other exposed areas of skin. It is suitable for most other marker locations when the action is not too violent. It is not such a great adhesive for clothing, especially if it gets damp. Vicon supplies hypoallergenic tape in the standard accessory kit. Wig tape is ideal for medical and other anatomically precise work. Although the adhesion is reasonably firm, it should not be too painful when peeled off.

Gaffer / duct tape. These tapes offer stronger adhesion and are suitable for attachment to clothes and shoes in particular. Note that they can be affected by moisture and should not be stuck directly to skin. Gaffer tape is perfect for attaching markers to objects or machines.

Velcro. This is the most secure method of attachment. If you are going to use the same performers for a large number of capture sessions, we recommend that you get body suits made with soft Velcro patches sewn on at the correct locations for markers. Permanent Velcro patches should be sized much larger than the marker base (perhaps 3 x 3) so that some latitude is allowed for final marker placement. Each marker can then be attached to the same point using the opposite hook Velcro pieces. As humans come in different shapes, you will need to consider getting a range of bodysuit sizes. This method is widely used in animation applications.

Sweatbands. A headband, with the markers secured by screwing the bases through the material, is the most suitable and comfortable option for head markers. Wrist bands are also useful for holding extender rods in place and thus positioning wrist markers.

Coban. Coban and other elasticated athletic support tapes are useful for holding the base of a marker wand in place on a leg or arm.

Micropore tape. A soft and breathable adhesive tape suitable for direct application to the skin, micropore tape can be used for holding the bases of marker wands in place and should not hurt when peeled off. Several rolls of micropore tape are supplied with the accessory kit.

And Where Do I Put Them?


Placing markers on your subject is another of the key operations in achieving good capture with Vicon. You can place markers anywhere on the body of your subject and Vicon will see and capture them. This will not necessarily yield good results, however, as you are trying to generate data on the underlying skeletal motion of your subject. Because we're usually interested in the skeleton, markers should be placed at specific, but easily identified, anatomical locations. The following photos show you some widely used marker locations and the abbreviated labels that Vicon uses. These marker locations are compatible with the popular general-purpose models "Claudius" and "Brilliant", but there is no reason why you should not experiment with other locations - it all depends what results you need in the end. In the Checklist part of the Reference Guide, we have included some marker position templates so you can record marker positions and subject dimensions. The general rule of thumb is to use as few markers as are required to produce the desired skeletal definition and provide help for the Autolabeler.

As you'll find out in the next section, Vicon labeling considers the body as a set of segments. [refer to section 3.4]. What follows is a brief explanation of the key body segments and their related markers.

The Chest (Thorax).


Despite the fact that nearly all descriptions of the human skeleton refer to the pelvis as the root of the hierarchy of body segments, Vicon actually considers the chest or thorax, back and front, to be the main segment when autolabeling whole-body data. This is because the markers are positioned in comparatively static positions relative to


An example of a subject with attached markers - front view. Marker labels shown: RFHD, LFHD, CLAV, RSHO, LSHO, STRN, RUPA, LUPA, RELB, LELB, RWRB, LWRB, RWRA, LWRA, RFIN, LFIN, RFWT, LFWT, RTHI, LTHI, RKNE, LKNE, RANK, LANK, RMT5, LMT5, RTOE, LTOE.


An example of a subject with attached markers - back view. Marker labels shown: RBHD, LBHD, C7, T10, RSHO, LSHO, RUPA, LUPA, R10, RELB, LELB, RWRB, LWRB, RWRA, LWRA, RBWT, LBWT, RKNE, LKNE, RANK, LANK, RMT5, LMT5, RHEE, LHEE.


each other, there are a greater number located on the thorax, and they are more likely to be consistently visible. The chest is used only as a root in the labeling stage of the Animation pipeline; within BodyBuilder and Mobius, the pelvis is the root segment. Please note that the exact choice of root segment and the design of the hierarchy of segments are under user control and are discussed in more detail in the manuals relating to Autolabelling, BodyBuilder, Clinical modelling, and Mobius. It is not appropriate to go into great detail about them here.

Common whole-body models define the thorax segment using three or four markers. As with the pelvis segment, it is recommended that four be used, since this provides better continuity if one marker is temporarily obscured. The marker names refer to anatomical landmarks. C7 is the most prominent vertebra, which can be felt at the base of the neck with the chin resting on the chest. The marker should be placed on this prominence, which is a few centimeters below the hair line. If the performer has long hair, it should be tied, pinned, or tucked under a head band, so as not to obscure this marker.

T10 is the tenth thoracic vertebra. The precise way to locate this is to ask the performer to lean forward, so that the vertebrae can be felt, and to count 10 vertebrae down from C7. It is not generally necessary to be precise in placing this marker, however; you can position it approximately by placing it on the center line of the spine, about level with the bottom of the breast bone. Not all marker locations in widely-used models are required to be located in anatomically precise positions. This depends on the exact approach used in the model concerned and is addressed in the documentation relating to the model, or to BodyBuilder.

STRN is short for Sternum, the breast bone. This marker should be placed on the lower end of the breast bone. If the performer is female, this marker may be moved upward until clear of the breasts. It should be placed on skin (or clothing) which moves with the rib cage rather than with the breasts.


CLAV, the optional fourth point, should be placed centrally, on the collarbone (or clavicle), just below the throat. This point may tend to be obscured if the head is lowered, but in most movements, it will be clearly visible. The shoulder markers should be placed on top of the shoulder, at a point which remains visible. The bony knob at the end of the collarbone (Acromion) is suitable.

The Pelvis.
The origin of the pelvis in commonly used models is the mid-point of a line joining the estimated positions of the hip joint centers, which are estimated using a simple formula. The pelvis is defined by either three or four markers placed around the waist, just below the belt line. To decide whether you need three or four markers, you should consider whether you will have constant visibility during capture. We recommend using four markers, as this allows for the occlusion of one marker without interrupting the continuity of the root segment. The front waist markers should be placed on the bony prominences of the pelvis, which are easily felt on either side of the belt buckle position. The rear waist markers should be positioned relative to the two small dimples found in the small of the back. Place the markers on the flat skin just outside the dimples and at the same height.

Do not let upper-body clothing hang over the markers and obscure them; keeping the pelvis segment visible is important.

If you are performing gait analysis, the exact anatomical locations of the pelvis markers are important. The points used are the right and left anterior superior iliac spines and the posterior iliac crest.



The Legs.
Attaching markers to the legs and feet is fairly straightforward. The orientation of the thigh bone (the femur) is largely determined by the knee marker. This is placed on the outside of the knee at its widest part, roughly level with the middle of the kneecap and slightly less than half way between the front and back of the knee. You can visualize the mechanical flexion of the knee by asking your subject to flex and extend the knee while you watch from the side.

The ankle marker should be close to, or on, the prominent bone on the outside of the ankle. Optional markers may be placed on the front of the middle of the thighs. These aren't actually used in calculating the locations of the skeletal joints, but they are very useful in helping the Autolabeler correctly differentiate between left and right limbs, and possibly for editing purposes. [refer to section 3.7]. We recommend attaching a single marker to the left or right thigh to create a sense of anti-symmetry in your marker set. If you wish to have thigh markers on both legs, then try to place them at slightly different distances from the knee markers. This preserves a sense of anti-symmetry between limbs. If you are using the Helen Hayes gait analysis marker set, thigh and shank markers are required in precise locations. Refer to the gait analysis documentation for details of marker location.

The Feet.
You can, if you wish, attach four markers to each foot. The number depends on the type of model you will animate with the captured data. The foot can be described as one or two segments with varying degrees of freedom. In its simplest form, the foot is a single segment with one degree of freedom about the ankle, using one marker attached to the toes. This is simplistic but widely used. More detail can be measured by placing markers on the heel, the joint of the small toe (known as MT5) and the center of the big toe. This can be seen in the following photo. An additional marker can be placed on the dorsal surface of the midfoot (known as DOR) for extra accuracy.



Suitable locations for markers on the foot: RDOR, RANK, RTOE, RHEE, RMT5.

The Arms.
For each arm, you will probably want to capture markers which will define the upper arm (humerus), the lower arm (radius) and the hand. This is achieved by placing markers on the elbow, wrist and hand. Start by placing a marker on the outside of the elbow joint, at the widest part of the bone. If the rotation of the forearm (radius) is to be independently defined, then two markers are required on the wrist. We suggest that you use a wrist bar or rod with the markers attached at each end. The bar should be positioned just above the flat area of the wrist, adjacent to the back of the hand, but above the wrist joint. We have found that attaching the rod to a sweatband is a very convenient way of doing this, but take care to ensure that it doesn't move around.

A second approach for the wrist markers, which may be more appropriate if the bars restrict the subject's movement, is the wrist extender (WRE, WRI). Again using a sweatband and some of the tools found in the accessories kit, these can be set up to provide an extender (WRE) which will stand approximately 3 inches above the arm, perpendicular to the plane created by the hand. The inner marker (WRI) is then set on the same side of the arm, on the wrist/hand plane, right where a watch face would sit. The wrist markers are differentiated as WRA (the marker closest to the thumb) and WRB (closest to the little finger).

For capturing the hand, we recommend a single marker positioned on the hand, near the knuckle of the middle finger, as shown in the following image.


The two approaches recommended for the wrist: WRA/WRB and WRE/WRI. Markers shown: LWRE, LWRI, LFIN, RWRA, RWRB, RFIN.

If the hand markers are not used, then BodyBuilder will not calculate the flexion of the hand.

You can capture the fingers by placing markers on the knuckles between the index and second fingers, but remember that these finger segments are dummies, in that they do not move independently of the hands. If you are interested in capturing the fingers and hands in more detail, have a look at the Advanced User Guide.

The Head.
The head can also be marked with either three or four points and, for similar reasons to those mentioned above, it is recommended that four markers are used. The easiest way to mark the head is to put a head band around the hat line, and either attach markers directly to this, or use it to hold the bases of short marker sticks (also called stand-offs or extenders). The front head markers should be placed above the temples. The back head markers should be placed diagonally opposite the front head markers. To enhance the accuracy of the Autolabeler, consider constructing the headband with the two markers in the front closer to each other than those in the back of the head. If all the markers are nearly equally distant from each other, some confusion may occur.
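The value of that front/back asymmetry can be made concrete with a quick pairwise-distance check. The sketch below uses made-up coordinates and a hypothetical 20mm threshold (neither comes from Vicon), purely to illustrate why unequal spacing helps the labeller tell front from back:

```python
import math

# Hypothetical headband marker positions in millimetres (illustrative only).
# The front pair is deliberately mounted closer together than the back pair.
markers = {
    "LFHD": (-60.0, 95.0, 0.0),
    "RFHD": (60.0, 95.0, 0.0),
    "LBHD": (-85.0, -95.0, 0.0),
    "RBHD": (85.0, -95.0, 0.0),
}

def spacing(a, b):
    """Straight-line distance between two named markers."""
    return math.dist(markers[a], markers[b])

front = spacing("LFHD", "RFHD")  # 120.0 mm
back = spacing("LBHD", "RBHD")   # 170.0 mm

# If front and back spacing were nearly equal, geometry alone could not
# distinguish the front of the head from the back.
assert abs(front - back) > 20.0, "headband too symmetric for reliable labeling"
print(f"front {front:.0f} mm, back {back:.0f} mm - distinguishable")
```

The same idea underlies the single-thigh-marker advice earlier in this section: any deliberate break of left/right or front/back symmetry gives the Autolabeler a geometric cue.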

How Does Vicon Describe Them ?


Vicon provides a number of standard files which will enable you to capture and generate an animated skeleton if you desire. Within Vicon, the one of interest for these captures is the marker file (wizard.mkr), which provides the following information.



A list of marker labels.
The links between markers, so you can display the green stick figure.
A list of labels which identify the body segments for the Autolabeler.
The links between the body segments.

This default text file can be found in \\Vicon\Models\Wizard.mkr. For the most part, you don't need to worry about this file, but if you're interested there is a detailed discussion of the marker file format in the Reference Guide. As our subject will be waving, make sure we've included hand markers in the Wizard.mkr file and that, if applicable, your output model / character has hand markers.
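The four kinds of information listed above can be pictured with a simplified, entirely hypothetical fragment. This is not the real Wizard.mkr syntax (which is documented in the Reference Guide); it only illustrates the kind of content the file carries:

```
# Marker labels
LFHD  RFHD  LBHD  RBHD  C7  T10  CLAV  STRN  ...

# Links between markers (drawn as the green stick figure)
LFHD-RFHD  LFHD-LBHD  RFHD-RBHD  LBHD-RBHD  ...

# Body segments for the Autolabeler, with their markers
Head   = LFHD RFHD LBHD RBHD
Thorax = C7 T10 CLAV STRN

# Links between body segments
Head-Thorax  ...
```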

So Have You Marked Up Your Subject?


Your little helper should now be standing in front of you with 35 or 37 markers on their person. Have you got them all? Check this quick list to make sure you've included all the essential markers.

Head - Four markers placed symmetrically, with the front two on the temples.
Chest - Two on the front and two on the back, plus a further two markers located on the shoulders.
Arms - Four markers: on the elbow, the inner and outer wrist, and the top of the knuckles.
Waist - Two markers on the front of the pelvis and two on the rear.
Legs and Feet - Five markers: on the knee, ankle, heel, toe and small toe.

You should also have included optional markers on the right shoulder blade, left upper arm and right thigh to help the Autolabeler. You will find that certain markers attached to the subject are relevant only for autolabeling; they have no impact on any subsequent processing that generates skeletal information. For example, optional markers on the shoulder blades or scapulae (R10, L10), upper arms (RUPA, LUPA) and the thighs (RTHI, LTHI) have been found to be particularly useful in achieving clean labeling. It's worth noting that you don't need all of them to label. Using R10, LUPA and RTHI may prove optimal.


3.3 Capturing and Calibrating Your Subject.


We can wait no longer. You are now ready to capture your first human moves! This section focuses on capturing your subject and on how to calibrate your subject prior to introducing autolabeling in the next section. [refer to section 3.4]. This simple procedure is the key to minimising the time wasted labeling trajectories in all your future captures with this subject.

Let's just check the following list to make sure everything is set up.

The system has been unpacked and plugged in, and a link has been established between Workstation and Datastation.
The cameras have been arranged around a suitable capture volume, and the apertures and sensitivities set to ensure that the smallest marker used is still seen at the furthest point from each camera.
You have created a new session directory in which to store your data.
You have performed a successful calibration using DYNACAL, with suitably small calibration residuals.
You have captured various wand waves as well as some of the other captures suggested at the end of the New User guide.

With your subject in their bodysuit and covered in reflective markers, you should now ask them to move into the capture volume.

Have You Adjusted Your Camera Sensitivities?


The final task prior to capture is the re-adjustment of all the camera sensitivities. It is vital for good results that you reset these sensitivities: you are now going to capture the smaller markers attached to your subject rather than the larger 50mm markers of the calibration apparatus. You may find it optimal to increase the sensitivity of each camera in the System|Live Monitor view, moving the slider up by a tenth or so of the scale.

DO NOT PHYSICALLY ADJUST THE APERTURES, AS YOU WILL ALTER THE CAMERA SETTINGS, WHICH WILL REQUIRE YOU TO RE-CALIBRATE. YOU DO NOT WANT TO DO THIS!


A good way of checking the sensitivity settings is to view your subject as they move to the far and near edges of the volume relative to each camera. At the far edge, you're trying to ensure that markers appear as more than three video lines. At the near edge, you are trying to minimize marker merging and blooming.

Using near and far views of the subject to adjust aperture settings.
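The three-video-lines rule of thumb can be sanity-checked with simple geometry. The numbers below (field of view, line count) are assumptions for illustration, not Vicon camera specifications:

```python
import math

# Assumed camera parameters for this sketch only - not Vicon specs.
FOV_DEG = 40.0      # vertical field of view
VIDEO_LINES = 480   # video lines spanning that field of view

def lines_spanned(marker_diameter_m: float, distance_m: float) -> float:
    """Approximate number of video lines a marker covers at a given range."""
    # Height of the scene visible at this distance, from the field of view.
    fov_height = 2 * distance_m * math.tan(math.radians(FOV_DEG / 2))
    return marker_diameter_m / fov_height * VIDEO_LINES

# Under these assumptions, a 25mm marker at 6m spans about 2.7 lines -
# borderline against the "more than three video lines" rule - while at 4m
# it spans about 4.1 lines, comfortably over.
print(round(lines_spanned(0.025, 6.0), 1))
print(round(lines_spanned(0.025, 4.0), 1))
```

This is why the far edge of the volume, not the near edge, is the place to verify that markers are still clearly resolved.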

When you're happy with the sensitivities of each camera view, you are ready to begin.

Let's Capture Your Subject.


Now it's time to capture. Make sure that you are in the correct session with the most recent calibration. You will hopefully remember how to capture data from the New User guide. [refer to section 2.6]. As a quick reminder, select Trial|Capture and then choose one of the trial types, such as General Capture. Hit capture and, when you and the subject are ready, start capturing. A good first capture is something as simple as walking through the volume. After 5 or 10 seconds, end your capture by hitting Stop and, before you know it, the data will have been uploaded to the Workstation. Depending on the trial type definition, reconstruction will start immediately, with a 3D workspace window opening automatically.


If not, you should now reconstruct this data. First open the trial by double clicking on the trial name or number. With the trial open, select Trial|Reconstruct and watch the status bar as the processing speeds through your capture. We will discuss the user-definable parameters later in this guide, but for now the default values should give you good results. [refer to section 3.6]. So, what does it look like? Well, it should look like the picture underneath. If you use the playback controls, your cloud of white circles in the workspace window should now move around just like your subject. Can you see the moves? We have found that it's quite remarkable how much you can perceive from this unlabeled display. Remember to use the right mouse button to zoom in and out of the view and the left button to rotate the data.

A reconstructed but unlabeled figure walking around the workspace.

You should capture a few more times so that you and your subject can become more comfortable with the process. A suitable sequence of moves may include the following, and you'll find our captured versions of these moves in the examples:

Warming-up exercises - stretches, arm swings, leg bends.
Walk in ever-increasing circles from the center outwards.
Walk around the edge of the volume with one arm held aloft. This is particularly useful for checking that your camera placement is covering the required height throughout the volume. We also find that this initial stage of captures is very beneficial for checking that the system is operating correctly and producing good data.
Run around the edge of the volume.
Walk from one side into the center of the volume and wave with one hand - the first of our Hello World captures.

At this stage, although you can see that it's your subject moving around, the system sees it only as a set of unlabeled trajectories. You can apply some meaning to these trajectories by either manually labeling them or, far more appealing, letting the Workstation automatically label them. Before this can happen, you have to manually label your subject.

What is a static trial and why do we do it?


A static trial is a short data capture (one to five seconds) in which the subject remains still. There are several reasons for doing a static trial:

You need to label one trial by hand so that subsequent trials can be labelled automatically. It is easiest to do this with static trial data.
Some models use extra markers to define anatomical points in a static trial, which are removed before collecting data in motion. The extra markers are labelled by hand in the static trial.
Some models require a static trial, in a certain position, to calculate parameters (such as segment lengths) which are used later when applying the model to motion data.

You may need static trial data for two or three of these reasons. Often, a single static trial will serve all of the above purposes. This depends on the details of the model used. When taking a static trial for autolabelling purposes, you have to go through the process each time you apply markers to the subject. Each new subject is a unique individual and each time you place the markers, you may put them in slightly different positions, so you should always generate a new subject file. Also, if a marker is moved, or if it falls off and is reattached, then you should create a new subject.


Let's Capture Your Subject In The Static Pose.


Select Trial|Capture, and choose the Subject Calibration trial type. This will capture TVD data only, for 3 seconds, with the options set for a static trial and for the pipeline to reconstruct immediately after capture.

FOR THOSE OF YOU WHO WILL BE USING BODYBUILDER, MAKE SURE YOUR FILE|USER PREFERENCES ARE NOT SET TO SAVE TRAJECTORIES IN MARKER ORDER.

The subject should take a static pose giving good marker visibility, with arms and legs out and the elbows and knees slightly bent. We call this the motorbike pose, and it's shown in the following image. This pose is recommended as it gives all the markers reasonable visibility in a sufficient number of cameras and also helps the calculation of the subject's internal parameters. Having the elbows and knees flexed helps to define the joint flexion axes precisely.

The static pose and its labeled c3d version.

When your subject is in position, hit capture and Vicon will capture video data for the default duration of three seconds and then reconstruct the 3D trajectories for all the frames. You should expect to see approximately the same number of trajectories as there are markers on your subject. For our example of 33 markers, a result of, say, 40 trajectory segments or fewer tells you the system is working as expected.

IF THE NUMBER OF TRAJECTORY SEGMENTS IS LESS THAN THE NUMBER OF MARKERS THEN YOU WILL NOT BE ABLE TO CREATE A COMPLETE SUBJECT FILE. YOU MAY WELL HAVE INCORRECT RECONSTRUCTION PARAMETERS. A SIGNIFICANTLY HIGHER NUMBER OF TRAJECTORY SEGMENTS MAY ALSO POINT TO THE WRONG PARAMETERS OR EVEN A POOR CALIBRATION. [REFER TO TROUBLESHOOTING].
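The rule of thumb above can be written out as a small triage function. The 25% tolerance below is a hypothetical threshold chosen so that the worked example (33 markers, 40 segments) passes; tune it to your own experience:

```python
def reconstruction_check(n_markers: int, n_segments: int) -> str:
    """Classify a static-trial reconstruction by its trajectory segment count.

    Rule of thumb from the text: expect roughly one trajectory segment per
    marker; fewer segments than markers means an incomplete subject file,
    while a large excess suggests bad parameters or a poor calibration.
    The 1.25 tolerance factor is an illustrative assumption.
    """
    if n_segments < n_markers:
        return "incomplete: fewer segments than markers - cannot build a full subject"
    if n_segments <= int(n_markers * 1.25):
        return "ok: segment count close to marker count"
    return "suspect: many extra segments - check parameters and calibration"

print(reconstruction_check(33, 40))  # the worked example from the text
```

A quick check like this makes the pass/fail criterion explicit before you invest time in labeling a bad trial.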


Make sure your subject has their joints slightly bent and that they try to stay still during the capture.

Manually Labeling The Trajectories.


At present, the marker trajectories in the workspace are all unlabeled. In other words, they have no meaning in relation to our subject. Therefore, for the only time in this session, you will have to identify each trajectory and label it with a predefined name. First we need to select a file which contains the list of markers suitable for this type of capture. To do this, select Trial|Attach Marker Set, from which you will be able to browse through your directories to find your choice. We've included the file Wizard.mkr in the directory \\Vicon\Models\.. Once you have selected this file, you will see the list of names appear on the right hand side of the screen. Do not panic, they are not Russian swear words. They are in fact abbreviations given to the marker locations. For a detailed discussion of marker files, have a look in the Reference Guide. If you are going to use the same marker file for all your captures, then go to File|User Preferences and select it as your Default Marker Set. You can also select Default Marker Sets for different trial types.

The art of manual labeling is a tricky one to describe yet so simple to perform. First, if not all the markers are visible in the first frame, step forward by dragging the time bar slider to the right until you find a frame with all markers visible; sometimes the first frame is missing a few. Move the view until you can distinguish the front of your subject and the four markers on the head. Zoom in until the markers fill the screen, and rotate to any point of view which you find clear. If you now move the mouse icon over a marker or, more importantly, a trajectory, you'll see what is known as the Mouse Helper appear on screen with a dash in its dialog box. The dash indicates that it is an unlabeled trajectory. The Mouse Helper is a great tool for assisting the selection of trajectories. When you label a marker, you actually label its trajectory. Watch that you don't confuse yourself by selecting Workspace|Hide All Trajectories, which will stop you labeling. Also, make certain the Mode is set as Label in the bottom right corner. Just to make it clear: the workspace shows a circle for the position of each marker in the current frame (the frame number shown below the time bar) and a line for the trajectory. By default the line extends one frame forward and one frame back


from the current frame, but you can make the trajectories longer if it helps. Usually, short trajectories give the clearest view, so the default display has short trajectories.

If you hold the mouse over what you think is the Left Front Head marker and click the left mouse button, the marker and trajectory will change color from white to yellow. This indicates that it has been selected. If you now move the mouse to the right hand list of marker names, hold it over the name LFHD and click the left button once, and the color of the trajectory will change to light blue and a 1 will appear next to the name. If you now move the Mouse Helper over the marker, the box will appear saying LFHD. It's that straightforward. The colors of trajectories will become familiar as you use Vicon more and more. White represents unlabeled and cyan (light blue) is labeled. The order of selection goes yellow, red, magenta, dark blue, yellow, red, etc. in a cyclic order.

You now select the rest of the trajectories and label them in the same way. Labeling one trajectory at a time would be frustrating, so Vicon allows up to 20 trajectories to be selected in the workspace before you apply the marker names. Vicon labels the first selected trajectory with the first selected name, so remember the order you picked them in. Once you've done it a few times it becomes fairly obvious, and each user develops their own preference for the order in which the markers are selected. If you select a trajectory by mistake, simply click the right mouse button once (while pointing at a black background area) and it will return to its original color of white (unlabeled) or light blue (labeled).

If you have labeled a trajectory incorrectly, reselect it, move the mouse to the bottom right of the screen and hit the Unlabel button; your trajectory will turn white again. Alternatively, select it, point straight at the correct label, and click again: the correct label will overwrite the previous one. Make sure that the box in the top left corner says (no subject) when labeling trajectories prior to creating your subject.


routine user

3.3

capturing and calibrating your subject

You will also have seen that as you label up your trajectories, a green stick figure appears. You will become very familiar with it as a useful visual aid for checking that the trajectories are labeled correctly; it also lets you visualize your captured move very quickly. From the static capture, you should now see the green figure in the motorbike pose with all trajectories labeled. You're now in a position to create the unique description of your subject. If you are using the same subject with the same marker set and locations for a number of sessions, you can use the last subject parameter file as an initial guess for your current session: copy the subject.sp file into the current session directory, select Trial|Options and check the old subject. Autolabel the static data and, if successful, you can create your new subject without labeling by hand.

Create a Subject.
With a fully labeled figure, go to Trial|Create Subject in the menus and the following dialog box will appear. First, input the name of your subject, then select the whole trial for the field range and all trajectories. There is no need to select the option to create a subject.C3D file unless you plan to create an Acclaim skeleton file later (a feature used mostly for animation purposes).
Trial|Create Subject...dialog box

If your subject has been still throughout the trial, you need not worry about the Field Range and can select the whole trial. You should also know that you can create a subject using a smaller number of fields around the current frame. To do this, select a period when all markers are visible and set an appropriate region around the current frame. This is useful if the subject moved during the trial or if some part of the trial contains bad data for other reasons. Alternatively, if you forgot to perform a static capture at the start of your session, you can create a subject from any trial. When you are happy with the settings, hit OK and Vicon will create your subject and store it as a subject.sp file in the current session directory. You can now use your subject to autolabel trials rather than go through the laborious process of manual labeling, which saves time.

IF A SUBJECT TAKES THE MARKERS OFF, OR THEY FALL OFF, OR THE MARKER LOCATIONS ARE CHANGED, THEN YOU MUST DO ANOTHER SUBJECT CALIBRATION.



3.4 Making Life Too Easy - Autolabeling


In Vicon's quest to fully automate motion capture, we have developed a sophisticated procedure to remove the frustrating task of identifying and labeling the 3D marker trajectories. The labeling procedure looks for groups of trajectories which correspond to the markers on each body segment. These marker sets are defined by the fixed distances between them. Thus the markers on the head, or on the pelvis, move together as a group, maintaining an almost fixed relationship, which makes them identifiable as a group. The labels of the markers belonging to a particular group are specified in the marker list file (MKR file). We've supplied a selection of default MKR files which should be sufficient to deal with most types of capture. If you wish to develop your own marker files, please read the relevant section in the Reference Guide.
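The rigid-group idea behind autolabeling can be sketched in a few lines of Python. This is only an illustration: the marker names follow the Vicon convention (LFHD and so on), but the dictionary layout is invented here and is not the real MKR file format, which is documented in the Reference Guide.

```python
import math

# Hypothetical marker groups: each body segment lists the markers
# that stay an (almost) fixed distance apart. Illustrative only -
# not the actual MKR file syntax.
SEGMENT_MARKERS = {
    "head":   ["LFHD", "RFHD", "LBHD", "RBHD"],
    "pelvis": ["LASI", "RASI", "LPSI", "RPSI"],
}

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def is_rigid_pair(traj_a, traj_b, tolerance=0.03):
    """True if two trajectories keep a near-constant separation.

    traj_a and traj_b are lists of (x, y, z) positions, one per
    frame. tolerance is the allowed fractional deviation (3%,
    mirroring the default Maximum Deviation autolabel parameter).
    """
    dists = [distance(a, b) for a, b in zip(traj_a, traj_b)]
    mean = sum(dists) / len(dists)
    return all(abs(d - mean) <= tolerance * mean for d in dists)
```

Two markers riding on the same segment pass this test; a marker pair spanning a joint, whose separation changes as the joint moves, fails it.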

If you haven't been capturing, then look in the example session \\vicon\userdata\wizard\prodn\ for our sample data.

Using the Autolabeler.


To use the Autolabeler, you must first select the subject. Initially, let's have a look at the data you captured earlier before using the subject data on a new set of captures. When post-processing your data, select Trial|Options and the following dialog box will appear. Select the name of the subject captured. The subject's name and the relevant marker file will now appear in the top left corner of the screen. The subjects are taken from the list of .sp files stored in the current session directory. A very quick and useful way of checking that your labeler is correct is to re-label your static trial: re-open the trial, select Edit|Unlabel All Trajectories and then autolabel the trial with your subject (press the Label button).

We'll assume that you've reconstructed a motion trial and have the workspace open, displaying a cloud of white trajectories (newly reconstructed but not yet labeled). To label these trajectories, simply select Trial|Autolabel or the Autolabel button on the toolbar. As the labeling progresses, you will see two types of messages appear in the Status Bar at the bottom of the screen. The first message will say Gathering statistics : frame **.


Trial|Options...dialog box

During this stage, Vicon scans through the entire trial, testing the separation between every marker trajectory and every other trajectory present at that instant. It looks for constant separations between marker pairs in all frames, to compare with the separations stored in the subject file created earlier. The second stage, Labeling trajectories, scans through the trial several times to find the best matches between the marker pairs measured in the subject file and the relative motion of the trajectories, starting with the root segment (thorax or pelvis in most cases) and working out along the chain. If you start the Autolabeler by mistake or wish to stop the process, simply press <ESCAPE>; note that you will lose all the labeling information. Once you've post-processed your existing trials, you will want to go and capture using your calibrated subject. You do this by selecting Trial|Capture and then checking the box against your subject in the Data Capture dialog box. This will automatically assign this subject to this and all subsequent trials until you de-select it. Now, you should go off and do a few more captures to become familiar with the use of subjects. We're sure you've got loads of ideas, but here are a few suggestions to highlight the power of the Autolabeler. You'll find our versions in the example sessions.
Repeat your Hello World capture but, this time, waving with both hands.
Walking through the volume, starting outside the field of view.
Running in and out of the volume.


Spinning on the spot.
Throwing punches and kicks while moving around the volume.
Press-ups or any other exercises.

Don't feel restricted by our suggestions: try whatever you like. We would expect all of these to label automatically, but you may encounter trials where the Autolabeler fails, which is where assisted labeling becomes an option.
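The matching at the heart of the second autolabel stage can be sketched as follows. Everything here is an assumption for illustration: the function name, the data layout, and the use of the 3% default Maximum Deviation as the acceptance threshold are ours, not Vicon's internals.

```python
# Sketch of labeling-stage matching: given a measured separation
# between two unlabeled trajectories, find the subject-file marker
# pair whose stored separation is the closest acceptable match.
# Names and data layout are hypothetical.

def best_label_match(measured_sep, subject_pairs, max_deviation=0.03):
    """Return the subject marker pair closest to measured_sep.

    subject_pairs maps (label_a, label_b) -> separation in mm.
    Returns None if no pair is within max_deviation (fractional).
    """
    best, best_err = None, max_deviation
    for pair, sep in subject_pairs.items():
        err = abs(measured_sep - sep) / sep   # fractional mismatch
        if err < best_err:
            best, best_err = pair, err
    return best

# Toy subject file: two marker pairs with their calibrated
# separations in mm (invented numbers).
subject_pairs = {("LFHD", "RFHD"): 170.0, ("LASI", "RASI"): 240.0}
```

With these toy values, a measured separation of 172 mm matches the head pair; a separation of 200 mm matches nothing and the trajectories stay unlabeled.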

Let's Check the Results.


Simply, there are three possible outcomes to the autolabeling process:
1. All valid trajectory segments are labeled correctly. Hurrah!
2. None of the trajectories have been labeled. Oops!
3. Some of the trajectories remain unlabeled or incorrectly labeled. Boo!
If the Autolabeler has completely failed to label any trajectories, there is probably a straightforward reason, which may be one of the following:
The trial you are trying to label is particularly noisy due to poor reconstruction, with a larger than acceptable number of ghost markers and marker flipping. [refer to Troubleshooting].
You have calibrated the subject incorrectly. This can happen if you use frames that have markers missing or occluded.
The selected subject or attached marker files are incorrect. This could be the result of trying to use the SP file from someone else, or of using an edited marker file which omits key markers.
The Autolabel parameters, Maximum Deviation and Minimum Overlap, are incorrect. [refer to section 3.6].
If any of these have happened, just work back through this guide to correct the problem. You may have problems with non-labeled trajectories due to the misidentification of a marker higher up the linked list of markers. For example, a failure to label the shoulder correctly may well mean that all the arm markers remain unlabeled. Most of the misidentifications can be cleaned up using the Vicon edit tools. [refer to section 3.8]. Another potential problem is the flipping of body parts, such as the left leg being labeled as the right leg and vice versa. This is less likely to happen if you use the recommended extra markers to create asymmetry. Don't worry if either of the above has happened, as you can help the labeling process when it has problems. We'll discuss this in a moment.

Labeled subject with missing arm

Finally, markers may have moved on the subject. This can be as extreme as a marker falling off, but it may also result from the subject's suit stretching and thereby displacing the markers. You can resolve this by creating a new subject file and using that for all subsequent trials. It is advisable to use a new subject name, as your previous trials will have been captured and processed using the old subject. For the lazy, we've included a function which allows you to adjust the stretch factor used when the system looks for rigid segments. Under Trial|Autolabel Parameters you will find a Maximum Deviation box which defaults to 3% but can be increased slightly to account for modest marker movement. The purpose of this parameter is to allow for the natural variations and measurement noise which vary the distance between markers on the same segment. If markers are displaced or fall off, then you should perform a new subject calibration if possible.

Assisted Labeling for Removing Those Stubborn Trajectories.


You will sometimes find that the Autolabeler has failed to label some or all of your trajectories, or got some of them wrong. Again, don't despair when you're looking at a blur of white dots in space, as you can easily help the system by hand. This involves kick-starting the labeling by manually identifying and labeling a number of key markers in a single frame, using the same technique you used when creating your subject. Simply identify three or four markers such as LFHD, RSHO, LBWT and RTHI, then select Trial|Autolabel or hit the icon. If this also fails, have a look at our troubleshooting guide. [refer to Troubleshooting].

So What Have We Just Done, Wizard?


Phew! We have just worked through the majority of the key elements necessary for achieving quick and impressive results. There was a lot mentioned, so let's briefly summarize the key steps.
Attach the markers to your subject. Wherever possible, create subtle asymmetries in marker locations to help the Autolabeler. This is particularly relevant for multiple subjects.
Adjust the sensitivities of all your cameras to ensure that even the smallest marker is always seen throughout the volume.
Select Subject Calibration as the trial type and then capture your subject in the static motorbike position.
Reconstruct your data.
Attach Wizard.mkr as the default marker set and then manually label all the trajectories.
Select Trial|Create Subject and type in the name of your little helper.


Save the trial as a precaution, then unlabel all trajectories.
Go to Trial|Options, select your subject, and check Include Subject Name in Labels.

Autolabel your static trial for subsequent processing in BodyBuilder.
Go to Trial|Capture, select your subject and General Capture, then capture another move.

Reconstruct and autolabel your data. If you've got a very complex move with lots of occlusions, you may want to help by labeling a few key trajectory segments.

Er, that's it. Yes, autolabeling really is that straightforward to use.

If you forget to perform a subject calibration, don't panic. You can create your subject using a number of frames from any trial.



3.5 Pipelining - Motion Capture Made Simple.


From the previous sections, you will have come to realize the power of Vicon in capturing your 3D motion and converting it into labeled virtual trajectories ready for subsequent processing. What Vicon now introduces is an approach to automate the whole capture process and allow the user to concentrate on achieving the desired performances. In this section, we are going to discuss the Vicon Pipeline and how you can use it in a number of different ways. Vicon has developed the pipeline process to let you capture away to your heart's content, safe in the knowledge that Vicon is doing the lion's share of the work. All you need to do is calibrate your capture volume and your subject, and then hit the capture button.

Introducing The Pipeline Process.


The Pipeline takes care of the whole capture process from the moment you start capturing, which is good news for you as it lets you grab another slice of pizza. Let us consider the ways in which you can use it:
As part of a trial type - processing the data immediately after the capture has taken place.
On previously captured trials - letting you fully process a trial whenever you wish.
On batches of trials - allowing you to leave Vicon processing the data over lunch or when you go home for the night.

Using the Pipeline in Trial Types.


If you select Trial|Trial Types, you open the Trial Types dialog box. This allows you to review the settings of each trial type, or create new ones. A trial is governed by various parameters, such as the length of data capture, the types of data to be captured ("video" means motion data, "analog" refers to force plate and EMG data, and "movie" means JPG picture video files), and the pipeline of automatic processing which you want to follow on from capture. You can define several types with different parameters, which speeds up your capture work. For example, if you click on "New", you could create a trial type called "Static" which captures video (motion) data only, for three seconds, with the static trial flag set and reconstruction selected in the Pipeline.


Pipeline dialog box for pipeline capture.

Now, let's create a new trial type called "Motion Trial" (you can use any name you like, but make sure it describes the trial type). Choose Indefinite as the default duration, which means capture will continue until you tell it to stop. Under Options, near the bottom of the box, choose "Select last used Subject" for labelling, and don't select the other two. We will assume that if you are doing a motion trial, you have already done a static trial and created a subject. We also assume that you don't want to flag a motion trial as Static, and you don't want to include subject names in labels - that's only useful if you have two subjects in view. Now click the Pipeline button to choose the automatic processes. Select the options which set the Workstation to reconstruct, autolabel, fill all gaps under 10 frames in duration, and save the trial. All these are tasks you've been doing in stages before, and we'll deal with filling gaps in section 3.7. Check that your Reconstruction and Autolabel parameter values are correct by highlighting the process and then selecting the Options button in the bottom right corner of the dialog box. [refer to section 3.6].
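Conceptually, a trial-type pipeline is just an ordered sequence of processing steps applied to a trial. The toy runner below mirrors that idea; the step names follow the dialog, but the functions and the trial dictionary are stand-ins, not the Workstation's internals.

```python
# Toy model of a capture pipeline: each step takes the trial and
# returns it with more processing applied. Placeholder bodies only.

def reconstruct(trial):
    trial["trajectories"] = "3D points"   # would build 3D data here
    return trial

def autolabel(trial):
    trial["labeled"] = True               # would apply subject labels
    return trial

def fill_all_gaps(trial, max_gap=10):
    trial["max_gap_filled"] = max_gap     # would interpolate breaks
    return trial

def save_trial(trial):
    trial["saved"] = True                 # would write the .c3d file
    return trial

# The "Motion Trial" pipeline described in the text.
MOTION_TRIAL_PIPELINE = [reconstruct, autolabel, fill_all_gaps,
                         save_trial]

def run_pipeline(trial, steps):
    """Apply each processing step to the trial in order."""
    for step in steps:
        trial = step(trial)
    return trial
```

Defining a different trial type is then just choosing a different list of steps, which is exactly what the Pipeline dialog lets you do.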

Using this trial type, you should try capturing some more data. Get your subject to do another walk across the volume, but with our third type of Hello World - say, like Al Jolson, down on one knee. Select Trial|Capture or hit the capture icon, and then select Motion Trial. If you're happy with the settings, capture away as before. Check that your subject's name is listed in the subject box and that it is selected. If it's not, you may not have selected "Select last used Subject" in the trial type, or you may have missed the previous section on creating subjects.

What happened? Well, hopefully your subject gave an enthusiastic hello and you're pleased with that. But what has been going on on the screen? From the moment you stopped capture, you will have seen the reconstruction of all the frames, then the Autolabeler gathering statistics before labeling the trajectories. Finally, all gaps less than 10 frames in length will have been filled and the trial saved. The result is the green stick figure walking through the volume giving the very same wave.
Hello World processed using pipeline capture

When you are processing a pipeline, by default you will see what is called the Processing Log. This tells you the trial that's being processed, the current stage it's at, and how long it took to complete. You can select when you want to see the Processing Log using File|User Preferences.


Now that feels good, doesn't it? You hit start and stop, and within a few seconds you had a 3D representation of your subject without having to do anything else. You do not have to use Motion Trial; you can create your own trial types with your own selection of processes, such as reconstruct only. If you have analog input, you can select that too (see the later section on force plate setup).

Try a few more captures of different walks, runs, kicks, etc. to get used to the power of pipeline processing. After you've done a few captures and are feeling confident that Vicon is delivering the goods in terms of high quality, labeled 3D data, you'll probably want to maximize your time with the subject (remember, time is money) and leave all the processing till later. For this, you could define a trial type which does nothing but go on to capture the next one. Call it "Repeated Capture" and, in the pipeline, select only "Capture Next Trial".

Using Repeated Capture To Save Time And Energy.


Here's where the pipeline can be used to great effect and save a lot of button clicking. This time, when you capture, select Repeated Capture as the trial type. With the previous settings, the Workstation goes directly to the Arm|Start dialog box when you've finished capturing. This gives you the flexibility to get a lot of captures done in a short space of time. Again, have a go and get your subject to try things again and again until you get a good take. To stop capturing, hit Cancel in the Arm|Start box.
Capture Next Trial Options Box.

If you're not using a move sheet, we recommend displaying the trial dialog box, as you can then enter a description for each trial prior to the capture. This will help no end in the post-processing, as you can easily have 50 to 100 moves captured in a day's session.



How to Pipeline Previously Captured Trials or Batches of Trials.


It is advantageous to process some of your trials immediately after they are captured, to check that Vicon is generating good data. Though we recommend this for at least the first few trials and at certain points in the session, we have also enabled you to capture a lot of trials quickly and process them off-line later. You achieve this by either selecting each trial one by one or by batch processing a number of them. A number of our existing users will capture a session of moves, assess the first few captures and, when happy with their reconstruction and autolabel parameters, pipeline the whole session offline. When they come back later, they can look at the Processing Log to check the quality of the data.

In your session directory, an unprocessed trial is indicated by having TVD data only (a blue symbol with a V inside). By double clicking on the trial line, you can select Trial|Reconstruct followed by Trial|Autolabel, then Edit|Fill All Gaps and finally File|Save. Alternatively, you can select Trial|Pipeline, check each of the above processes and then let the Workstation do the hard work; the final result is just the same. Now, you probably don't want to do this for each trial, so Vicon allows you to batch process your session data. With the current session open, but no trial specified, select Trial|Pipeline. The dialog box will be the same as before, except you can now apply it to either all trials or to only the unprocessed trials. These are defined as those which do not already have a c3d file.

IF YOU DON'T APPLY THE PIPELINE TO UNPROCESSED TRIALS ONLY, THEN ALL TRIALS WILL BE REPROCESSED AND MAY OVERWRITE GOOD DATA. BY DEFAULT, VICON WILL PIPELINE UNPROCESSED TRIALS ONLY.

Select the functions you want to apply to your data - typically Reconstruct and Label, and maybe Fill Gaps. The other functions are discussed at the end of this section. When you are happy, click on Process Now (not OK, as this merely accepts your selection) and Vicon will begin pipelining your captured trials. At any point, you can pause and then resume the processing, or stop or cancel it.

REMEMBER TO SELECT SAVE TRIAL, ELSE ALL THAT PRECIOUS PROCESSING WILL BE LOST. THIS CAN BE VERY FRUSTRATING.


There are Other Pipeline Options Too.


Let's take a moment to briefly review the functions currently available to the pipeline. Select Trial|Pipeline again and have a look at the dialog box.

Reconstruct and adjustment of parameters. This will convert your raw video data into unlabelled 3D trajectories. By selecting Options, you can adjust the reconstruction parameters if you desire. [refer to section 3.6 for discussion].

Autolabel and adjust parameters. Using your selected subject(s), this will automatically label all your trajectories. Again, selecting Options will allow you to change the parameters if you wish. [refer to section 3.6].

Fill Gaps and adjustment of interpolated fill gap. This function will remove all the breaks in all your trajectories, provided the gaps are less than the user-defined maximum. [refer to section 3.7].

Remove subject prefixes from labels. When you have been labeling your trajectories, you may well have been adding the subject's name to each label. This is particularly important for multiple subjects, but for a single subject it is not essential and may even get in the way. This function removes all reference to the subject's name prior to saving the data.

Save trial. Saves the .c3d file in the session folder.
Save subjects separately. This is suitable for multiple subject capture where you want to export the data of each subject as an individual c3d file.

Dump to ASCII File. This is an example of our Plug-in API. A Plug-in is a small program, separate from the Workstation program, which is available within Workstation as an extra function. This one writes out your c3d data in an ASCII format suitable for analysis by spreadsheet software, such as Microsoft Excel. [refer to section 4.11].

Create CSM. Another example of the Plug-in API. This exports c3d data to one or more CSM files (one per subject), a format compatible with 3DSMax animation software. A specific marker set is required, which was developed from the Vicon standard marker set already discussed. [refer to the Animation Pipeline].

Run Gait Model (optional extra Plug-in). If you have ordered the gait analysis plug-in function, you can have this standard gait model applied to your data automatically via the pipeline. The model requires a particular marker and label set and is documented separately.

Capture next trial. Decide whether or not to show the trial type dialog box, which is of particular use when you want to capture a large number of trials very quickly for processing later.

The pipeline allows you to select any of the processes shown, as well as any plug-ins you may wish to add. Remember, Vicon now offers a full plug-in interface allowing you to add extra functionality to the motion capture pipeline. Vicon provides a full software developer's kit, including header files, documentation and examples. [refer to the Advanced User Guide].

What Have We Learned In This Section?


So there you go - we've introduced the concept of pipelining your motion capture data. Vicon allows you to process your data as and when you desire. By defining a trial type and a pipeline, you can automate your capture session and improve the productivity of your system. You should now be familiar with using the pipeline during the actual capture - say, to perform a series of repeated captures, each 5 seconds long - as well as being able to pipeline the reconstruction and save trials off-line. Not sure how to do these? Go to Trial|Trial Types, select Repeated Capture and set the default duration to 5 seconds. To pipeline data off-line, open the trial of interest, select Trial|Pipeline, check Reconstruct and Save Trial, and hit Process Now. It is that simple.

Throughout the last few sections, we've been asking you to capture your subject performing various moves ranging from ordinary walks to slightly more funky Hello World trials. You should have seen something on your screen that looks like the following image - our little green stick figure walking up and waving with the same characteristics as your performer. Ain't that cool?

The waving little green Vicon figure

You should now feel comfortable directing your subject to perform whatever takes your fancy; this is up to your imagination and the skill of your subject. We've included a wide variety of different types of moves in the sample sessions which you can pinch ideas from. In these examples, we've also used multiple subjects and props, which we discuss in the Advanced User Guide. You should now feel free to capture whatever you like, confident that you and Vicon will produce high quality data. Have fun!



3.6 Let's Take a Moment to Consider Reconstruction and Autolabel Parameters.
We've been talking about the reconstruction of 3D trajectories throughout these two guides, but now is a good time to discuss it in a little more detail; a thorough explanation can be found in the Reference Guide. Vicon automatically generates the 3D co-ordinates of each marker from the 2D video image data. To allow the system to measure a wide variety of actions, from large volumes to small facial expressions, a set of parameters is available to the user to control the reconstruction process. These parameters can be altered to improve data that initially appears to be poor, according to some of the following guidelines. If the parameters are set too loose, you will tend to find that a number of false trajectories or gaps have been created which need to be dealt with using the Vicon editing tools. [refer to section 3.7]. If, on the other hand, your parameters are set too tight, you will find that known markers fail to reconstruct, or your trajectories have a significantly higher number of breaks. This can result in the loss of vital data, and a lot more work is required of the Autolabeler. Don't despair, as it's not really such a minefield of potential errors: the tried and tested default values will generate good data for the majority of your capture scenarios.

Reconstruction parameters Option Box



The Maximum And Minimum Vectors.


These six values are the dimensions of the reconstruction volume in mm - the volume shown by the purple box in the workspace (select View|Reconstruction Volume or the icon on the tool bar). These volume parameters are a powerful tool for enabling reconstruction to run quickly and smoothly. They define the region of interest and hence automatically discard the majority of phantom or false markers resulting from other visible light sources or stray markers. We have selected a default volume of +/- 3.5m about the x and y axes relative to the origin. The height in the z axis is set at 2.5m, with the minimum at -0.1m below the floor. The minimum Z vector is normally set to -100mm rather than zero as a precaution against variations in the floor surface: if you set the limit at the z=0 level and the floor dips below this, markers close to or in contact with the floor may be discarded from reconstruction, causing unnecessary breaks in trajectories.

OK, let's summarize. If you have a larger reconstruction volume than necessary, you may generate more false markers and may slow down reconstruction, but you will capture every possible trajectory. If you have a smaller volume, you'll have far fewer ghosts and the software will run faster, but you run the risk of cutting off useful trajectories. So always take a minute to consider your optimum limits. When experimenting with a new camera set-up, it is useful to set the parameters larger than your measured volume to ensure that reconstruction is initially limited only by marker visibility. Once you've reconstructed a couple of trials, such as a walk around the edge of the volume, you should optimize the vector values to speed up the process.
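The volume test itself is simple to picture: a candidate 3D point survives only if every coordinate lies between the minimum and maximum vectors. A minimal sketch, using the default values quoted above (assumed layout, not Vicon's internal representation):

```python
# Sketch of the reconstruction-volume test. All values in mm, using
# the defaults described in the text: +/-3.5m in x and y, 2.5m high,
# and -100mm below the floor in z.

VOLUME = {
    "min": (-3500.0, -3500.0, -100.0),   # x, y, z minimum vectors
    "max": ( 3500.0,  3500.0, 2500.0),   # x, y, z maximum vectors
}

def inside_volume(point, volume=VOLUME):
    """True if a candidate 3D point lies inside the volume box."""
    return all(lo <= c <= hi
               for c, lo, hi in zip(point, volume["min"],
                                    volume["max"]))
```

Note how the -100mm floor margin keeps a marker a few centimetres below z=0 (a foot marker on a dipping floor, say) inside the volume, while genuinely stray points are discarded.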

Maximum Acceleration (mm/s/s).


This parameter is used when starting new trajectories. If its value is set too low, new trajectories will not start, resulting in missing data. If its value is set too high, the chances of assigning the wrong point to a trajectory are increased, especially when many markers are in close proximity. We recommend the default value of 50. If capturing fast moving objects, increase this; for slower objects, decrease it.



Maximum Noise Factor.


This parameter is used in the tracking of trajectories and affects their continuity. A lower value means that predictions have to be more accurate for trajectories to continue without breaking, but it reduces the possibility of trajectory cross-over due to misidentification of predicted points. Higher values are more tolerant of noisier data and erratic movements. We recommend the default value of 7, but you should experiment to see if changing it improves tracking: increase it if breaks occur or if trajectories fail to start easily, and decrease it if trajectory crossovers occur. Lowering the noise factor to as little as 3 will generate more gaps in the trajectories as noisy data is discarded, but may provide better results if the trajectory segments can then be autolabeled correctly.

The Intersection Limit (mm) And Residual Factor.


These two parameters are used in conjunction with each other and with the average calibration residual of your cameras. The Intersection Limit defines the upper limit on the separation, in mm, between the rays from two cameras which may or may not contribute to the reconstruction of a marker. The optimum value depends on the size of the reconstruction volume and the quality of the camera calibration. For capture in a smaller volume, the calibration residuals should also be smaller, resulting in a lower limit.

If your calibration is producing low residuals (less than 2.0), set the Intersection Limit to about 12. If the residuals are higher, try increasing the limit. You will find that this results in fewer breaks and smoother tracking; however, the trajectories will start later and, beyond a certain limit, may not start at all. You should reduce the setting if trajectories start to cross over or fail to start. Remember that the Intersection Limit has a direct effect on the residual calculated for each point on a trajectory. [refer to section 3.7 on pop-up graphs].

reconstruction and autolabel parameters

3.6

routine user

If the limit is increased then you are more likely to include all the rays belonging to the same trajectory. Its advantage is allowing correct but noisy data to be used, which can be filtered later. If the limit is reduced, rays which should contribute to a single genuine trajectory may instead form a false or ghost trajectory. You may also see the crossing over (flipping) of the trajectories of two markers which are located close to each other. Again, this will lead to a far higher segment count. When experimenting with a new set-up, it is essential to vary the Intersection Limit to find the optimum value. This is a process of trial and error: take an easy capture - say a gentle walk - and then reconstruct with different values until you get a minimum number of segments.

Setting a larger value for the Residual Factor allows a larger acceptable error in the contribution of a camera ray to the reconstruction of a particular point. This causes fewer ghost trajectories to be generated, but it may prevent certain camera rays from being used for the reconstruction of another point for which they may be more appropriate. Setting a lower value tightens the required distance between intersecting rays, which may well result in a greater number of ghosts.
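The ray-separation test at the heart of these two parameters can be sketched in a few lines. The following is a minimal geometric illustration, not Vicon's actual algorithm, and all function names are our own: each camera contributes a ray (an origin and a direction), and a pair of rays is allowed to form a 3D point only if their closest approach is within the Intersection Limit.

```python
# Illustration only: two-ray closest approach versus the Intersection Limit (mm).
# Rays are given as an origin and a direction in 3D; plain lists, no libraries.

def closest_approach(o1, d1, o2, d2):
    """Return (distance, midpoint) of the closest approach of two 3D rays."""
    # Find p1 = o1 + t1*d1 and p2 = o2 + t2*d2 minimising |p1 - p2|.
    w = [o1[i] - o2[i] for i in range(3)]
    a = sum(d1[i] * d1[i] for i in range(3))
    b = sum(d1[i] * d2[i] for i in range(3))
    c = sum(d2[i] * d2[i] for i in range(3))
    d = sum(d1[i] * w[i] for i in range(3))
    e = sum(d2[i] * w[i] for i in range(3))
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # parallel rays never converge
        t1, t2 = 0.0, e / c
    else:
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
    p1 = [o1[i] + t1 * d1[i] for i in range(3)]
    p2 = [o2[i] + t2 * d2[i] for i in range(3)]
    dist = sum((p1[i] - p2[i]) ** 2 for i in range(3)) ** 0.5
    mid = [(p1[i] + p2[i]) / 2 for i in range(3)]
    return dist, mid

def reconstruct(o1, d1, o2, d2, intersection_limit_mm=12.0):
    """Accept the ray pair as one marker only if it passes within the limit."""
    dist, mid = closest_approach(o1, d1, o2, d2)
    return mid if dist <= intersection_limit_mm else None
```

Tightening the limit in this sketch rejects ray pairs whose closest approach is too wide, which mirrors the trade-off described above: fewer noisy reconstructions, but genuine rays may be excluded.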

Predictor Radius (mm).


When a trajectory has been created, a predictor looks for the next point of that trajectory in the next frame. It does this by extrapolating the point forward in time and comparing it with the newly calculated 3D points. As long as a new point is within the allowable error of the predicted point, it is considered a point on the same trajectory. The Predictor Radius is this limit. A higher value provides a wider search volume for continuing to track a trajectory, but once this parameter exceeds a certain value the setting may have a negligible effect, since the errors on the previously tracked points become more of a limiting factor. Set the Predictor Radius to about 30mm and try increasing this value if trajectories tend to break easily or if you are capturing fast or erratically moving markers. Setting a large value may well generate larger numbers of mismatches (or flips). Setting a smaller value limits the tolerance of acceptable reconstruction points belonging to a given trajectory and results in more trajectory breaks.


A good initial radius may be set to half the minimum marker separation, typically a value of 30mm for a human subject. If you are capturing a face or hand then this value should be far smaller, typically 5mm.
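The predictor test described above can be sketched as follows. This is a simplified illustration under the assumption of constant-velocity extrapolation; Vicon's predictor is more sophisticated (it also weighs the noise factor and past residuals), and the function names are ours.

```python
# Illustration only: does a candidate 3D point continue a trajectory?
# The last two tracked points are extrapolated linearly, and the candidate
# is accepted if it lies within the Predictor Radius of the prediction.

def predict_next(prev, last):
    """Constant-velocity extrapolation of the next marker position."""
    return [2 * last[i] - prev[i] for i in range(3)]

def continues_trajectory(prev, last, candidate, predictor_radius_mm=30.0):
    """True if the candidate point is close enough to the predicted point."""
    pred = predict_next(prev, last)
    err = sum((candidate[i] - pred[i]) ** 2 for i in range(3)) ** 0.5
    return err <= predictor_radius_mm
```

A fast-moving marker drifts further from a linear prediction between frames, which is why the text suggests increasing the radius for fast or erratic movement and shrinking it (to 5mm or so) for closely spaced face or hand markers.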

Non Circular Markers.


In the reconstruction process, Vicon is optimized for spherical markers, which yield maximum accuracy. However, in the real world of motion capture, a marker as seen by a single camera can become partially obscured, resulting in a non-circular image. Vicon allows three ways of dealing with these occurrences. YOU WILL ALSO GET DISTORTED CIRCULAR DATA IF THE CAMERA APERTURES / SENSITIVITIES ARE SET INCORRECTLY, CAUSING BLOOMING AND MERGING. [REFER TO SECTION 2.4].

Discard. You can choose to always discard these marker images, on the basis that you've got plenty of clean data from all your other camera views. This is the default as it is the most reliable.

Accept. You can also choose to accept the derived marker center. This is of use when using non-spherical markers (such as flat circular discs) which are unlikely to overlap or be partially obscured. This does, however, set a wider tolerance.

Split. You may also decide to split the non-circular marker image into two. This is applicable only when the markers are spherical and may be overlapping, but you feel you have only a few views of the markers. The effects of this feature are limited, so we don't recommend that you focus a great deal on it.

Reconstruction and the Segment Count.


When reconstructing, the first indication of the precision of your system is the number of trajectory segments created. This is the figure shown in the status bar when reconstructing. It is also displayed in the bottom right corner as the total number of unlabeled trajectories upon completion of processing.

So can you guess the perfect situation?


Yes, the number of trajectory segments in each frame and in total is equal to the number of markers visible on your subject. In reality there will be only a small number of occasions when this happens, as you're going to get your subject to produce all sorts of moves likely to cause occlusions and hence breaks in trajectories. If the segment count appears high, don't panic! There are a number of reasons why it may be higher than you expected or wanted.

Incorrect reconstruction parameters.

This is the primary reason why you have a larger total. Taking time to optimize these parameters at the start of your session will give you significant savings in labeling and editing.

Actions are occluded by other objects or subjects.

You will tend to see a higher total when you're capturing another subject or props. Using more cameras to increase coverage of the capture volume is one solution. You will also see an increase in the number of segments if your subjects move in and out of the volume. The Autolabeler will happily deal with most of the labeling issues caused by these sorts of increases in the trajectory segments.

Poor calibration caused by either insufficient DYNACAL data or camera set-up.

This is your worst case scenario and could be a symptom of a few different things. A more obvious reason for poor reconstruction is a camera being knocked or moved after calibration. If this could be the case then you'll have to recalibrate your volume. [refer to section 2.5]. YOU SHOULD ALWAYS CREATE A NEW SESSION IF YOU RECALIBRATE, TO ENSURE THAT YOUR DATA IS PROCESSED USING THE CORRECT CALIBRATION.

When reconstructing, if the number of segments created in each frame never matches the number of markers you've attached to your subject, then you have incorrect reconstruction parameters. This is a good one to check for when you take your static calibration, as your subject should have all markers visible in the majority of views and for most frames.


Autolabeling And Its Parameters : Maximum Deviation And Minimum Overlap.


We believe that the Autolabeler works in the same way as most skilled Vicon users, but in a fraction of the time. The labeling process involves fitting a model of the subject to the reconstructed 3D trajectories. The information it uses in building this model comes from the following sources;
The processing of the subject calibration trial.
The automatic refinement of the model for each trial through the analysis of the pattern of the 3D trajectories.
A pair of autolabel parameters and a marker file supplied by the user.

As the subject moves, the separations between pairs of markers on the same body segment remain relatively constant, while those between markers on different segments continually change. The Autolabeler tests the separation between every marker trajectory and all the others that co-exist. Because occlusions cause breaks in trajectories, many trajectories do not exist for the duration of the trial. Therefore the Autolabeler will look for trajectories that co-exist, or overlap, for at least the number of samples defined by the parameter Minimum Overlap. We have carried out a wide range of tests on human motion trials and found that a value of 20 to 30 samples is satisfactory. If the trial is short and very fragmented, with many trajectory segments, then you should consider reducing the overlap. Similarly, if the trial is long with a large number of continuous trajectories then you should increase the overlap.

Autolabel Parameters Option Box


YOU SHOULD NEVER SET THE OVERLAP TO BE GREATER THAN 25% OF THE TOTAL DURATION OF THE TRIAL.

To test whether a pair of markers lie on the same segment, the Autolabeler sets a threshold for the change in their separation during the entire period of overlap. This threshold is defined as an acceptable percentage change in the separation and is known as the Maximum Deviation. If the change in separation is less than this value, the markers are said to represent a tight pairing. Generally, this period of overlap is a lot longer than the Minimum Overlap.

Again, our tests have shown that a Maximum Deviation of 3% is generally satisfactory. If you find that the Autolabeler is failing to recognize segments where markers are close together or if there is a lot of body tissue movement, then you may want to increase the deviation value. If you choose a large value, you run the risk of pairing markers which are not on the same segment. We recommend a maximum value of 5%.
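The two parameters combine into a single rigid-pair test, which can be sketched as follows. This is our own simplified illustration of the idea, not Vicon's model-fitting code, and the helper names are hypothetical.

```python
# Illustration only: are two marker trajectories a "tight pairing"?
# Trajectories are dicts mapping frame number -> [x, y, z] (or None when
# the marker is occluded in that frame).

def separation(p, q):
    """Euclidean distance between two 3D points."""
    return sum((p[i] - q[i]) ** 2 for i in range(3)) ** 0.5

def tight_pair(traj_a, traj_b, min_overlap=20, max_deviation_pct=3.0):
    """True if the pair co-exists long enough and its separation stays
    within Maximum Deviation percent of the mean separation."""
    frames = [f for f in traj_a if f in traj_b
              and traj_a[f] is not None and traj_b[f] is not None]
    if len(frames) < min_overlap:
        return False                     # not enough co-existing samples
    seps = [separation(traj_a[f], traj_b[f]) for f in frames]
    mean = sum(seps) / len(seps)
    worst = max(abs(s - mean) for s in seps)
    return 100.0 * worst / mean <= max_deviation_pct
```

In this sketch, markers on the same rigid segment pass the test while a pair whose separation drifts (different segments, or heavy tissue movement) fails - which is exactly why raising Maximum Deviation helps with soft-tissue wobble but risks false pairings.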

What Have We Learned In This Section?


You should now feel confident enough to go in and tweak the various parameters to improve the quality of your data. Vicon lets you change these values because we know that you will be using the system to capture in many varied scenarios, with different cameras as well as different sized volumes and markers. Let us summarize a few guidelines for troubleshooting reconstruction errors.
When setting up for a new volume, ensure that your Minimum and Maximum Vectors provide a sufficient reconstruction volume.
The smaller the distance between markers, the smaller the Predictor Radius. This is mirrored by the need to use smaller markers to capture smaller displacements.
The faster the action, the higher the Maximum Acceleration.
In a larger volume, increase the Intersection Limit.

Alas, the next section is the point at which we have to expose you to the realities of motion capture. We need to highlight some of the problems that you may come across and the tools that Vicon provides to help you produce high quality data from non-perfect conditions.


3d data editing

3.7

routine user

After the Shoot : 3D Data Editing.


We've hopefully explained the whole motion capture process in a no-frills way, which means you're comfortable with everything from setting up your cameras to calibrating your volume. You can capture and label your subject and now appreciate the simplicity of pipeline processing your data. We hope that you have had a chance to actually run your own session and produce lots of high quality data.

Because the Vicon system needs at least two cameras to see each marker to find its position in 3D space, there may be times when markers are occluded. In these cases we have provided a powerful suite of tools in BodyBuilder which will help you reconstruct missing data. So take a close look at the factors which may have led to the breaks in the data: things like the number of cameras you are using, the number of subjects being captured simultaneously, the shape of the capture volume, and the complexity of the moves. If you have done your best to maximize the system, then it is time to look into BodyBuilder to put the final touches on the move. If you do not have BodyBuilder, many of the same editing tools are available in Workstation, but not the more "creative" ones - only the ones a respectable scientist can use!

For those of you who have not been capturing your own data, have a look at our examples. A number of the trials are only partially cleaned up, giving you the chance to familiarize yourself with the various editing tools.


The Problem of Occlusion - What it is and Why it Happens.


This is the most common problem associated with the present approach of all optical motion capture systems. The 3D reconstruction of a marker is calculated by projecting the 2D marker center in the camera view as a ray. The point at which this ray intersects with another is said to represent the 3D location of the marker. The more cameras that can see the marker, the more rays intersect, and hence the more precise the derived 3D location. An occlusion represents a moment when the marker is hidden from view. This can be due either to other body parts of your subject or to the other props or subjects you're capturing with. A prime example occurs when your subject faces in a direction away from two of the cameras. The markers on their back will be clearly visible, but the CLAV and STRN markers will be occluded in these two views.


The main problems occur when a marker is occluded in all views (or all but one camera view), which results in breaks in the trajectory. This is where the placement of your cameras becomes important to minimize such scenarios. If you are going to have actions where your subject is bending over or getting close to the floor, then consider repositioning two or three of your cameras lower to allow them to view those markers most likely to be occluded.

The main clues to occlusion are a higher total segment count and the presence of gaps in your trajectories. These can be seen either in the workspace when playing the data or by glancing at the continuity chart, which we'll discuss in a moment.

An occluded figure - with one arm unlabeled.

As we have seen, the Autolabeler is sufficiently flexible to deal with labeling the broken trajectories. Vicon also provides an edit tool called Fill Gaps which, as the name suggests, fills the gaps caused by occlusions by interpolating between the sections of actual trajectory.


What Are Phantom Markers And How Can I Avoid Them ?


A phantom marker is a false 3D trajectory calculated by Vicon. Phantoms are primarily a consequence of poor camera calibration, poor linearization or relative camera positions. They can also be the result of a rare optical occurrence when two co-planar cameras see two markers, as shown in the following diagram. When they do appear, they are easily recognizable at some distance from your subject, typically on the edge of the reconstruction volume, and then only for a few frames. Using a larger number of cameras means that more rays are used to reconstruct a trajectory, further reducing the chance of a phantom marker. We have designed Vicon 512 so that there are far fewer occurrences.

What Are Ghost Trajectories?


A ghost marker is a trajectory that appears very close to an existing trajectory over a short duration. They are commonly found to be separated by a value just beyond the intersection limit. The following picture shows an example.

Example of Ghost Trajectory


Ghosts are the result of poor calibration, incorrect reconstruction parameters, or the occasional creation of false centers in the 2D image, which we'll discuss in a moment. The majority of these ghost trajectories will be thrown out by the Autolabeler as it looks through the whole trial for continuous trajectories. For the most part, they can simply be ignored as noise, but occasionally you will find that they have been labeled, causing trajectory overlaps. Have a look under Window|New Continuity Chart to see if you can establish which cameras are the main contributors of the ghosts. These are the cameras you may want to adjust in terms of location, orientation and/or sensitivity settings.

When two trajectories are reconstructed in a certain range of consecutive frames and are given the same label, a warning is given that trajectories with the same label overlap. If this is an error of manual labeling, the mistake should be corrected before saving the trial. If the overlap is the result of poor reconstruction, the system will assume that the later trajectory is the correct one. If this leads to acceptable results, saving the trial will reduce the two trajectories to one. If not, the data will be saved as two separate trajectories, for example LTOE and LTOE-1. Dealing with ghost markers is a tricky scenario. If there are many present in your reconstruction then you may wish to consider increasing the Intersection Limit. If this fails to resolve the issue then you should consider re-calibrating your volume. THINK CAREFULLY BEFORE INCREASING THE INTERSECTION LIMIT AS THIS CAN HAVE AN ADVERSE EFFECT ON YOUR RECONSTRUCTION, POSSIBLY INTRODUCING MORE NOISE ON YOUR DATA. [REFER TO SECTION 3.6].

How Do Those False Centers Occur?


Another cause of an increase in trajectory segments and ghost trajectories is poor 2D video data. This is principally the result of not taking time to set your apertures and sensitivities correctly. You should always remember that positioning your cameras is a compromise, as they need to see markers at both the near and far edges of the volume. The three main examples of poor sensitivity are the blooming of a marker, the flickering of a marker and the merging of two or more markers.


Blooming occurs when a marker is close to a camera with a high sensitivity; Vicon may actually consider it to be two smaller markers and derive two false centers.

Flickering occurs when a marker is at the far edge of the volume and the sensitivity is too low so that it appears to be flashing in and out of view.

Merging can occur when two markers, positioned close to each other on the subject, are aligned in the same camera view. It can also occur as a result of the sensitivity being too high. Either way, the result can be that the system merges the two markers into one larger false marker. You can see this in the live monitor view.

You will always see some merging due to the fact that markers will appear partially occluded by other markers in camera views. These will occur in only a few frames.

The solution to these potential problems is to take time at the start of your day to get the sensitivities right. Remember to use the far and near approach to check that you can capture good data throughout the field of view. After reviewing some data, you may decide on better camera positions which will also help reduce ghosts and false reconstructions, as well as making real trajectories more continuous. Such visual discrepancies are more evident and harder to deal with when using a very wide lens such as 6mm.

What Caused My Broken Trajectories?


The broken trajectory (and hence a large trajectory segment count) is the main visual representation of occlusion. It may also be an indication that your reconstruction parameters are not optimized. You may have broken trajectories because your subject has walked out of your volume. Finally, it is also a possible indicator that your cameras have been disturbed or that you have a poor calibration. You'll know when your subject has walked in or out of your volume by watching the reconstruction. You can use the trajectory save range to keep only the data of interest. [refer to section 2.3]. You may be able to reconstruct your subject even after they are out of the volume, because the cameras will always see a little bit more. You can achieve this by increasing your reconstruction minimum and maximum vectors.

To decide between occlusions and poor reconstruction parameters, have a look at the Continuity Chart. If, at the moment just before the gap occurs, there are only two or three cameras contributing to the reconstruction, then it is likely that the break is the result of an occlusion. If, however, a significant number of cameras are contributing in the frame before the gap, then the reconstruction parameters need adjustment. [refer to section 3.6]. If you need to find a trajectory segment, change the mode in the bottom right corner from Label to Select. If you now double click on the marker name in the list, that trajectory is automatically selected.
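That occlusion-versus-parameters check can be expressed as a tiny rule. The data layout below is hypothetical (a dict of per-frame contributing camera numbers, which is the information the Continuity Chart displays); it is a sketch of the reasoning, not a Vicon utility.

```python
# Illustration only: guess why a trajectory broke, from the cameras that
# contributed in the frame just before the gap. contributions maps frame
# number -> list of contributing camera numbers; a missing frame is a gap.

def diagnose_gap(contributions, gap_start_frame, few_cameras=3):
    """Return a rough diagnosis for a gap starting at gap_start_frame."""
    before = contributions.get(gap_start_frame - 1, [])
    if len(before) <= few_cameras:
        # Coverage was already marginal, so losing one more view kills it.
        return "likely occlusion (few cameras were contributing)"
    # Plenty of cameras could still see the marker, yet it was dropped.
    return "check reconstruction parameters (many cameras were contributing)"
```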

The loss of calibration of one or two cameras is most obviously seen as an increase in the number of ghost trajectories. When you look at these in the Continuity Chart, you will see the same cameras always contributing to their occurrence. Simply re-calibrate your volume and apply it to a new session. The reality is that you may only be able to improve the situation so far, and then you will need to consider manually editing the data prior to exporting. We'll discuss the Vicon edit tools in a moment. Users of BodyBuilder may also wish to use the graphical edit tool available in that package to create new trajectory segments. (refer to Reference Guide) The skill of recognizing the cause of broken trajectories is something that comes with experience. There are no hard and fast rules, but if you are methodical in your assessment of the feedback that Vicon gives you, it need never be a hair-pulling incident.

Where Is That Unlabelled Trajectory ?


When you have completed the pipeline processing of your captured trial, you may well end up with a number of unlabeled trajectories. These are typically phantom markers or ghost trajectories, but they may actually be useful data that the Autolabeler has missed. You can see how many unlabeled trajectories remain in your trial from the figure in the bottom right corner. The quickest way to check these trajectories is to select Workspace|Next Unlabeled Trajectory, hit <F10> or select the icon on the toolbar. This will highlight the unlabeled trajectory in yellow and re-focus the workspace display, including the time frame if necessary, so that it appears. An example is shown below. Vicon also lets you find the Previous Unlabeled Trajectory in the same way. The purple rotation center will appear at the mid-point of the trajectory, allowing you to zoom in and move around the selected point.

Alternatively, you can search for the unlabeled trajectories by looking at the Continuity Chart [hit <SHIFT>+<F10> to find the previous unlabeled trajectory]. Remember that an unselected, unlabeled trajectory is displayed in white, whereas an unselected labeled trajectory is displayed in light blue.

An unlabeled trajectory

If the display shows the center at the origin when you expect to see an unlabeled trajectory, don't panic: it means that the mid-point of the trajectory is actually a gap. You should jog through a few frames until you spot the yellow trajectory.


How Can I Find Those Stubborn Gaps ?


When you have completed labeling all your trajectories, the next stage is to fill the remaining gaps. This is straightforward. You can find where the gaps are by selecting Workspace|Find Next Gap or by hitting <F9>. The workspace will then highlight the trajectory, open up the display range, and show the selection crosses at the start and end frames of the break. An example is shown below. This lets you easily decide how best to fill the gap, using Edit|Fill Gaps in : ____ or Edit|Copy Pattern. You can find the previous gaps by using <SHIFT>+<F9>. You can also find the gaps by looking at the Continuity Chart.

A gap highlighted using Find Next Gap

All These Trajectories Are Driving Me Mad


When you're editing data, you'll probably only want to concentrate on a few trajectories, say those on the waist. If you select these trajectories and then open out the display range, it looks a bit of a mess. But by selecting Workspace|Hide Other Trajectories, you are left with only the ones of interest. Similarly, if you want to hide the selected trajectories then use Workspace|Hide Trajectories. And if you select Workspace|Hide All Trajectories, there are no surprises: the workspace no longer displays any trajectory data. HIDING TRAJECTORIES MEANS THAT YOU CAN NO LONGER SELECT THEM FOR LABELING OR EDITING.

To display the trajectories, you can use a similar set of commands. Workspace|Show Trajectories will display all the trajectories which have been selected from the marker list on the right hand side of the display. You can use this command after you have hidden the trajectories. Workspace|Show All Trajectories will, as expected, restore all trajectories after they have previously been hidden. You open out the trajectory display range by dragging either of the two sliders at the bottom of the play bar with the left mouse button. Double click on the sliders and they'll return to the default display range of 1.

What Does The Continuity Chart Show ?


By selecting Window|New Continuity Chart with a processed trial open, a new window will appear on your screen similar to the one shown below. So what can you see? The continuity chart shows which cameras have contributed to the reconstruction of each trajectory in every frame. When a marker is seen by all cameras, the chart will appear as a solid band of lines; if, however, that marker is visible in just two views, then the chart will show only two contributors. The trajectory is represented by the solid horizontal black line, and the contributing cameras are displayed as dots below it. The first dot represents camera 1, with all other cameras shown in descending order. Trajectories are numbered and shown in the order in which they first started. By selecting View|Trajectories by|Label, the charts will be displayed in an order matching the marker file.

If you select a trajectory, its chart will also change color, matching the colors shown in the workspace. The play bar is displayed as the red vertical line and the trajectory display range is shown as the two green vertical lines. You can open out the display range by clicking and holding the left button while the mouse is held over one of the green lines. Clicking and holding on the red line changes the current display frame number.


Example of the Continuity Chart

To de-select a trajectory, simply click the right mouse button. The chart is a useful diagnostic for assessing data should you have an excessive number of trajectory segments. From the charts, you can examine which cameras are producing the unlabeled segments and also easily see where the gaps are occurring.

What can I spot with the continuity chart?


If you're concerned that a camera has not calibrated as well as the others (for example, because its linearization is out of date), then you will probably see this in the 3D workspace as a ghost trajectory. If you select that trajectory, open up the continuity chart and look at the selected trajectory, you will see which cameras were responsible for its reconstruction. Nine times out of ten there will be only two cameras, and if you look at other ghosts, one of these cameras will also be responsible for their reconstruction.

Why should I use the Pop-Up Graphs?


To help in assessing the quality of data, Vicon provides you with a tool to display graphs in your Workstation display. An example of a reconstruction residual graph is shown below. The precise values of a graph variable are displayed when the mouse is moved across the window with the left button held down. Remember, you cannot edit graphs.
Example of a Residual PopUp Graph

Once you have selected a single trajectory, you can see the following graphs:
The absolute distance between the origin derived in calibration and a marker location on the selected trajectory. This can display the absolute value and the X, Y or Z components.
The absolute velocity between the origin and the marker location.
The absolute acceleration of the point on the selected trajectory.
The absolute distance traveled by the marker along the selected trajectory.
The reconstruction residual of the points on a selected trajectory.

If you select two trajectories, you can display the following:
The absolute distance between the co-instant points on the two selected trajectories.
The absolute angular velocity of the vector between co-instant points.


The absolute angular acceleration of the vector between co-instant points.

If you select three or four trajectories, you can display the absolute angle between the two vectors which lie between the co-instant points. You can print a Pop-Up graph by selecting File|Print.

If you select Graph|Options or double click on the graph, the following dialog appears, which is fairly self-explanatory. You can modify the display, change the axis scales and labels, and display the mean value and standard deviation. This means that you can have a look at specific regions of the graph if you desire.
Graph|Options Dialog Box

You can use a pop-up graph as a check on the quality of the data being processed. Two graphs commonly used are the reconstruction residual and the distance between markers. The residual is a useful representation of the noise present on a trajectory; it is derived from the mean distance from the calculated marker location. You can use Graph|Distance Between to check that two markers on the same segment are maintaining a relatively fixed separation. This will also highlight any trajectory flipping.


This is particularly useful in checking the quality of calibration. Capture and reconstruct the wand being waved through your volume. Select and highlight both trajectories and select Graph|Distance Between. The mean distance should be close to 500mm with a standard deviation of around 2mm. Of course, if you abuse your wand, the actual distance between the markers may have changed, also altering the measured distance.
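The wand check amounts to a mean and standard deviation over the frame-by-frame separation of the two wand markers. Here is a minimal sketch of that calculation, assuming a 500mm wand and simple per-frame lists of [x, y, z] for each marker (the function names and tolerances are our own, not a Vicon API):

```python
# Illustration only: the Graph|Distance Between calibration check.
# traj_a and traj_b are lists of [x, y, z] positions, one entry per frame.

def distance_stats(traj_a, traj_b):
    """Mean and standard deviation of the frame-by-frame marker separation."""
    dists = [sum((a[i] - b[i]) ** 2 for i in range(3)) ** 0.5
             for a, b in zip(traj_a, traj_b)]
    mean = sum(dists) / len(dists)
    std = (sum((d - mean) ** 2 for d in dists) / len(dists)) ** 0.5
    return mean, std

def calibration_ok(traj_a, traj_b, wand_mm=500.0, tol_mm=2.0):
    """True if the measured wand length is close to nominal and stable."""
    mean, std = distance_stats(traj_a, traj_b)
    return abs(mean - wand_mm) < tol_mm and std < tol_mm
```

A drifting mean points to a damaged wand (or a scale error in calibration), while a large standard deviation points to noisy reconstruction.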

What Can I Do About Unwanted Data ?


OK, so now you know how to get to the data that needs editing quickly and efficiently. Vicon also provides you with a number of tools to deal with unwanted points or whole trajectories. These are found in the Edit menu. Most of the time you will only have to remove unlabeled trajectories caused by ghosts or phantom markers. You can either select one specific trajectory and delete it or, if you are certain that all the remaining unlabeled trajectories are unimportant, select Edit|Delete Unlabelled Trajectories.
Noisy Last Frame of a Trajectory

For dealing with specific points, Edit|Delete Point is a very straightforward tool to erase an individual marker point. It deletes the selected location, indicated by the white cross. While it may appear odd to want to delete data, you will sometimes find that the first or last frame of a trajectory segment is noisier than the rest. This is typical of a trajectory exiting the reconstruction volume and is shown in the image below. It is often more convenient to delete these frames of data than to let them add noise to the trajectory. These frames can also cause overlaps at the start and finish of trajectory segments, which must be deleted to ensure that Vicon can defragment your trajectories and not leave multiple named segments. If you wish to delete more than one frame of data from a given trajectory, select the first point, hold the <CTRL> key down and then select the last point. When you now hit Edit|Delete Points you will erase the selected points and all those in between.

Another useful tool is Edit|Delete and Fill, which lets you replace an unwanted reconstructed point with a new location calculated using a cubic spline interpolation. This lets you remove spurious glitches without leaving a gap in your data. You can delete and fill a range of points by holding the <CTRL> key down while selecting the starting and finishing frames.

When Do I Want To Snip A Trajectory?


The quick key is F5

You are most likely to snip a trajectory to fix crossover errors which may occur in reconstruction. Once you've snipped a trajectory you can re-label the two resulting segments as appropriate. Edit|Snip Trajectory breaks a trajectory into two segments at the selected point. If a range of points has been selected, then the points between the selections (i.e. excluding the selected points themselves) are deleted.

Hey, I've Got Breaks In My Trajectories, What Can I Do?


The quick key is F8

Vicon provides a utility to fill the gaps in either the most recently selected trajectory or in all trajectories using a cubic spline interpolator. You can also select a range of points, and then only the gaps within that range will be filled. You define the limit on the size of gap that may be filled by the interpolator by selecting Edit|Maximum Fill Gap and entering your preferred limit. You should always take care when selecting an unrestricted gap size, as this can cause unnatural results. If gaps exist between separate trajectory segments with the same label, these will not be filled. Use the Defragment command to join such segments before filling.
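A sketch of how a gap-size limit interacts with spline filling, assuming trajectories stored as arrays with NaN marking missing frames (our representation, not Vicon's):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gaps(y, max_gap):
    """Fill NaN runs no longer than `max_gap` frames with a cubic spline,
    leaving longer runs (and runs touching either end) untouched."""
    y = np.asarray(y, dtype=float).copy()
    t = np.arange(len(y))
    good = ~np.isnan(y)
    spline = CubicSpline(t[good], y[good])
    i = 0
    while i < len(y):
        if np.isnan(y[i]):
            j = i
            while j < len(y) and np.isnan(y[j]):
                j += 1                      # [i, j) is one gap
            if i > 0 and j < len(y) and j - i <= max_gap:
                y[i:j] = spline(t[i:j])     # short interior gap: fill it
            i = j
        else:
            i += 1
    return y
```

Gaps longer than the limit are deliberately left unfilled, which is exactly the behaviour the Maximum Fill Gap limit gives you.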

FILLING LARGE GAPS MAY PRODUCE UNDESIRABLE RESULTS, SO A LIMIT IS RECOMMENDED. COPY PATTERN IS A MORE APPROPRIATE MEANS OF FILLING LARGER GAPS.
What Do We Mean By Defragmenting Trajectories ?


When you select Edit|Defragment Trajectories, Vicon automatically joins together all trajectory segments that have been assigned the same label. Where overlaps occur, the overlapping section is saved to a new trajectory with a label based on the original. This prevents data loss in the case of mislabeled trajectories, allowing later repairs to be made. Trajectories are automatically defragmented when the trial is saved to disk.
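The join-with-overlap behaviour can be sketched like this, with segments held as simple dictionaries and the '*' suffix standing in for whatever derived label Vicon actually generates:

```python
def defragment(segments):
    """Join same-labelled trajectory segments. Overlapping frames are moved
    to a derived label (here label + '*', an invented convention) so that
    no data is lost, mirroring the behaviour described for
    Edit|Defragment Trajectories."""
    joined, overlaps = {}, {}
    for label, frames in segments:          # frames: {frame_number: position}
        merged = joined.setdefault(label, {})
        for f, v in frames.items():
            if f in merged:
                overlaps.setdefault(label + "*", {})[f] = v
            else:
                merged[f] = v
    return joined, overlaps
```

The point of keeping the overlap under a new label, rather than discarding it, is that a mislabelled segment can still be repaired later.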

When Can I Use Copy Pattern?


When you select Edit|Copy Pattern, Vicon copies the pattern of points from a selected trajectory to the chosen range of points on a previously selected trajectory without leaving discontinuities. You should use Copy Pattern to fill gaps in trajectories where the cubic spline interpolator will produce unnatural results.

The quick key is F7

YOU SHOULD ALWAYS COPY THE PATTERN OF A MARKER THAT IS ON THE SAME BODY SEGMENT, AS THEY WILL BE FOLLOWING APPROXIMATELY THE SAME PATH COMPARED WITH OTHER MARKERS.

Firstly, select the exclusive range of points in the trajectory to be copied to. This is most usefully performed using the Workspace|Find Next|Previous Gap commands or by hitting <F9> / <SHIFT>+<F9>. Then select an arbitrary point in the trajectory you want to copy from. This should be a trajectory in close proximity to the first trajectory that follows roughly the correct path. When you are happy that you have made the correct selections, use the Copy Pattern command to perform the operation, and all points in the selected range will be overwritten.
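One plausible way to copy a pattern "without leaving discontinuities" is to add a linear offset ramp so the patch meets the target at both ends. This is our guess at the idea, not Vicon's actual algorithm:

```python
import numpy as np

def copy_pattern(target, source, start, stop):
    """Overwrite target[start:stop] with the shape of `source`, linearly
    blended so the patch meets the surrounding target samples without a
    jump (an approximation of the described behaviour)."""
    left = target[start - 1] - source[start - 1]   # offset needed at the left join
    right = target[stop] - source[stop]            # offset needed at the right join
    ramp = np.linspace(left, right, stop - start)
    out = target.copy()
    out[start:stop] = source[start:stop] + ramp
    return out
```

This also shows why the source marker should sit on the same body segment: the copied shape is shifted, not reshaped, so it only works if the two markers follow nearly parallel paths.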

123

routine user

3.7

3d data editing

When Should I Use Exchange Points?


Edit|Exchange Points simply swaps the selected point or range of points in a trajectory with those of the previously selected trajectory. You should use this command to fix transient crossover errors in reconstruction. If you have a significant number of these transient crossovers, then you should consider adjusting your reconstruction parameters. [refer to section 3.6].

Select one of the trajectories (at any arbitrary point) then carefully select the point or range of points to swap in the second trajectory. When you are happy, select the Exchange Point(s) command to perform the swap.
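In code, the exchange is nothing more than a range swap between the two trajectories (an illustrative helper, not a Vicon API):

```python
import numpy as np

def exchange_points(a, b, start, stop):
    """Swap frames [start, stop) between two trajectories, the usual fix
    for a transient crossover."""
    a, b = a.copy(), b.copy()
    tmp = a[start:stop].copy()
    a[start:stop] = b[start:stop]
    b[start:stop] = tmp
    return a, b
```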

Is That It?
That completes our introduction to data editing. What we recommend now is getting your hands dirty and either cleaning up all your captured data or having a go at our examples. You'll find most of the samples are only partially cleaned up, leaving you with plenty to work through while becoming more comfortable with the edit suite. Vicon lets you undo up to 10 actions by either selecting Edit|Undo or <CTRL>+<Z>.


3.8 Closing Words

Let's review what we've just learned.


Well, how does it feel to be a successful motion capture user? It wasn't too painful, was it? You should be sitting there with a big grin as you view your funky data of your subject walking and waving in 3D. Feels good, and remarkably easy, doesn't it? Before we let you out on your own, we will briefly review what we've worked through in this guide and where you can look for more information.

With this guide by your side, you are now at a stage where you are able to deal with nearly everything that optical motion capture can throw at you. You have a basic knowledge of all the tools that Vicon provides to ensure highly accurate capture in most volumes. You should always consider the following issues discussed in this guide whenever you are running a capture session.


Reconstruction. Adjustment of your parameters for your particular session.

Calibrating your subject and autolabeling. The once-only stage of subject calibration creates a unique measurement and description of your subject. The Autolabeler takes this description and removes all the hassle of manually labeling your 3D trajectories.

Pipeline processing. Use the pipeline processor to automate Vicon motion capture even further. It speeds up your capture session and turns it into a single-stage process from capture to labeled 3D data.

Cleaning up your data. When the data is not as good as you want, you are comfortable using the Vicon edit tools to clean it up. You should also feel comfortable moving around the different windows of Vicon, especially the 3D workspace.

Some of you may now want to stop reading the book and get more hands-on experience. That's fine, as you already know more than enough to capture successfully. Remember, we have provided you with a comprehensive on-line help system and there is a Reference Guide at the end of this tutorial. This guide includes checklists, a troubleshooting guide and other suitable reference materials.

For those of you who want to know more about some of the extra tools at your disposal, move on to the Advanced User Guide. This guide goes into detail about using Vicon for more specific tasks. There is an explanation of how to set up force plates and EMG capture. It discusses how to work with multiple subjects and props, as well as helping you to capture specific parts of the body such as the face or hands. There is discussion of the practicalities of capturing in large or irregular volumes. It also explains how to linearise your cameras, capture movie data, write a plug-in and use timecode and Genlock to trigger your captures.


advanced user guide

This Guide will cover expert use of your Vicon 512 system. We will explain certain new advanced elements of running the system and also cover various concepts in more detail. This is essential reading for the Vicon user pushing the envelope.

4.1 What's this Guide All About?
4.2 Linearisation
4.3 Movie Capture
4.4 Vicon Plug-Ins
4.5 Using Timecode and Genlock
4.6 Sample Skip
4.7 Multiple Subjects
4.8 Closing Words


4.1 What's This Guide Going to Tell Me About?


Welcome, Vicon advanced user. I am going to assume you fall into one of two categories:

An existing member of the Vicon fraternity who has upgraded to the Vicon 512.

A new user on the fast track to Vicon expertise, having come through the previous two sections with flying colours.

In this section we will explain certain advanced elements of running the system and also cover various new concepts in more detail.


4.2 Linearisation
Before a camera can be used for accurate 3D calibration and reconstruction, a 2D correction matrix must be calculated to compensate for distortions in the lens, and for small variations in sensor mounting. This process is called linearisation.

All camera/lens combinations (except a "perfect" pinhole camera) distort the image to some degree. The most common type of distortion, in which the image is wider in the middle than at the top or bottom, and is taller in the middle than at the edges, is known as "pin cushion". "Trapezoidal" distortion is also common if the sensor is not placed exactly central and normal to the optical axis of the lens.

Prior to the shipment of your Vicon system, all cameras will have been linearised in our studio. Please note that linearisation is focus-dependent, and all cameras are adjusted at the factory so that the image is at its best with the lens focused on infinity. If you are using the factory linearisation, the focus ring should be set to infinity. At this setting, everything from 10cm - 20cm to infinity will be in good focus, so there is no need to adjust the focus if you are doing close-up work.
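To make the idea concrete, here is a toy one-coefficient radial model of the kind of correction linearisation computes. The real correction matrix is calculated by Vicon from the grid capture and is considerably more general; the function and parameter names below are our own:

```python
import numpy as np

def radial_correct(points, k1, centre):
    """Toy radial model x' = c + (x - c) * (1 + k1 * r^2). This only
    illustrates the idea of mapping distorted sensor coordinates back
    to straight-line positions."""
    p = np.asarray(points, dtype=float) - centre
    r2 = (p ** 2).sum(axis=1, keepdims=True)   # squared radius from the centre
    return centre + p * (1.0 + k1 * r2)
```

A positive k1 pushes points outward in proportion to their distance from the image centre; the fitted correction undoes whatever bowing the real lens introduces.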


There are a number of reasons why you should consider re-linearising your camera(s):

You have changed the lens of your camera or rotated the focus ring.

The lens has been dislodged or the camera knocked.

It has been at least three months since the camera was linearised. This is good practice.

Note that this doesn't extend to varying aperture settings, which can be changed +/- 2 f-stops without affecting the accuracy.

An indication that a camera should be re-linearised is the generation of a higher than acceptable calibration residual for a known volume. Note that if all residuals are higher, then you have calibration errors rather than linearisation errors [refer to section 2.5].

What Equipment Do You Use?


The main piece of equipment is the very precise linearisation grid supplied with your Vicon. Mount the linearisation grid sheet on a flat vertical wall with its longer side approximately horizontal. Ensure that the sheet is flat and free from creases, bulges or folds. A permanent mounting location with good visibility of the Workstation screen and access to camera cables is the most suitable. Attach the Alignment Mirror in the marked spaces in the centre of the grid, using masking tape or other adhesive tape. Ensure that the weight of the mirror does not cause a bulge in the grid material. Have a look at the images below.
Actual LIN Grid and corresponding Live Monitor view.

IF YOU FAIL TO MOUNT THE GRID AND, MORE IMPORTANTLY, THE MIRROR ON A FLAT SURFACE, THEN YOU MAY WELL PRODUCE INFERIOR RESULTS DUE TO A MIS-ALIGNMENT BETWEEN CAMERA SENSOR AND THE GRID.

Preparing to Linearise a Camera


Position the camera so that it lies on a line exactly perpendicular to the centre of the grid. The accuracy of linearisation depends on the accuracy of camera placement, so great care should be taken at this stage. Open the flap on the front of the mirror. Because the camera is positioned close to the grid, you should close the aperture of the camera to f5.6 or f8.0. Open a Live Monitor window and adjust the sensitivity control until the grid markers appear as large as possible while remaining clean circles, without specks of noise in between. The circularity of the markers is not essential, as this depends somewhat on the size and aspect ratio of the monitor window.

Maximise the live camera monitor window during linearisation set-up. Adjust the camera position until all markers are visible and fill the monitor window as much as possible. Fill the field of view to the edges of the monitor window. Ensure that no markers are clipped at the edges.


Throughout this adjustment process, ensure that the camera remains on the grid's perpendicular axis.

Switch off the View|Useable Area display (white rectangle), as this reflects a previous linearisation which no longer applies. When the camera is on the grid's perpendicular axis, the reflective image of the camera's strobe is visible as a bright, hollow circle in the centre of the grid, due to the reflection from the alignment mirror. If the camera is mis-aligned, part of the circle is missing. Adjust the camera position until the complete circle is visible, as shown below. This is the most difficult part of linearisation, so take your time to get it right.

Live Monitor view prior to Linearisation. Note mirror reflection at centre of image.

These adjustments may require moving the camera closer to or further from the grid, changing the camera height and lateral position, and rotating the axes of the camera. The final position and orientation of the camera are determined entirely from the image in the monitor window, even if the camera appears to be pointing in slightly the wrong direction. The distance between camera and grid will vary depending on the video rate of the camera, as it affects the resolution of the image, and on the size of the lens. If you're linearising all your cameras, try to do all cameras with the same lens size at the same time.

When you are happy with the position and alignment of the camera, close the flap on the front of the mirror.

How Does Vicon Linearise a Camera?


Linearisation should be carried out with no session open, because it applies to all users and sessions. Just start the Workstation software and don't select a session. Select the System|Linearise... command or the icon on the toolbar and enter an identification code for the camera. This code may be any unique string of digits and characters. At the factory, we use the manufacturer's serial number of the camera, which appears on the rear of the camera itself. You may choose a simple code like ONE, TWO, THREE or ALPHA, BRAVO, CHARLIE, corresponding with identifying stickers on the front of the strobe. It's up to you. If the camera has been linearised before, the name chosen appears in the list with the date of the last linearisation and may be selected from there.

You may also wish to include the video rate of the camera such as 0007_120 or 0007_240. If you are using a mixture of lenses, then it can be useful to include the lens size in the name. For example, 007_9_120. Click "OK" in the camera selection dialog and either "Yes" or "OK" in the confirmation dialog depending on the linearisation status of the camera. Make sure that the linearisation camera channel offered in the next dialog is correct and click OK to accept the default number of fields to capture. The standard Arm/Start dialog is now displayed. When you are ready, click "Start" to capture the data. Vicon will automatically capture 20 frames of data and then implement the linearisation process. The linearisation calculations will be completed in a few seconds.


What Do The Results Mean?


After linearisation, a summary of the results is displayed, as shown in the following image. Check that the number of detected grid points is correct (20 by 15) and that the linearity figures are acceptable. Click on the "OK" button to save the linearisation for the camera as an LP file with the same name in the \\Vicon\System\... directory. [refer to Reference Guide for file formats].

Linearisation results dialog box.

The figures displayed as the X and Y linearity for Even and Odd Fields are the average non-linearity in the "horizontal" (wider) and "vertical" (narrower) axes of the image, after correction. Figures in the range 0.02% (1:5000) to 0.05% (1:2000) are usual, depending on the focal length of the lens. It is normal for the values to vary from camera to camera, especially where different focal length lenses are used but, in general, any value less than 0.05% is satisfactory.

THE TOTAL NUMBER OF LINEARISATION MARKERS USED FOR THE CORRECTION IS SHOWN AS THE GRID SIZE FOR EVEN AND ODD FIELDS. IF THE NUMBER IS SMALLER THAN THE ACTUAL NUMBER OF MARKERS USED, THE POSITION OF THE CAMERA SHOULD BE ADJUSTED.

Repeat the process for each camera to be linearised. Current LP files for all cameras linearised on the system are stored together in the \\Vicon\System\... directory.


If the linearisation for any camera fails or the results are unacceptable, repeat the process making slight adjustments to the camera position and/or sensitivity setting.


4.3 Let's Capture Movie Data


This section introduces movie capture and playback synchronised to your marker trajectory data. This is a new addition to the Vicon capture tools, allowing the user to capture direct video image data to disc simultaneously with TVD data, bringing a new visual dimension to your 3D data. Movie capture lets you store MPEG recordings as reference material for your capture session, which can be invaluable when you come to post-process your session. We will discuss how to set up the hardware and software to allow movie capture and then visualise the images through either the live movie window or via MPEG playback.

Hardware Requirements and Installing Broadway Card.


At present, Vicon supports the Broadway video card (version 2.5) from Data Translation. The video card must be installed according to the manufacturer's instructions, together with the software supplied with the card. You will have to acquire a camera, from the wide selection available, which has a compatible output; either S-VHS or composite video (NTSC or PAL). Please remember to ensure that you have a cable that fits the video card input sockets.

When you first install the video card and connect the camera, it's worth checking its operation with the Broadway software prior to running the Workstation. This software displays a live video image window so you can adjust the camera to get a good clear image prior to your capture session. If you are capturing some fast moves then it would be beneficial to use a high speed shutter - if the camera has this facility.

Software Settings.
When you are happy with the live images coming from your camera then you can start the Workstation. There is a straightforward user interface, which is fully integrated into the Workstation. You should not run the Workstation and the Broadway software simultaneously as both applications cannot control the hardware at the same time. 137


Movie Setup Dialog Box

Firstly, you should go to System|Movie Setup on the menu bar in order to initialise the video card device drivers. This may take a few seconds, and then the following dialog box will appear. You should select Broadway MPEG Capture/Compression from the list of currently installed video cards. The options for Frame Rate and Capture File Size are only applicable if you decide to capture AVI files. We recommend that you capture movies using the MPEG format, which is selected as the default on installation. Therefore, the Capture File Size need merely be a value greater than zero. Note that this value does not limit the eventual size of the captured file. At present, MPEG is the only format supported. Please contact Vicon for details on AVI capture.

By pressing any of the three buttons on the dialog box, you will see a new dialog box for Camera Options. This box is a feature of the Broadway software, and the only tab of interest is Compression. Within this dialog, you should deselect Audio Capture, as it is not currently supported by Vicon, and we recommend that you select High Quality for Image Type. Other image quality settings are available which vary the degree of compression applied to the data as it passes through the video card. From our tests, High Quality has produced the best results so far. If you click on the Options button, a slider will appear which allows you to control the rate at which data is written to disc. If you select the highest allowed rate, you will end up with larger files for a given length of capture. We recommend just above the middle of the range as a suitable setting. We have found that any rate above the middle of the range produces no noticeable improvement in image quality.


Using the Live Movie Window and Capturing Movie Data.


You should now have a Workstation capable of capturing movie files. So let's have a look at the images in the Workstation by selecting System|Live Movie. A window will now appear, displaying the image from the camera in real time, like the one shown below. Cool, eh?
Movie Capture Screen

The Live Movie window is automatically closed before every data capture starts.

You can capture movie data by selecting a trial type which includes movie capture. If, for example, you select Trial|Capture... and then choose "Video and Movie Capture" from the trial types, when you hit capture it will record MPEG and TVD data simultaneously. So now go and have a try capturing both video and movie data, and perhaps some movie only. If you don't have a camera or Broadway card available, have a look at our examples. The movie files are always stored with the same trial name as their corresponding TVD and C3D files, all within the current session directory.
Movie files are indicated by the icon


When you first use movie capture, you will notice that the kinematic and movie data are out of sync. This is because of a delay in movie capture due to the MPEG compression process on the Broadway card. What you need to do is synchronise the movie capture with the video capture of the Datastation. Because MPEG compression causes a small delay, it is a good idea to start data capture a few seconds before the important movement. This ensures that all of the important moves are captured in the movie file.

Synchronisation and Playback.


When you have captured your trial, you can see the movie data by double clicking on the movie icon shown in the directory window next to the trial number. If the workspace or video monitor are already open then select Window|New Movie and your captured movie will appear. Simply use the same playback controls at the bottom of the screen to view your movie. By default, playback is at real speed but you can change the playback rate using the slider, up to a maximum of four times real speed. Remember you can set Vicon to automatically open a Movie window after capture by selecting Open Window|Movie Data in the trial type.

Movie Synchronisation Dialog Box

As we stated above, when you first start capturing video and movie data you will notice the delay between the two sets of data. You can correct this by using System|Movie Synchronisation... which will then display the following dialog.


The quickest and simplest way to achieve synchronisation is to capture data which has a distinct 'clapperboard' feature such as a marker bouncing off the floor. This moment should be easily recognised in both movie and workspace windows as shown below with our clapperboard. Set the correct delay in terms of fields (frames) by adjusting either the slider or the up/down buttons. The buttons allow you to single step one frame forward or backward whilst the slider can step through any number of frames. When you are happy that the windows are synchronised, check the box in the bottom corner. This will apply the same synchronisation to all subsequent captures.

Using movie and c3d data to synchronise subsequent capture.

The time delay is constant for given hardware and data (compression) rate and can hence be set as a default.

If you forget to set your synchronisation when you capture your data, don't despair. You can always set the delay rate for each trial at a later stage. UNFORTUNATELY, YOU CANNOT CHANGE THE BASE RATE FOR THE WHOLE SESSION.

Final Points on Vicon Movie Capture.


As well as playing in the Workstation, the MPEG files can be viewed by a variety of other multimedia applications, including Microsoft ActiveMovie, and it is possible to publish them on your web site. If you are sending processed data to someone else, we have found the movie file very useful in explaining the original movement.

That's it. You have now added another useful tool to your Vicon and are now capable of recording simultaneous movie data to accompany your motion capture. Remember to check all the movies included in our example sessions.

Analog Data Capture and Force Plate Set Up


Up to now we haven't discussed force plates and EMG. These additional sources of information about human movement are very useful in gait analysis, biomechanics and general research. Force data is so completely integrated into Vicon that, once the force plates have been installed, data collection and storage require almost no attention. Force vectors appear in the workspace and, if present, will be processed by Plug-In Gait, Vicon Clinical Manager, and BodyBuilder. However, setting up the force plate requires some attention to detail and is described here.

To capture analog data, you need an (optional) analog to digital convertor. If you are not sure whether you have one, contact Vicon. If you want to add one, it is a simple task which can be done by a Vicon engineer or by you, if you are happy about opening up the Datastation and installing a new card. Full instructions will be provided in this case.

Force plates are made by a number of companies. Vicon supports Kistler, AMTI, Bertec and Kyowa-Dengyo force plates. If the plate or plates are bought with the system, a Vicon engineer will visit to install the plates and carry out the setup procedure. If you buy a force plate independently, it is up to you to install the plate and connect it to Vicon 512. Again, call Vicon for advice about this. We will assume that a force plate is already installed, connected to the amplifier, and the amplifier is in turn connected to the Datastation. If this is not the case, some hardware work is required which is beyond the scope of this manual.

The first thing to consider is calibration. The camera calibration must use an origin and axis system in which the force plate occupies a known position. Since the origin and axis directions are determined by the L-Frame, the best way to achieve this is to place the L-Frame on the force plate so that it is positively and accurately located relative to the plate. The Clinical L-Frame is designed for this purpose, with steel flanges that locate precisely with the edges of the plate. Make sure that the arms of the L-Frame are parallel to the sides of the plate, and that they are level. The corner of the force plate which is directly beneath the vertex of the L-Frame will then be the origin, with the X and Y axes parallel to the sides of the plate, and the Z=0 plane level with the surface of the plate.

Force Plate Corner Parameters


Force plate manufacturers refer to an internal frame of reference when specifying the calibration of the force transducers. The origin of this frame of reference is normally within the force plate itself, with the Z axis pointing down. This reflects the fact that force plates measure applied force, while clinical and biomechanics workers want to know the ground reaction, which is equal and opposite. Since force plates may be placed in any orientation relative to the laboratory axes, it is important not to allow confusion to arise between the internal axes and the lab frame of reference. Therefore, Workstation software has, built into it, the characteristics of the popular types of force plate, and the user does not have to consider the internal axes of the force plate. All that is required is to enter the locations of the four corners of the plate in the correct order.

For an AMTI plate, the order is:

1) Look down on the force plate from above, with the connector emerging from the plate by your feet.

2) The corner at top left is corner number one. The corner at top right is number two. The corner at bottom right is number three. The corner at bottom left is number four.

The locations of the corners are entered in terms of the lab axes, not the internal axes. If you are using the plate to locate the L-Frame, and you place the L-Frame over corner number one, the corner locations will be

One   (0,0,0)
Two   (0,464,0)
Three (508,464,0)
Four  (508,0,0)

If the L-Frame is placed over corner number two, the corner locations will be

One   (464,0,0)
Two   (0,0,0)
Three (0,508,0)
Four  (464,508,0)

and so on.

These numbers should be entered in the Force Plates Setup dialog box (System menu | Force Plates Setup).
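The corner tables follow from simple geometry: moving the L-Frame one corner clockwise rotates the lab axes by 90 degrees relative to the plate. A small sketch (our own helper, not a Vicon tool, using the 508 x 464 mm AMTI dimensions quoted in the text):

```python
def amti_corners(origin_corner, width=508, length=464):
    """Lab-frame corner locations (mm) for an AMTI plate, assuming the
    L-Frame vertex sits on `origin_corner` (1-4) with its arms along the
    plate edges. Corners are numbered clockwise from top-left."""
    # corner positions One..Four in a plate-fixed frame
    pts = [(0, 0), (0, length), (width, length), (width, 0)]
    ox, oy = pts[origin_corner - 1]
    corners = []
    for x, y in pts:
        dx, dy = x - ox, y - oy
        # the lab axes rotate 90 degrees for each corner step of the L-Frame
        for _ in range(origin_corner - 1):
            dx, dy = -dy, dx
        corners.append((dx, dy, 0))
    return corners
```

Calling amti_corners(1) and amti_corners(2) reproduces the two corner tables for L-Frame placements on corners one and two.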

If you are using a Kistler force plate, the same procedure applies, but the numbers are different because the plate is a different size. The corners of a Kistler force plate must also be listed in clockwise order when looking down on the plate. To find corner one, refer to the maker's booklet. Corner one is the corner with positive internal X and Y coordinates. On the 9281B type plate, for example, the connector is on the short side between corners one and two. So, if the L-Frame were placed on corner one, the corners would be

One   (0,0,0)
Two   (0,400,0)
Three (600,400,0)
Four  (600,0,0)

If you are adding a second force plate, you need to measure the locations of the corners of the new plate in the frame of reference already established relative to the first plate. One way to do this is to place markers on the corners of the plate and measure their locations using Vicon. This gives approximate results, but the best way is accurate measurement with a steel ruler or tape.

The "Origin" parameters also depend on the plate type. For an AMTI plate, they are listed in the manufacturer's calibration sheet. A typical set of values is (0,0,42). For a Kistler type, the first two values refer to the distances between the internal axes of the plate and the sensor elements; a typical set of values would be (132,220,39). The third value is the depth of the sensor elements below the top plate.

You must enter the ADC input channels to which each force plate is connected. There is a list on the right of the Force Plate Setup dialog box which allows you to select the appropriate channel numbers. The person who makes the electrical connection should provide a list of input channel numbers. If you are in doubt about this, call Vicon.

Finally, there are the scale factors. These are derived from the manufacturer's calibration of the transducer sensitivity, and are therefore different for each force plate and each individual transducer. For AMTI plates, find the sensitivity matrix headed "Sensitivity Matrix C(I,J)" and select the version in which the units are "microvolts / volt / N". Each table contains 36 values, but the crosstalk elements are so small that we only use the diagonal elements, from top left to bottom right. A typical value might be 0.664 microvolts per volt per Newton. This means the output from the force plate amplifier is

Output (V) = 0.664 x 10^-6 x 10 x 4000 x applied force

where 0.664 is the sensitivity quoted; 10^-6 reflects the fact that sensitivity is quoted in microvolts; 10 is the bridge excitation; 4000 is the amplifier gain, and the applied force is measured in Newtons. If the amplifier settings are different, change the equation accordingly.

Since the input range of the analog to digital convertor is 20V, corresponding to 4095 possible output values, the scale factor is

Scale factor = (20/4095) / (0.664 x 0.04) = 0.184 N/digit

There is an additional factor of 1000 in the equation for the moments, because we measure distance in mm rather than metres. A typical scale factor for a moment channel might be 70 (horizontal) or 35 (vertical).

Finally, the scale factor for all channels must be negative in order to calculate the reaction force instead of the applied force.
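The whole AMTI calculation can be checked numerically. A sketch with the example values from the text (the function and parameter names are ours; the defaults are the amplifier settings quoted above, so adjust them if yours differ):

```python
def amti_scale_factor(sensitivity_uv_per_v_per_n, excitation_v=10.0,
                      gain=4000.0, adc_range_v=20.0, adc_counts=4095):
    """Newtons per ADC digit for one AMTI channel; negative so the stored
    value is the ground reaction rather than the applied force."""
    volts_per_newton = sensitivity_uv_per_v_per_n * 1e-6 * excitation_v * gain
    volts_per_digit = adc_range_v / adc_counts
    return -volts_per_digit / volts_per_newton
```

With the quoted sensitivity of 0.664, this returns approximately -0.184 N/digit, matching the worked example.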


For Kistler force plates, there are eight outputs, all of which carry force measurements rather than moments. The first two outputs share a common calibration value, as do the second pair. The last four outputs are the vertical force components and also share a common calibration value. The units quoted by the manufacturer are picoCoulombs per Newton, since the Kistler transducer produces a charge rather than a voltage. To work out the scale factor, you need to know the gain of the charge amplifier as well as the sensitivity of the transducer. The choice of gain setting depends on the magnitude of forces you are likely to study. You need to set the gain which gives the largest output without cutting off the force peaks. This can be determined by experiment.

If the X-channel calibration value is (for example) -8 pC/N and the gain is set to 5000 pC for full scale output (10 V), then a 10 V output corresponds to

5000 / 8 = 625 N applied force

The scale factor for this channel would be

(20 / 4095) × (625 / 10) = 0.3053 Newtons per digit

and, as with the AMTI scale factors, it should be entered as a negative number.
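The Kistler calculation follows the same pattern, and can be sketched in the same illustrative (non-Vicon) form:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the Kistler scale-factor arithmetic above. The charge
// amplifier produces its full-scale output (fullScaleOutput_V) when it
// receives fullScaleCharge_pC of charge; dividing that charge by the
// transducer sensitivity (in pC/N) gives the force at full scale,
// e.g. 5000 / 8 = 625 N in the worked example.
double kistlerScaleFactor(double fullScaleCharge_pC,
                          double sensitivity_pC_per_N,
                          double fullScaleOutput_V)
{
    double newtonsAtFullScale =
        fullScaleCharge_pC / std::fabs(sensitivity_pC_per_N);
    double voltsPerDigit = 20.0 / 4095.0;
    // Negative, as with the AMTI factors, to give the reaction force.
    return -(voltsPerDigit * newtonsAtFullScale / fullScaleOutput_V);
}
```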

Finally, there is one more number to set up in the Force Plate Setup dialog box. This is the Zero Sample Range. If you set this as 0 to 10 or 0 to 20, Vicon will take the first 10 or 20 values in the trial, and, assuming that there is no weight on the force plates in that interval, average those values and set them equal to zero. This has the same effect as zeroing the force plate amplifier hardware, taking account of any drift effect in the electronics. This is put into effect if you select the Reset Force Plate Offsets command on the Trial menu. You have to select this command for each trial; it does not happen automatically.
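The effect of the Zero Sample Range can be sketched as follows. This is an illustrative routine, not Vicon code: it averages the first few (assumed unloaded) samples of a channel and subtracts that mean from every sample, which is exactly the drift-removal effect described above.

```cpp
#include <cassert>
#include <vector>

// Average the first `zeroRange` samples of a channel and subtract that
// mean from every sample, mimicking the Zero Sample Range behaviour.
std::vector<double> zeroOffset(std::vector<double> samples,
                               std::size_t zeroRange)
{
    if (samples.empty() || zeroRange == 0)
        return samples;
    if (zeroRange > samples.size())
        zeroRange = samples.size();
    double sum = 0.0;
    for (std::size_t i = 0; i < zeroRange; ++i)
        sum += samples[i];
    double offset = sum / static_cast<double>(zeroRange);
    for (double& s : samples)
        s -= offset;
    return samples;
}
```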

Analog Setup Dialog


In the System menu, select Analog Setup. Here, you can control the rate at which you sample analog data, which channels are sampled, and the scale factors for each channel. If you connect EMG equipment, enable the relevant channels for capture and select a suitable sampling rate. The type of analog hardware fitted and the number of channels available is fixed by the hardware and cannot be changed in software. Once set up, these parameters may not need to be changed. If in doubt, contact Vicon for advice. Once the analog subsystems are configured, data collection will be automatic and force vectors will appear in the Workspace window with no further processing required on the part of the operator.

Testing Force Plates


When you have set up a force plate, and on a regular basis afterwards, you should test the system to ensure that you are getting accurate data. The following tests will give a timely indication if something is going wrong:

1) Weight test. Obtain an object of accurately known weight. Start a data capture and gently place the object onto the force plate. A carpet or rubber mat will help to reduce any shock loads. Stop the data capture. Examine the vertical force component (with Kistler plates, use Reporter or other analysis software to examine the net vertical force). The vertical force should equal the weight of the object in Newtons, which is 9.81 times the mass in kg, within acceptable margins of error.

2) Stick test. Put two markers on a walking stick. Start a data capture and press on the force plate, in different places and at different angles. The resulting force vector should be nearly parallel to the line joining the two markers. If the vector is in the wrong place, look at the corner parameters and the moment scale factors. If the centre of pressure stays in the centre of the plate, the moment scale factors are too small. If the centre of pressure moves too far, they are too big.

3) Gait test. Routinely carry out a simple gait analysis on one member of the lab staff. The results should be consistent with previous tests. This is a good all-round test of the kinematic and kinetic measurements, reflecting the real use of the lab.
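The weight test amounts to a one-line check. This sketch (not part of the Vicon software) makes the expected-force arithmetic explicit:

```cpp
#include <cassert>
#include <cmath>

// Weight-test check: the measured mean vertical force (N) should equal
// 9.81 times the object's mass (kg), within an acceptable tolerance.
bool weightTestPasses(double measuredMeanForce_N, double mass_kg,
                      double tolerance_N)
{
    double expected_N = 9.81 * mass_kg;
    return std::fabs(measuredMeanForce_N - expected_N) <= tolerance_N;
}
```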


4.4 How to Write a Vicon Plug-in


This section briefly describes how to produce a plug-in for the Vicon pipeline using the skeleton source code we have provided. We have provided a full software developer's kit, including header files, documentation and examples, which will have been installed as a zipped file \\Vicon\PlugIn\ViconPlugInSDK.zip. This section has been edited from the full document which is included in the zip file. For those of you who wish to write your own plug-ins, please refer to the full document, which contains all the necessary reference material.

Why Should You Want To Write a Plug-In For the Vicon Pipeline?
Vicon is proud to offer you a full plug-in interface, which means you can now add extra functionality to the motion capture pipeline. From exporting data in your own format to filtering data, you can now manipulate the data in far more ways. And these are just a few of the possible reasons we, and some of our existing users, have thought of. We know there are many, many more functions that we have yet to think of! You now have access to Vicon data stored at all levels, from Trials to Subjects to Trajectories. You can also set up user definable options for your functions and write messages to the processing log.

We have included the following examples on the installation CD:

Dump to ASCII File. This is a fairly straightforward function to write out your c3d data in an ASCII format suitable for analysis by spreadsheet software, such as Microsoft Excel. The actual source code of this plug-in can be found in the sub-folder \\Vicon\PlugIns\SDK\ASCIIDump\.

Create CSM. This writes out c3d data to one or more CSM files (one per subject) which is compatible with 3DsMax.

So, What is a Plug-In?


A plug-in is essentially a DLL (Dynamic Link Library) module that provides additional functionality to the application. However, the range of additional functionality is limited by what the application expects to find and can use; this, in turn, is defined by the programming interface. Vicon plug-ins are based on the Microsoft COM architecture for potential compatibility and future expansion, but without the need to store information in the system registry. Each plug-in is expected to create suitable functional objects upon installation. The application then queries the plug-in to find out what it provides and makes use of any objects it recognises. These objects may go on to create further objects as appropriate for the context in which they are used. The advantages of not using the registry are simplicity of installation (just copy the plug-in to the appropriate folder) and the elimination of any mess caused by removal of the plug-in. To disable a plug-in, simply remove it from the appropriate folder.

Any COM object may be created in a plug-in but, at the time of writing, the only objects that will be used by Vicon are processes. These are inserted into the processing pipeline and may be used to transform, export or otherwise manipulate trial data. The remainder of this section describes the contents of the plug-in development kit and how to produce a plug-in using our sample code. Please refer to the complete version for the full details on the COM interfaces defined and used by Vicon.
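The create-objects-then-query mechanism described above can be sketched in outline. Every class and method name here is a hypothetical illustration of the pattern only; the actual Vicon interfaces are COM-style and are defined in ViconPI.h and the full plug-in document.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch: on installation the plug-in creates its
// functional objects (here, pipeline processes) and adds them to a
// list; the application then queries that list and makes use of any
// objects it recognises. Real Vicon plug-ins do this via COM interfaces.
struct IProcess {
    virtual ~IProcess() = default;
    virtual std::string Name() const = 0;
};

class PlugIn {
public:
    // Called once when the plug-in is loaded ("installation code").
    void Install() {
        processes_.push_back(std::make_unique<SkeletonProcess>());
    }
    // The application queries the plug-in for the objects it offers.
    const std::vector<std::unique_ptr<IProcess>>& Processes() const {
        return processes_;
    }
private:
    struct SkeletonProcess : IProcess {
        std::string Name() const override { return "Skeleton Process"; }
    };
    std::vector<std::unique_ptr<IProcess>> processes_;
};
```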

Installing the SDK


The Vicon plug-in Software Development Kit (SDK) consists of library files, C++ files and sample source code which you should copy to any convenient location. We recommend the use of a PlugIns\SDK folder off your Vicon root, e.g. \\Vicon\PlugIns\SDK.

You will find the following general files in \\Vicon\PlugIns\SDK\...

ViconPI.h - Header file containing all interface and utility class definitions.

OMLPrcID.h - Header file containing Oxford Metrics defined process identifier string constants. Include this in modules that implement processes.

VPLib.lib - Library file containing OLE interface GUIDs. Link this file with your release builds.

VPLibD.lib - Debug version of the library file containing OLE interface GUIDs. Link this file with your debug builds.

ViconPI.cpp - Source code for the GUID library.

VPIBase.h - Header file forming a reusable base for plug-ins. Together with the corresponding implementation file, it provides most of the code for a basic working plug-in.

VPIBase.cpp - Implementation file corresponding to VPIBase.h.

CopyFiles.bat - A utility batch file which is useful when developing plug-ins. Customise Visual Studio to add this as a command on the "Tools" menu, passing it "$(TargetPath)" as the first parameter and your Vicon plug-in path with a renamed extension (e.g. \\Vicon\PlugIns\*.vpi) as the second parameter. After a build, you can then execute the command off the menu to make your new plug-in available to Vicon.

Producing a Plug-In.
A plug-in is a DLL, renamed to have the .vpi (Vicon plug-in) file extension and placed in a location where the application knows to find it. The DLL must export and implement a set of functions which define the application/plug-in interface. Again, these are detailed in the comprehensive version of this document supplied with your Vicon. They have been implemented in a reusable way in the code provided: simply add VPIBase.h and .cpp to your project and provide the additional support detailed in these files.

A hook is provided to add installation code. This must set up a few string resources used to identify the plug-in to users, and create the functional objects, adding them to a list. The implementation of these objects must be provided, of course, but the mechanism for making them available to the application is taken care of.


How To Use The Skeleton Code.


The Skeleton code provided in the \\Vicon\PlugIns\SDK\Skeleton sub-folder produces a minimal plug-in which may be used to form the basis of a true plug-in. It contains two shell processes which do nothing; one has options and the other does not. You will find the following Skeleton plug-in files in the sub-folder

\\Vicon\PlugIns\SDK\Skeleton\...

VPISkeleton.dsp - Project file.
VPISkeleton.h - Header file containing skeleton plug-in and process classes.
VPISkeleton.cpp - Implementation file for the above.
VPISkeleton.def - Contains DLL export declarations.
VPISkeleton.rc - Contains plug-in icon and string resources.
Resource.h - Header file for symbolic resource identifiers.
StdAfx.h - Precompiled header.
StdAfx.cpp - Precompiled header module.
icon1.ico - Plug-in icon.
VPIBase.h - Local copy of general file.
VPIBase.cpp - Local copy of general file.

The skeleton requires MFC and was produced using Microsoft Visual C++ version 5.0. It may be possible to build it with other versions. If you build the VPISkeleton project and copy the resulting DLL to a suitably named VPI file in your Vicon plug-in folder, two processes will appear in the processing pipeline: "Skeleton Process" and "Skeleton Process with Options". The only two significant files are VPISkeleton.h and VPISkeleton.cpp. These are clearly marked with the comment "### TODO:" in all places where they must be modified to turn them into something useful.

Here's a Further Sample of Plug-In Code


The \\Vicon\PlugIns\SDK\ASCIIDump sub-folder contains the source code to a complete plug-in that provides a process for creating tabular, comma-separated ASCII files from Vicon data. The project was created by following the steps described shortly. Like the Skeleton plug-in, it requires MFC and has been produced using Visual C++ version 5.0.

The DgAsciiDumpOptions module contains a very simple dialog class which should be familiar to anyone working with MFC. The meat of the plug-in is provided by the ASCIIDump module which completes the overall plug-in support code with the very small CAsciiDumpPlugIn class, and implements a process with the CAsciiDumpProcess class. The code has many comments which should hopefully give some clue as to what it is doing and how it works. This code should be read in conjunction with the contents of this guide and the separate Vicon Plug-In document.

The Thirteen Steps To Creating Your Own Plug-In


The following steps assume that you are using Microsoft Visual C++ version 5.0. Some of the steps may differ slightly in detail for other versions or compilers.

1. Ensure that your Directories options point to the SDK path for both library and include files. You may also want to point to this path for source files for debugging purposes.

2. Create a new "MFC AppWizard (dll)" project for your plug-in. Choose the "Regular DLL with MFC statically linked" option.

3. Edit the output filename project settings (Link / Customise) to change the file extension from .dll to .vpi. Remember to do this for both the debug and release builds. This step is actually unnecessary if the CopyFiles.bat command batch file is used as described earlier.

4. Either copy the VPIBase.cpp and .h files into your project folder or use the copies in the SDK folder directly (but be careful not to modify them). Add these files to your project.

5. Replace the AppWizard generated project .h, .cpp and .rc files with suitably renamed copies of VPISkeleton.h, .cpp and .rc from the SDK\Skeleton folder. The easiest way to do this is to copy the skeleton files into your project folder, change their properties to remove the read-only flag, delete the corresponding project files, then finally rename the skeleton files.

6. Copy the Resource.h and Icon1.ico files from the SDK\Skeleton folder into your project folder (overwriting the AppWizard generated Resource.h file).

7. Edit Resource.h to change the comment indicating which .rc file it belongs to. If you do not do this, Visual C++ will complain when you modify resources which add to this file. If you are not careful and hit the wrong button in response to this complaint, you may lose changes.

8. Copy the contents of the EXPORTS section in the VPISkeleton.def file (also in the SDK\Skeleton folder) into your project .def file.

9. Add the VPILib.lib library to the link settings of the project for the release build and VPILibD.lib to the settings for the debug build.

10. Modify your new project .h and .cpp files as necessary, adding dialogs and other files as required. Areas to change are marked "### TODO:" in the code. The very least that is required is to change the skeleton file include to that of the project.

11. Build your project.

12. Copy the resulting DLL file to your \Vicon\PlugIns folder (you may need to create this the first time) and rename the file to give it a .vpi extension. You only need to do this if you didn't edit the output filename project settings (Link / Customise) to change the file extension from .dll to .vpi.

13. Run Vicon and test your plug-in. You can debug your plug-in by setting Vicon as the calling application. You may want to modify the Vicon .ini file to set the plug-in path (PlugInPath setting in [System] group) to point to your debug folder and rename the file there rather than copy it to the Vicon\PlugIns folder.

And that's it! Well, not really, as you will need to know about all of the interfaces available for use by Vicon plug-ins. We won't go into these here but you should refer to the complete version of this document.


reference

Here you will find a library of background information for running your Vicon 512 system.

5.1 Animation Pipeline
5.2 Calibration Theory
5.3 Reconstruction Theory
5.4 Autolabeling Theory
5.5 File Types
5.6 Glossary
5.7 Troubleshooting
5.8 Checklists


5.1 The Animation Pipeline


Introduction to the Animation Tutorial: your hands-on guide to the Animation Pipeline.

5.1.1 Vicon 512

This section is essentially a summary of the points discussed in detail in the main user guides. Here, we will focus on the specifics of ensuring that your data will go through the Animation Pipeline smoothly.

System Preparation
A move sheet exists that lists all the moves required, the number of subjects, the props required, the size of volume, the capture rate and how the data is going to be used.

The system is unpacked, connected correctly and a link has been established between the Datastation and the Workstation.

The cameras have been positioned around your desired capture volume to allow reconstruction of every marker with at least two cameras.

The apertures and sensitivities have been set to ensure that even the smallest marker can always be resolved as 3 or more video lines in the live monitor view.

Calibration
A new session is created and the volume is calibrated using Dynacal. The sensitivity settings should be set to capture the larger markers of the calibration objects. The final calibration residual values for each camera should be less than 4.0 (depending on the type of lens used and the size of the volume). The calibration trial is imported and reconstructed using the default parameters. The graph showing the distance between the two markers should return a mean value of 500 mm ± 3 mm with a standard deviation of 3 mm or less.

Capture a trial of the wand being waved around the edges of the volume (both high and low) to check that there is sufficient overlap between cameras throughout the volume.
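The 500 mm distance check above can be sketched as a short calculation. This is an illustrative routine, not Vicon code: it computes the mean and standard deviation of the frame-by-frame distance between the two calibration-object markers.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Mean and standard deviation of the frame-by-frame distance between
// two reconstructed markers, as used to check the 500 mm separation of
// the calibration object. `a` and `b` hold one position per frame.
void markerDistanceStats(const std::vector<Vec3>& a,
                         const std::vector<Vec3>& b,
                         double& mean, double& stdDev)
{
    double sum = 0.0, sumSq = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double dx = a[i].x - b[i].x;
        double dy = a[i].y - b[i].y;
        double dz = a[i].z - b[i].z;
        double d = std::sqrt(dx * dx + dy * dy + dz * dz);
        sum += d;
        sumSq += d * d;
    }
    mean = sum / a.size();
    stdDev = std::sqrt(sumSq / a.size() - mean * mean);
}
```

A calibration would pass the check above when the mean is within 3 mm of 500 mm and the standard deviation is 3 mm or less.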


Subject Preparation
Take the necessary physical measurements for the BodyBuilder model parameters (mp) before markers are fixed to the subject. These measurements should be noted (in millimetres) for later use by the model as parameters in the calculation of the true centres of the joints. Measure the elbow, knee and ankle width as well as wrist and hand thickness. It is often more convenient to measure the joint circumference with a tape measure and divide it by three to obtain an approximate joint width.

Attach markers firmly onto your subject/s according to the marker file you are using. Make sure that all markers are listed in the Autolabel section of the marker file. Capture a couple of trials of your subject walking in the volume and around the edge. Reconstruct the data and, if necessary, adjust the reconstruction parameters to ensure that the majority of trajectory segments are continuous.

Subject Calibration
Capture the first subject in the static motorbike pose with all joints slightly bent. Reconstruct the data, attach the correct marker set to the trial and manually label the data. Create your subject file and save the data. Select Trial|Options and check your subject, as well as the Static Trial and Include Subject Names In Label options. Unlabel all trajectories and then autolabel the static trial. Once complete, save the data to ensure that you can create an asf file from this trial. Select your subject in the Trial|Capture dialog box and then capture them carrying out a number of specific bending exercises. This is especially useful for leg movement and correct hip joint location. Reconstruct and autolabel these trials to ensure that your parameters are correct and you are capturing high quality data. If you have more than one subject or you are using props, then perform their subject calibration and repeat the previous task.


Pipeline Capture
Capture the trials defined in your move sheet using the pipeline. Remember to keep notes on the data captured in the session either in the trial description / notes or on your move sheet. At certain stages, reconstruct and autolabel a trial to check that you still have an accurate camera calibration. At the end of the session, create a new backup session and perform another Dynacal. Apply the calibration to this backup session and take a final static capture of your subject/s. Only use this if you feel the data has degraded in later trials. Once the capture session is complete, configure the pipeline to reconstruct, autolabel and fill all gaps less than five frames and save all unprocessed trials only. Set this running overnight if necessary and check the processing log in the morning for the results.

Data Clean-Up And Save


For each trial that you wish to use, remove all ghosts and unlabelled trajectories. Ensure that there are no overlapping trajectories, if necessary editing the data. You could save at this point and complete your editing in BodyBuilder.

Fill all the gaps in all the labeled trajectories using either Edit|Fill All Gaps, or Edit|Copy Pattern. Remember to only apply interpolation over a small number of frames to minimise the production of unrealistic trajectories. Set your desired trajectory save range and save your data as c3d. If you have more than one subject, you may wish to save the data as separate files. You may also write out your data to one or more CSM (Character Studio Motion) files (one per subject).
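Gap filling by linear interpolation, as Edit|Fill All Gaps does for short gaps, can be sketched for a single channel. This is illustrative only: real marker trajectories have x, y and z channels, and Vicon's own interpolation may differ in detail.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Fill gaps (marked NaN) in a one-dimensional trajectory channel by
// linear interpolation between the last valid sample before each gap
// and the first valid sample after it. Leading and trailing gaps,
// which have no bracketing samples, are left untouched.
void fillGaps(std::vector<double>& v)
{
    std::size_t i = 0;
    while (i < v.size()) {
        if (std::isnan(v[i]) && i > 0) {
            std::size_t j = i;
            while (j < v.size() && std::isnan(v[j]))
                ++j;
            if (j < v.size()) {
                double step = (v[j] - v[i - 1]) / double(j - (i - 1));
                for (std::size_t k = i; k < j; ++k)
                    v[k] = v[i - 1] + step * double(k - (i - 1));
            }
            i = j;
        } else {
            ++i;
        }
    }
}
```

As the text advises, interpolation like this is only trustworthy over a small number of frames.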


5.1.2 BodyBuilder

BodyBuilder is a powerful set of tools for investigating human movement. Models and outputs can be as simple or complex as required. BodyBuilder is a data manipulation and modelling software package which enables the user to apply the following pipeline to the data produced by Vicon:

Edit, modify and create new points and trajectories; interpolate broken trajectories.
Filter and resample data.
Create a kinematic model of your subject from the static capture.
Model body segments and joints.
Output results to file.

BodyBuilder has a number of other features. Please refer to the BodyBuilder manual for more information.

Data Editing
The majority of your data editing will have been completed in the Workstation, but BodyBuilder provides a couple of extra tools that Vicon users often find useful. The editing functions include interpolation, trajectory manipulation, filtering and the re-sampling of trajectories to a different sample frequency.

The graph editing functions provide a range of tools allowing you to move points and alter trajectories as well as create new points and new trajectories. You can, if you desire, make substantial changes to the original data so that the output no longer represents what was captured. You should be aware of the degree to which the data has departed from the original, particularly if the results are to be interpreted as an accurate and objective study of human movement.


Graph Edit window of c3d data

Filtering
BodyBuilder allows you to filter either selected or all trajectories. The effect is to smooth the trajectories, ironing out sharp features by passing only low frequencies. You can select the frequency characteristics of the filter function. Great care should be taken with applying filters as they can reduce the frequency content of the data. For example, the motion of body parts such as feet or fists, which naturally have high-frequency components, may become blurred by excessive filtering. They also have the effect of reducing the excursion of a point from the overall direction of its movement. With repeated heavy filtering, the trajectory of a marker which was moving in a roughly linear path, would ultimately become a straight line. Similarly, if a marker is moving in a circular path, without overall displacement, the effect of filtering will be to reduce the radius of curvature of the path. If repeated, this would ultimately bring the point to a standstill.
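The smoothing effect described above can be illustrated with the crudest possible low-pass filter, a three-point moving average. This is only a sketch of the principle; BodyBuilder's own filter lets you choose the frequency characteristics and is not this simple average.

```cpp
#include <cassert>
#include <vector>

// Three-point moving average: a minimal low-pass filter. Sharp features
// are flattened, and repeated application shrinks any excursion of a
// point from the overall direction of movement, exactly the behaviour
// (and the danger) described in the text.
std::vector<double> smooth(const std::vector<double>& v)
{
    std::vector<double> out(v);
    for (std::size_t i = 1; i + 1 < v.size(); ++i)
        out[i] = (v[i - 1] + v[i] + v[i + 1]) / 3.0;
    return out;
}
```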


Filter dialog box

Spike Removal
You can also choose to remove spikes from your trajectories. Spike removal is a special case of the Delete and Fill function, in which the program automatically identifies such points and carries out a delete and fill on each one. Spikes occur when the reconstruction process has located a single point which is to one side of the position which would be expected from the surrounding points. If spikes are a frequent problem, you should consider re-calibrating to see if a better result can be obtained.

This function should be used with care. If there are a lot of noise spikes, you should consider filtering the whole trajectory rather than spike-removing a high proportion of points. However, if the problem is not excessive, spike removal and filtering will enable you to produce satisfactory results without repeating data capture.
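The delete-and-fill idea behind spike removal can be sketched as follows. This is an illustration of the principle only, not the actual algorithm: a sample that deviates too far from the average of its two neighbours is treated as a spike and replaced by that average.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Replace any interior sample that lies more than `threshold` from the
// average of its two neighbours, a minimal delete-and-fill on spikes.
std::vector<double> removeSpikes(std::vector<double> v, double threshold)
{
    for (std::size_t i = 1; i + 1 < v.size(); ++i) {
        double expected = (v[i - 1] + v[i + 1]) / 2.0;
        if (std::fabs(v[i] - expected) > threshold)
            v[i] = expected;   // delete the spike and fill from neighbours
    }
    return v;
}
```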

Resampling
If your desired output is required at a different frequency to that of the captured data, then you can re-sample the fully edited move. The re-sampling function works, in most cases, by fitting a curve which is a close approximation to the original data, and then using this to calculate new points. This function can produce new points at both a higher or lower sample rate than the original sample rate. There is a minor low-pass filtering effect.
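The idea of re-sampling can be sketched with plain linear interpolation. This is a simplification: the real function fits a close-approximation curve rather than straight lines, but the sketch shows how new samples are produced at a different rate, and why the operation is not reversible.

```cpp
#include <cassert>
#include <vector>

// Re-sample a channel from oldRate to newRate by linear interpolation
// between the original samples. Positions past the last original sample
// are clamped to the final value.
std::vector<double> resample(const std::vector<double>& v,
                             double oldRate, double newRate)
{
    if (v.size() < 2)
        return v;
    std::size_t n =
        static_cast<std::size_t>((v.size() - 1) * newRate / oldRate) + 1;
    std::vector<double> out;
    out.reserve(n);
    for (std::size_t i = 0; i < n; ++i) {
        double t = double(i) * oldRate / newRate;  // position in old samples
        std::size_t k = static_cast<std::size_t>(t);
        if (k + 1 >= v.size()) {
            out.push_back(v.back());
            continue;
        }
        double frac = t - double(k);
        out.push_back(v[k] * (1.0 - frac) + v[k + 1] * frac);
    }
    return out;
}
```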


RESAMPLING IS NOT REVERSIBLE. IF YOU ARE IN ANY DOUBT ABOUT RE-SAMPLING, SAVE THE DATA TO YOUR HARD DISK BEFORE USING THIS FUNCTION. YOU SHOULD ONLY RESAMPLE AT THE END.

Resample dialog box

Set BodyBuilder Up To Create a Kinematic Model of Your Subject.


The key reason why you will want to use BodyBuilder is its ability to generate skeletal data from your moves. Most graphics applications animate their characters by defining them as hierarchically linked skeletons consisting of joints and bones. BodyBuilder provides you with a powerful tool to convert labeled c3d marker data into Acclaim skeletal motion files. This format is, for the most part, classed as an industry standard for describing both your subject's skeleton and the underlying motion. For more details have a look at the description of the .asf and .amc file types in the Reference Guide. There are many different ways of describing the skeleton depending on the complexity of your application. BodyBuilder has its own language which allows you to develop your own models, and this is fully discussed with examples in the BodyBuilder manual. We have found that most Vicon users develop models for one of the following applications:

Visual Effects or FMV (Full Motion Video). The model tends to be a full description of the human form with many individual segments.

Computer Games Development. This type of model tends to use a reduced set of body segments due to the current constraints of the genre.

BodyBuilder can be used to develop models to describe any form.


Depending on your application, you should set BodyBuilder up to generate your desired outputs. We have provided two suitable models for the above applications on your installation CD.

ENSURE THAT YOUR MARKER FILE, MODEL FILE, MODEL PARAMETER FILE AND ACCLAIM AST FILE ARE ALL FROM THE SAME ORIGINAL MODEL.

The simplest way to ensure that BodyBuilder will process your data is to rename your chosen set of model files as subject.mkr, subject.mod, subject.mp and subject.ast and store them in your session directory.

The settings for the model can be changed in the Subject Settings dialog box. The first field in the box is a drop-down list of subjects for each of the trials loaded. Below this are the marker, model and parameter files currently selected for the given subject. These files may be changed using the Browse button beside each field.
Subject Settings dialog box

When an automatically labelled C3D file is first opened in BodyBuilder, the model and parameter files are assumed to have the same name as the marker set used to perform the auto-labelling. Thus, a subject labelled using the file Wizard.mkr is assumed to be compatible with the model Wizard.mod and the parameter file wizard.mp, if these are available.


If you have never created a skeleton model for your subject then use the default wizard.mp model parameter set.

The set of model parameters provided is suitable for a male subject of average size and it is possible to generate both skeleton and motion files without any adjustment. If you need to adjust the joint parameters to match your subject, select Model|View Parameters, which will appear as a text window. The main values you may wish to adjust are as follows:

Knee Width.
Ankle Width.
Elbow Width.
Wrist Thickness and Rotations.
Hip Joint Centres.
Pelvis Tilt.
Thorax Offset and Tilt.
Shoulder Offsets.
Head Offset and Tilt.

For a full discussion on the variations in these values please refer to the BodyBuilder manual.

Create a subject.asf File


The Create asf command is intended for those users who want to produce an Acclaim Skeleton File using a BodyLanguage model and a .c3d file of static data. The .asf file produced is based on a template file with the extension .ast. The Create ASF command carries out the same procedure as the Run Model command. The result of running such a model is a text file with the extension asf. The .asf file created by this command will have the same name as the subject or it will be saved as anonymous.

If the trial data currently displayed in an active Workspace window has a single subject (performer) with no label prefixes, the subject is identified as "Anonymous" in such cases. If the trial data currently displayed has subject-identifier label prefixes, then it is possible to run the model in such a way as only to take effect for one named subject.


This is useful if more than one subject appears in the data. The currently named subject is shown at the top of the marker list bar on the right of the Workspace.

The Run Model command applies the currently loaded BodyLanguage script (.mod file) and parameters (.mp file) to the current data (.c3d file). The sequence of operations is as follows:

Delete any previous model outputs in the data.
Read the .mp and .mod script files. The .mp file is added to the beginning of the .mod file.
Carry out kinematic modelling as defined in the script.
Calculate the required outputs.
Update the Workspace window to show the outputs.
Update parameters (if required).

If the above procedure fails, it is most likely to fail at the kinematic modelling stage. An error message will appear highlighting the nature of the error and, if appropriate, opening a text editor window. Using the Run Model for Subject command allows the use of different models for subjects with different names appearing in the same trial. In the creation of the .asf file, the subject calibration (or static pose) is used to calculate the bone lengths for the particular actor being captured. It is important to ensure that the actor does not move during capture. The Acclaim format is flexible about the actual position of your subject, but you should ensure that the elbows and knees are bent. This is necessary because the bones attached to the knee and elbow are calculated from just three markers. If these three markers appear colinear, they effectively become only two defining markers and it is no longer possible to
Left: Create subject asf from static motorbike pose Right: The synthetic pose generated by Create ASF


define the two required bones. In the latest models, the actual position of the subject in the static trial is no longer used as the base pose. A synthetic pose, in which all joints are normalised and aligned to each other, is used instead. In summary, to generate your subject.asf, load the subject's static c3d trial. Check that you have selected the correct model, model parameter, marker and ast files. Select Model|Create ASF or hit the icon on the toolbar.

Create Your .amc Motion Files


Once you have generated a subject.asf file, you can process all your moves and create Acclaim motion files known as .amc files. You create a subject.trial.amc motion file using a BodyLanguage model and a .c3d file of motion data. The Create amc command, again, carries out the same procedure as the Run Model command, with the result being a text output file with the same name as the input .c3d file and the extension .amc.

AN .ASF FILE IS REQUIRED BEFORE THIS COMMAND CAN BE USED.

You can also process .c3d files in batch mode to create multiple .amc files.
Skeletal data output generated by Create AMC


5.1.3 Mobius

Mobius is Vicon's proprietary motion capture editing and processing tool. It is intended to be a high level utility to address various difficulties in working with motion capture data, namely:

Motion Blending.
Motion Transforming.
Keyframing.
Inverse Kinematics.

This is an overview of the tools available in Mobius. For more information consult the Mobius manual.

Mobius isn't really a pipeline-type device. It's an artistic tool developed by Vicon to let you edit your skeletal data directly. You can import your skeleton model (subject.asf) and then attach the motion file (.amc) to form a motion scene. The scene in Mobius is the highest level working unit and is broadly the working document. Scene files have a '.mos' file extension. The scene follows the classical model of a scene in that it is a continuous piece of footage, shot in one location and containing one or more characters, each with a script (motion) to follow. Where the model is broken is in the absence of 'shots' as Mobius is not concerned with the behaviour of the camera except for viewing. There is one scene per session of Mobius. When Mobius starts up, you are presented with an empty scene named 'Untitled'. Whenever a Motion or Skeleton is loaded, it becomes part of the scene.

The Workspace is where you view and interact with the 3D spatial aspects of your characters and motion. Loaded skeletons appear as rendered box or diamond topologies or, if a mesh is available and active, as rendered smooth-shaded meshes. Loaded motions appear as rectangular ribbons and represent the trajectory of the root bone of the skeletal hierarchy through time. The orientation and translation of bones and trajectories can be directly manipulated using an appropriate tool and interaction mode. Mobius lets you alter the appearance of your character from the standard 'chip person' by applying a polygon mesh which has been topologically aligned around the skeleton in a proprietary CG package.
Hello World as a scene in the Mobius workspace

The Mobius-supplied mesh, MoBecks.obj, has been pre-aligned to the MoBecks.asf skeleton file in this manner.

Motion Transforming
Motion Transforming is usually used to correct problems when motion captured for one Skeleton (the Source) is applied to another Skeleton (the Target) with the same Bone hierarchy but a different base pose or bone lengths. The aim of the operation is to align the end effectors (usually the Feet) of the Target Skeleton to those of the Source. The following step-by-step guide will help you:

Load the Target Skeleton (*.asf).
Set the Source Motion filename (*.amc) in the Apply Motion Transform dialogue.
Set the Source Skeleton (i.e. the Skeleton for which the Source Motion was captured) filename (*.asf) in the Apply Motion Transform dialogue.
A new motion, the transformed motion, will appear in the Timebar.
Apply Transform Locks to the end-effectors which need to follow the original end-effectors.
Adjust the root trajectory translation if necessary (for example, so that the feet still touch the floor). Et voilà!

An example of motion transforming a long legged subject

Motion Blending
You can create one continuous Motion from two separate Motions by blending the end of the first Motion with the beginning of the second. The following step-by-step guide will help you blend one move into another. Success is dependent upon having suitable Motions for blending: they should be as similar as possible over the transition period.

Load a Skeleton.
Load two Motions. The two Motions should be loaded 'on to' the same Skeleton. When viewed, the Skeleton should perform one move and then the other.
Align the Motions in space using the Move tool and the Rotate tool. Move the second Motion so that it starts near to where the previous Motion ends. As you are going to overlap the Motions in time, you will also need to overlap the Motions in space.
Overlap the Motions in time. On the Timebar, click on the second Motion and drag it to the left. The overlap becomes the transition.

It can help to loop over the transition. Double-Click on the Timebar Ruler to enable the Replay Loop markers and then position them either side of the Blend.
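The crossfade that happens over the transition can be sketched numerically. The exact weighting Mobius applies is not documented here, so a simple linear ramp on the root trajectory is assumed for illustration:

```python
import numpy as np

def blend_motions(first, second, overlap):
    """Crossfade two root-trajectory arrays (frames x 3) over `overlap` frames.

    The last `overlap` frames of the first motion are mixed with the first
    `overlap` frames of the second using a linear weight ramp; everything
    outside the transition is simply concatenated.
    """
    first, second = np.asarray(first, float), np.asarray(second, float)
    w = np.linspace(0.0, 1.0, overlap)[:, None]   # weight 0 -> 1 across transition
    transition = (1.0 - w) * first[-overlap:] + w * second[:overlap]
    return np.vstack([first[:-overlap], transition, second[overlap:]])

a = np.zeros((10, 3))            # first motion: root parked at the origin
b = np.full((10, 3), 10.0)       # second motion: root parked at (10, 10, 10)
blended = blend_motions(a, b, overlap=4)
print(blended.shape)             # (16, 3): 10 + 10 - 4 frames
```

Note how the overlap length sets both the duration of the transition and the total length of the blended result, just as dragging the second Motion on the Timebar does.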

Blended Moves

Inverse Kinematics and Bone Locks


Mobius' real-time inverse kinematics solver is used to lock bones to 3D points by position and, if required, orientation. The IK solver works up the chain from the locked bone until it reaches a Terminator Bone. There are various IK parameters that may be tweaked to achieve the desired result and performance. The IK Terminator is a bone beyond which the IK solver has no effect. For example, if you lock your character's hand with a Bone Lock and you don't want the character to lean over to reach the goal, you could make the shoulder a Terminator.
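The terminator idea can be sketched with a toy solver. This is not Mobius' algorithm (which is not documented here); it is a minimal Cyclic Coordinate Descent IK on a planar chain, where joints below a terminator index are simply never adjusted:

```python
import math

def fk(lengths, rel_angles):
    """Forward kinematics: joint positions of a planar bone chain."""
    pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for length, theta in zip(lengths, rel_angles):
        a += theta
        x, y = x + length * math.cos(a), y + length * math.sin(a)
        pts.append((x, y))
    return pts

def ccd_ik(lengths, rel_angles, target, terminator=0, iters=100):
    """Cyclic Coordinate Descent IK on a planar chain.

    Joints with index below `terminator` are frozen, mimicking the effect
    of a Terminator Bone: the solver has no influence beyond it.
    """
    th = list(rel_angles)
    for _ in range(iters):
        for j in range(len(th) - 1, terminator - 1, -1):
            pts = fk(lengths, th)
            jx, jy = pts[j]                      # joint being adjusted
            ex, ey = pts[-1]                     # current end effector
            current = math.atan2(ey - jy, ex - jx)
            wanted = math.atan2(target[1] - jy, target[0] - jx)
            th[j] += wanted - current            # swing effector towards target
    return th

# Two-bone "arm" reaching for a point within range; all joints free here.
# Setting terminator=1 instead would freeze the base joint, like making
# the shoulder a Terminator so the character does not lean.
angles = ccd_ik([1.0, 1.0], [0.1, 0.1], target=(1.0, 1.0))
end = fk([1.0, 1.0], angles)[-1]
print(round(end[0], 3), round(end[1], 3))  # 1.0 1.0
```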


An IK lock of the arm with terminator displayed

Bone Locks apply over an adjustable range of motion, as opposed to their other form, Transform Locks, which apply for the duration of a motion.

Bone Locks are usually used to correct problems such as footslip caused by problems in the capture pipeline or being introduced after blending. The Bone Lock uses the Inverse Kinematics solver to keep the Bone as close to the Bone Lock as possible. Bone locks may be rotated if the bone is to match the orientation of the Bone Lock.


5.1.4 Exporting to Your Graphics Package


The main purpose of kinematic modelling is to produce files suitable for driving animation packages. This section describes the procedure for importing the data into the following packages:

3D Studio Max
Maya
Softimage 3D
Nichimen Graphics N-World
Wavefront Kinemation
Alias PowerAnimator v7

In each case, the animation software requires Acclaim-format files (AMC and ASF files). The following information is for guidance only. We have found that our existing users have many different ways of getting their captured moves onto their characters.

It is assumed below that a suitable means of porting (transferring) text files from the BodyBuilder PC to the animation system has been established. Some familiarity with the packages concerned is also assumed.
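Since every package below consumes the same pair of text files, it can help to see how simple the .amc side of the Acclaim format is. The following reader is a sketch only: it assumes the common layout of ':' header lines followed, for each frame, by a bare integer frame number and one line per bone (real files vary; the file name and bone names below are hypothetical):

```python
def read_amc(path):
    """Parse an Acclaim .amc file into a list of {bone: [channel values]} frames."""
    frames, current = [], None
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith((':', '#')):
                continue                      # skip headers, comments, blanks
            if line.isdigit():
                current = {}                  # a new frame begins
                frames.append(current)
            else:
                bone, *values = line.split()
                current[bone] = [float(v) for v in values]
    return frames

sample = """:FULLY-SPECIFIED
:DEGREES
1
root 0 0 0 0 0 0
ltibia 12.5
2
root 5 0 0 0 0 0
ltibia 13.0
"""
with open("demo.amc", "w") as fh:   # hypothetical file for illustration
    fh.write(sample)
frames = read_amc("demo.amc")
print(len(frames), frames[1]["ltibia"])  # 2 [13.0]
```

The .asf file is the static half of the pair: it names the bones, their lengths and their degrees of freedom, which is what gives the per-frame channel lines above their meaning.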

3D Studio Max
The Acclaim Import Plug-In allows motion capture data saved in the Acclaim file format to be read into 3D Studio Max. The motion capture data is stored as a pair of files: the Acclaim Skeleton File (.ASF) contains the 'static skeleton', while the Acclaim Motion Capture File (.AMC) contains the motion information that is applied to the skeleton. The Import plug-in works in conjunction with the Bone Controller, which allows the motion to be fine-tuned in 3D Studio Max. The Bone Controller Plug-In is documented separately later in this section.

The plug-in file ACCIMP.DLI should reside in the plug-ins directory below where 3D Studio Max was installed, for example: C:\3DSMAX\plugins\ACCIMP.DLI. The file can be copied here using the Explorer | File Manager. The next time Max is run it automatically connects to the plug-in. To check the list of plug-ins available, select File | Summary Info... and press the Plug-In Info... button on the Summary dialog that appears. The Acclaim Import Plug-In should appear towards the end of the list. MAX 2 requires a new build of ACCIMP.DLI, distributed in the MAX2 Plug-Ins directory.

The import of motion capture data is controlled through the Acclaim File Import dialog. The dialog is accessed by selecting the File/Import... menu option and then selecting either an .ASF or an .AMC file from the import browser.
Acclaim File Import Dialog

Importing Static Skeletons


Select File/Import... to open the import file browser, and then select Acclaim from the file type combo-box. You can now browse for and select an .ASF file. Selecting an .ASF should open a dialog box like the one above. To import a skeleton without motion, press OK, leaving the AMC field blank.


Importing Skeletons and Motion


Select File/Import... to open the import file browser, and then select Acclaim from the file type combo-box. You should now be able to browse for and select a .ASF or an .AMC. The file name you select appears in the appropriate text field on the dialog. Use the browse button to select the other file required. If a .AMC was selected initially, the .ASF field of the browser will automatically be filled with the .ASF generated by BodyBuilder.

Press OK to initiate the import. The skeleton section of the dialog allows you to select the destination for your motion. The skeleton can already exist in MAX (Locked Selection Set) or it can be loaded from a file (Skeleton File (ASF)).

Load the Skeleton asf Dialog.

Applying Motion to Skeletons in MAX


If a skeleton, either static or animated, has already been imported into MAX, motion can then be applied to it. By importing motion onto the same skeleton using different Max Start Frame values, motions can be chained together. The skeleton must have been made using the Bone Controller.

To apply motion to a skeleton you must first select the skeleton that you want to apply the motion to. The easiest approach is to double-click on the skeleton root (the pelvis). Your selection must be locked as the destination skeleton. To lock the selection, press the space-bar. The closed pad-lock icon (above) indicates a locked selection.


Front view of selected skeleton in Max. Note that the skeleton is locked as the destination.

The locked skeleton in Acclaim File Import Dialog.

Then select the .AMC file in the usual manner; Locked Selection Set should already be checked as the destination. Press OK. The motion section allows the user to specify the bones to be made in the scene, and the controllers that move the bones in the scene.

The bone controller settings dialog section.


The Animation Controller


Radio buttons on the dialog allow you to select the controller type used to store the motion information and interact with the skeleton. The Bone Controller is documented separately. Importing large motion files with the PRS Controller may take several minutes. The progress is indicated at the bottom of the window. The Cancel button by the progress bar cancels the rest of the motion import, so pressing Cancel at 50% will still import the first half of the animation.

Bones Pro
The radio buttons on the Bone section of the dialog allow you to select whether the skeleton is to be made with the bones system that ships with MAX, or as 'boxes' suitable for direct input to Bones Pro. The check box Set Box Width lets you specify a width for the boxes in current MAX units. The default is to produce boxes using a pre-defined ratio. The motion section allows the user to specify a source motion, the range of the motion you want, and the destination time in MAX.

The Motion section in Acclaim File Import Dialog.

The base pose for the skeleton appears at frame 0 of the MAX animation. Typically, the base pose is used to attach the 'skin' to the skeleton (using additional software such as the Physique plug-in of Character Studio). To import a section of the motion, select the Import: Some radio button; you can then set the AMC Start Frame field to the first sample to be extracted and the Frames to Import field to the number of samples to import.


Set the Motion Capture Frame Rate field to the frame rate used for the capture.

The Max Start Frame field is the first destination frame in MAX, specified in the current MAX frame units.
Setting a specific starting frame using the import motion dialog.

From the above dialog, the AMC Browse... button was pressed and FullBody.AMC selected. Then the Frames to Import field was set to 100, the Max Start Frame field set to 25 and the AMC Start Frame field set to 100. Pressing OK on the above dialog will import samples 100 - 200 from FullBody.AMC. Assuming that the MAX frame rate is set at 30 frames per second, the skeleton imported into MAX will appear animated between frames 25 and 75. The check box Key Reduction turns on key-frame reduction for all the data that is imported. This makes the motion easier to work with, but the extra computation slows the import process down. The threshold parameter alongside the check box indicates the acceptable amount of deviation from the original rotation data, in degrees. The default is half a degree. A higher threshold will remove more keys. The key-frame reduction feature is only effective when using the Bone Controller.
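The frame arithmetic in that example is worth writing out. The manual's figures (100 samples landing on MAX frames 25-75 at 30 fps) imply a 60 Hz capture rate, which is assumed below:

```python
def max_frame_range(frames_to_import, capture_rate, max_rate, max_start):
    """Map an imported AMC sample range onto MAX frame numbers.

    The imported samples span frames_to_import / capture_rate seconds,
    which occupy that duration times max_rate frames in MAX.
    """
    duration_s = frames_to_import / capture_rate
    return max_start, max_start + round(duration_s * max_rate)

# 100 samples captured at 60 Hz occupy 50 frames at MAX's 30 fps,
# matching the example of animation between frames 25 and 75
print(max_frame_range(frames_to_import=100, capture_rate=60,
                      max_rate=30, max_start=25))  # (25, 75)
```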

If there are any gaps in the motion capture data (zero filled), activating the Interpret Zeros as Undefined check box allows Max to interpolate the missing values. You can also export the motion data directly from Vicon Workstation in the CSM file format.


Perspective view of subject.asf moving around

Maya
The Maya Acclaim plug-in comes with the Maya software or can be downloaded from the Alias|Wavefront web site. Load the plug-in using the Plug-in Manager and use the ImportAcclaimWnd MEL script, also supplied, to call the functions of the plug-in.

Softimage 3D v3
1. In the Motion module, select Channel -> Get Motion File -> Acclaim_Skeleton.
2. In the dialog box, specify the location of the ASF file and then the AMC file and click OK. Softimage now builds the IK chain automatically and imports all the channel data held in the AMC.

If you just want to import the skeleton without any movement, specify only the ASF file location in the dialog box and leave the AMC file reference blank. The translator will then import just the ASF file and build the IK chain without any movement information.

Nichimen Graphics N-World


Importing the ASF:

Select "Add New Object" from the GeoMenus bar.
From the "New Object" popup window that appears, select "Read" and a file dialog is displayed.
Against "File Type" select the "asf" button.
Reference a valid ASF in the Filename box.
Click "Read In Object" and the IK chain is built in the Geometry window.

Having imported the ASF in this way and built the IK chain automatically, you can now go through the usual process of applying a skin to the chain, as described in the Nichimen manuals.

Applying the AMC.


Go to the Dynamics window and create a new script if one is not already available.
(Click-R) on the active script. From the "Script Operations" popup window that appears, select "Add Parallel Subactions".
From the "Add Parallel Subactions" popup window that appears, select "Add Several", select 2 and click "Do It".
Go to the upper of the two newly created subaction windows and (Click-M) to edit the subaction's properties.
From the "Operation" box, (Click-M) to select the geometry class required. From the class list select "Skeleton Animation" and the "Dynamic Operation" window will appear.
Select "Read Acclaim Data" from this list and the list of subaction properties will change.
In the skeleton box, (Click-M) to list the currently loaded IK chains and select the relevant skeleton.
Change the skeleton directory if necessary and reference the ASF file used to create the skeleton in the ".asf File Name" box.
Select the directory you wish to load the AMC file from in the "Motion Dir" box.
(Click-R) in the ".amc File Name" box to display the AMC files in the referenced directory. Select the AMC you wish to apply to the skeleton.
Click "Do It" and the name of the upper subaction will change to "Read Acclaim Data".
Now go to the other subaction and (Click-M) to display the subaction's property list. From the "Operation" box, (Click-M) to display the list of operation classes and again select "Skeleton Animation". This time, from the "Dynamic Operation" window, select "Update Skeleton".

Back in the subaction properties window, (Click-M) on the "Skeleton Name" box and select the name of the skeleton you want to apply the AMC to from the popup list. Click "Do It" in the subaction property window and you can now preview the motion in the Geometry window, where your skeleton will be loaded with the AMC movement information.

Wavefront Kinemation
Kinemation uses a command-line translator to convert the Acclaim files into Kinemation's native .KIN and .BOD files. Use of the translator is described below, but for the latest information consult your Kinemation documentation, or run the translator without any arguments for basic help.
acclaim [-hn2] [-b bod_file] [-k kin_file] [-c cpt_file] asf_file amc_file

-h            Print this help message
-b bod_file   Write a .bod file to this filename
-k kin_file   Write a .kin file to this filename
-c cpt_file   Write a .cpt file to this filename
-n            Do not generate a .bod file
-2            Generate a version 2.0 .bod file

asf_file      An Acclaim .asf filename
amc_file      An Acclaim .amc filename

Defaults:
acclaim -b <acclaim>.bod -k <acclaim>.kin <acclaim>.asf <acclaim>.amc

where <acclaim> is the prefix of the .amc file. The default is to generate .bod and .kin files with the same name as the .amc file.

The .kin file is written to $WF_SCMP_DIR. The .bod file is written to $WF_OBJ_DIR. If there is a -b and no -k, no .kin file is generated.

Example: acclaim -b bob.bod -k dance.kin bob.asf dance.amc

Alias PowerAnimator v. 7
The Alias Acclaim reader comes in two forms: Unix command-line programs, and integrated plug-ins which extend PowerAnimator v7. Thus you can use them directly from Alias or call them from scripts. There are two tools, which each use the Acclaim formats in different ways. The first tool, AlImportASF, constructs an Alias-style skeleton from an Acclaim ASF definition. This is a static structure that contains no animation but takes on the initial skeleton pose specified in the ASF file. The second tool, AlApplyAMC, uses both an ASF and an AMC file to add animation to an Alias skeleton that was previously created using AlImportASF. In typical circumstances, you will first use AlImportASF to construct a correctly proportioned skeleton from an ASF file. From this starting point, you will use the modelling tools in Alias to add geometry on top of the skeletal framework. Alias gives you a variety of features to build skin, clothes, fur, muscles, and other shapes that will move realistically when the underlying skeleton moves. When your model is complete, you will use AlApplyAMC to add animation to your character. It is likely that you will capture many different motions and use AlApplyAMC to apply each of them to the same original skeleton.

How are Alias and Acclaim Skeletons Different?


Alias and Acclaim skeletons play a similar role in the process of character animation in that they both represent simple articulated structures that can be used to animate more complex geometry or skin. However, there are many differences in the way the two skeleton formats are implemented, and these differences affect how one format can be converted to the other.

One main difference between the two formats is that Acclaim skeletons are constructed from bones, whereas Alias skeletons are made up of joints. A typical Acclaim skeleton of a human form might contain bones named "femur", "tibia" and "foot", while a parallel skeleton in Alias will probably have joints named "elbow" and "ankle". An Acclaim skeleton can describe a single length of bone using one part, but an Alias skeleton needs two joints to give a bone length.

The formats also differ in the way rotation order is handled. Every rotation represented in the Acclaim format is associated with a specific rotation order. Not only can Acclaim rotations be in XYZ, ZYX, or any other order, but a single skeleton may contain mixed rotation orders. The elbow might have an XYZ rotation order while the wrist has a YZX ordering. In contrast, Alias skeletons use a fixed rotation order of XYZ.

Rotations also differ between the two formats in that Acclaim bones rotate around a local co-ordinate system and Alias joints rotate around a global one. Using the Acclaim format, a skeleton can be described in which a rotation about the Y axis of the "humerus" will cause the bone to twist along its length. In Alias, the effect of a Y rotation on the "humerus" depends on how the bone length is oriented relative to the global co-ordinate axis in its initial pose. (NOTE: Alias skeletons can specify a local co-ordinate system for each joint, but this can only be used for interactive local rotations, which are immediately converted and stored as rotations relative to the global axes.)
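The rotation-order point is easy to demonstrate numerically: composing the same three Euler angles in XYZ order and in ZYX order produces different orientations.

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

ax, ay, az = np.radians([30.0, 45.0, 60.0])
xyz_order = rot_z(az) @ rot_y(ay) @ rot_x(ax)   # X applied first, then Y, then Z
zyx_order = rot_x(ax) @ rot_y(ay) @ rot_z(az)   # Z applied first, then Y, then X
print(np.allclose(xyz_order, zyx_order))        # False: same angles, different pose
```

This is why a converter between the two skeleton formats cannot simply copy angle values across; it must re-express each rotation in the target format's order.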


5.2 Calibration - Theoretical Background


3D reconstruction depends upon knowing exactly where the measurement cameras are located, and in what directions they are pointing ("pose"). The process of determining these parameters is known as camera calibration. Vicon DYNACAL is a uniquely fast and convenient calibrator suitable for any size or shape of 3D measurement volume. Unlike most other calibration procedures, DYNACAL solves for the parameters of all cameras at the same time, ensuring that an optimal calibration is achieved. There are two quite distinct phases to DYNACAL, as represented by the two calibration devices: the wand and the L-Frame. The wand is used to generate the relative pose of the cameras from the dynamic phase of calibration. The L-Frame is used to establish the absolute position of the origin and the directions of the axes for the co-ordinate system in which the 3D measurements are made. This is achieved from data captured during the static phase.

5.2.1 Relative Pose (Dynamic Phase)


Remarkable though it may seem, it is possible to calibrate a group of cameras by observing a set of unknown points, provided those points are seen simultaneously by at least two cameras (the more the better). When the wand is in view of a group of cameras, a single pair of unknown 3D points is recorded as a pair of 2D points in the image of each camera. Over the period during which the wand is waved, a cloud containing hundreds or thousands of 3D point-pairs is built up. Provided the wand-waver takes appropriate care, this cloud of points should be distributed throughout the entire measurement volume. In the simplest case, the views of all cameras enclose the whole measurement volume and all cameras can therefore see the whole cloud of points. In practice, the body and clothing of the wand-waver obscures some of the points from some of the cameras for some of the time. More frequently, particularly in systems with large numbers of cameras, groups of cameras may have been deliberately set up to view only part of the volume. For example, if a sprint runner is to be tracked over 30 metres using 20 cameras, the most practical (and accurate) arrangement is to have overlapping camera views down each side of the track. In this case, DYNACAL has to generate an accurate and consistent set of calibration parameters between cameras that are not even viewing the same sub-section of the measurement volume. The strategy employed is as follows:

The entire wand-waving capture is sub-sampled to reduce the total number of observations to a constant value. It does not matter how long the wand-waver takes to complete his task, only that the entire volume is covered.

The cameras are then selected to find the pair with the best distribution of overlapping wand marker observations. An initial "seed" calibration is generated for this pair.

The calibration is then "propagated" outwards from the seed pair by including uncalibrated cameras whose observations overlap with calibrated cameras.

The 3D co-ordinates of all the observed wand marker samples are calculated, and the average length of the wand is found. The camera calibrations are adjusted until this average is equal to the true length of the wand, read from the user-set .cro file.
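The final wand-length adjustment amounts to a global scale correction. A sketch of the idea (the wand length and marker positions below are hypothetical, and DYNACAL's actual adjustment is more sophisticated than a single scalar):

```python
import numpy as np

def wand_scale_factor(marker_pairs, true_length):
    """Scale factor that makes the mean reconstructed wand length exact.

    `marker_pairs` is an (N, 2, 3) array holding the reconstructed 3D
    positions of the two wand markers over N samples; `true_length` is the
    known marker separation read from the .cro file.
    """
    pairs = np.asarray(marker_pairs, float)
    lengths = np.linalg.norm(pairs[:, 0] - pairs[:, 1], axis=1)
    return true_length / lengths.mean()

# Reconstruction came out 2% short of a (hypothetical) 500 mm wand
short = np.stack([np.zeros((3, 3)), np.tile([490.0, 0.0, 0.0], (3, 1))], axis=1)
print(round(wand_scale_factor(short, true_length=500.0), 4))  # 1.0204
```

Because every camera sees the same wand, a single known distance is enough to fix the overall scale of an otherwise relative calibration.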

5.2.2 3D Origin and Axes (Static Phase)


The relative pose phase of DYNACAL generates a set of calibration parameters for each camera that will result in "accurate" 3D reconstruction. However, in almost every case, these 3D points need to be estimated relative to some predetermined co-ordinate system in which, for example, "up" is the direction of gravity and the floor is level and horizontal. DYNACAL uses a mandatory, short (20 sample) observation of a static calibration object to establish this alignment. Vicon will often prove that your floor is not physically level and horizontal!

DYNACAL static calibration objects, of whatever size, always have four markers. Three of these markers are in a straight line, while the fourth, the "singleton", is away from this line. Two of the three markers in the straight line are significantly closer to one another than to the third, making it possible to identify all four markers uniquely. A convenient mechanical structure for mounting the markers according to this set of rules is in the shape of the letter L. All new Vicon systems are supplied with an L-Frame. However, other arrangements which obey the same rules will work just as well. DYNACAL proceeds as follows:

The static calibration object must be in view of at least one pair of cameras. It is then "reconstructed" into 3D co-ordinates using the dynamic phase calibration parameters. The user-set co-ordinates of the static calibration object are read from the .cro file and the reconstructed points are matched uniquely to them.

The "best" straight line is fitted through the three aligned 3D points. Very commonly, although not essentially, this is defined in the .cro file to be in the direction of one of the desired co-ordinate system axes. For example, the points ({0,0,1500}, {0,0,200}, {0,0,100}) define the direction of the z-axis, the default vertical for Vicon. The points ({1000,73,1500}, {1000,73,200}, {1000,73,100}) would define exactly the same z-axis direction. Furthermore, the points ({0,52,72}, {0,82,112}, {0,352,472}) are in a straight line but are clearly not parallel to any axis. They could nevertheless be used to define a 3D co-ordinate system.

The line which passes through the "singleton" and is exactly perpendicular to the best-fit line through the three aligned points is then found. If the alignment of the three points is set parallel to an axis, this perpendicular line must lie in the plane of the other two axes. Again, very commonly but not essentially, this line is defined in the .cro file to be in the direction of one of the two remaining co-ordinate system axes. By the end of this step, the alignments of all three co-ordinate system axes are defined. Finally, the co-ordinate system origin is moved until the three co-ordinates of the "singleton" defined in the .cro file are exactly correct. By the end of this step, both the alignment and origin of the desired co-ordinate system are matched to the camera parameters and the calibration is complete.
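The geometric construction behind the static phase can be sketched in a few lines. This is a simplification: it fits the line, drops the perpendicular and builds an orthonormal frame, but the real DYNACAL additionally applies the offsets defined in the .cro file:

```python
import numpy as np

def axes_from_lframe(aligned, singleton):
    """Derive an origin and axes from L-Frame markers (static-phase sketch).

    `aligned` holds the three in-line markers, `singleton` the fourth.
    Returns (origin, z_axis, x_axis, y_axis): z along the best-fit line,
    x along the perpendicular dropped from the singleton, y completing
    the set.
    """
    aligned = np.asarray(aligned, float)
    centroid = aligned.mean(axis=0)
    # best-fit line direction = principal component of the aligned markers
    _, _, vt = np.linalg.svd(aligned - centroid)
    z = vt[0]
    # foot of the perpendicular from the singleton onto the fitted line
    s = np.asarray(singleton, float)
    foot = centroid + np.dot(s - centroid, z) * z
    x = (s - foot) / np.linalg.norm(s - foot)
    return foot, z, x, np.cross(z, x)

# The .cro example points define the z-axis; a hypothetical singleton
# 1 metre away fixes the x direction and the origin
origin, z, x, y = axes_from_lframe(
    [[0, 0, 1500], [0, 0, 200], [0, 0, 100]], singleton=[1000, 0, 0])
print(origin, x)  # origin at the foot of the perpendicular, x towards the singleton
```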


5.2.3 Summary

There are several operational lessons to be learned from the steps outlined in the dynamic and static phases of calibration.

Wand Distribution. The wand should be waved through the whole of the measurement volume.

Viewing Overlap. It is essential that the views of distributed cameras overlap their "neighbours" (in field-of-view, not necessarily in camera position) to a significant extent. This effectively means that both wand markers must be simultaneously visible to at least three cameras in all sub-regions. It is not essential that there be a common area of overlap for all cameras. A camera at one end of the measurement volume need not share any part of its view with a camera at the other end.

Origin. The origin of the co-ordinate system does not necessarily lie at the corner of the L-Frame, or in line with the three aligned markers, and it is generally best not to set up a calibration with this configuration. The "singleton" is the only marker whose position is exactly defined, and the origin should be placed at a known offset from the centre of this marker. For this reason, the pin beneath the singleton marker on Vicon L-Frames is not adjustable.

Mis-aligned Static Markers. Any misalignment of the three nominally aligned markers can cause significant misalignment of the co-ordinate system axes. Furthermore, misalignment of the best-fit line through these markers can cause the intersection of the perpendicular line through the singleton to move along the best-fit line, creating an apparent offset error. The best way to avoid misalignment is to use the largest static calibration device conveniently available. This may be a large L-Frame, or it may even be better to use three markers on a long plumb line for an exact vertical, and a separate singleton marker. For example, if the singleton is 1 metre away from the best-fit line, an error of only 1° will cause the intersection point to move by 17mm!
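The 17 mm figure follows directly from trigonometry: a 1 degree tilt of the best-fit line moves the foot of the perpendicular by the singleton distance times the tangent of the error angle.

```python
import math

# A 1 degree error in the best-fit line, with the singleton 1 metre
# (1000 mm) away, slides the intersection point along the line by ~17 mm
offset_mm = 1000.0 * math.tan(math.radians(1.0))
print(round(offset_mm, 1))  # 17.5
```

The scaling is linear in the singleton distance, which is why a larger calibration device reduces the relative effect of a fixed marker-placement error.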

Marker Co-ordinate Descriptions. Although all three co-ordinates of the singleton are significant, the .cro co-ordinates that define the separation of the three aligned markers are used only for identification and need not be exact. To illustrate this, if any of the three markers on a calibration plumb-line were to slip along the line by a small amount, the calibration would be unaffected. Similarly, the distance along the perpendicular from the singleton to the best-fit line is used only for identification. However, the direction of this perpendicular is used and will influence the calibration.

By following the steps in the Manual, it is possible to calibrate Vicon very simply, quickly, and accurately using DYNACAL. However, the more detailed description of DYNACAL internal operations given above should allow calibrations to be optimised for all possible experimental conditions.


5.3 Reconstruction - Theoretical Background


3D reconstruction is based upon finding the intersections of rays projecting from the "optical centre" of each camera. The direction of each ray is determined by the following:

The 3D position and orientation (pose) of the camera, found by calibration.
The 2D position of the centre of the corresponding marker in the camera image.
The distortion corrections applied by linearisation.

The mathematics used to find the ray intersections is known as "resection" and is described in any standard textbook on photogrammetry. Mathematically, resection is a much simpler process than camera calibration but, in practice, it is complicated by a number of experimental factors which introduce noise into the measurements:

Marker images may be wholly or partly occluded.
Marker images may wholly or partly overlap.
Cameras have limited resolution.
The method of determining the marker image centre yields "sub-pixel" resolution.
Small residual errors exist in linearisation and camera calibration.

In order to establish intersections between imperfectly directed rays, an "intersection tolerance" must be used. This tolerance is controlled by the user through the reconstruction parameter Intersection Limit. Advice on setting reconstruction parameters may be found in the Production User Guide.

Inevitably, some rays intersect within the specified tolerance even when there is no physical marker present. The number of these "ghost markers" depends upon the number and distribution of real markers in the measurement volume. Unless something is done to reduce them, for a typical time-sample from a typical trial with 30-40 real markers, there would be more "ghost" intersections than real ones. The most powerful way of separating real from ghost ray intersections is to use the fact that markers generally move along relatively smooth 3D trajectories. It is thus possible to predict where a marker is expected to be on the basis of its recent kinematic history. Again, a tolerance must be applied to the prediction. The prediction tolerance is controlled by another parameter, Noise Factor (a multiplier applied to the average camera calibration residual), combined with current marker speed. An upper limit for the prediction tolerance is set by the parameter Prediction Radius. In practice, this method almost totally eliminates "ghost markers" due to rays that are associated with "established" trajectories.

But what happens before a trajectory becomes "established", or if a marker becomes so occluded that it cannot be reconstructed? All rays not assigned to "established" trajectories (at the start of a trial this means all rays) are tested for intersections. At each time-sample, these unassigned 3D points are tested against unassigned points in the previous four time-samples. If a 3D parabola can be fitted through a set of unassigned points in five consecutive samples, a new trajectory is started. The maximum permitted curvature of the parabola is controlled by the Maximum Acceleration parameter, and the tolerance of fit to the unassigned points is controlled by the Noise Factor parameter.
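The five-sample parabola test can be sketched as follows. This is an illustrative reconstruction only: the function name and the per-axis least-squares fit are our assumptions, not Vicon's published code, and the tolerances stand in for the Noise Factor and Maximum Acceleration parameters.

```python
import numpy as np

def fits_parabola(points, noise_tol, max_accel):
    """Could these five consecutive unassigned 3D points start a new
    trajectory?  Fit a parabola (constant-acceleration path) to each
    axis and check the fit residual against a tolerance (standing in
    for Noise Factor) and the implied acceleration against a limit
    (standing in for Maximum Acceleration)."""
    pts = np.asarray(points, dtype=float)   # shape (5, 3)
    t = np.arange(5.0)                      # sample index as time
    coeffs = np.polyfit(t, pts, 2)          # (3, 3): quadratic, linear, constant terms
    fitted = (t[:, None] ** 2) * coeffs[0] + t[:, None] * coeffs[1] + coeffs[2]
    residual = np.max(np.abs(pts - fitted)) # worst deviation from the parabola
    accel = 2.0 * np.linalg.norm(coeffs[0]) # second derivative of the fit
    return residual <= noise_tol and accel <= max_accel
```

A marker moving smoothly (for example, falling under gravity) passes the test; five scattered points do not.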

There are three further internal reconstruction procedures that are worthy of note:
- If the marker of an established trajectory becomes occluded from all but one camera, its trajectory is continued by finding the best intersection of the one ray with the trajectory prediction.
- If the marker of an established trajectory becomes totally occluded from all cameras, its trajectory is predicted for up to five time-samples. If a matching ray intersection is found, the trajectory continues. If no ray intersections are found which match the prediction for five time-samples, the trajectory is stopped.
- Lastly, if two or more marker images in a camera view overlap so closely that they cannot be separated in 2D, they may be combined into a single ray which then intersects with two or more real points. A shared ray is given less weight in 3D reconstruction than a ray which is unique to an intersection.


5.4 Autolabeling - Theoretical Background


Vicon 3D reconstruction is used to track markers without any reference to their relative positions. For example, a handful of markers thrown in the air are tracked just as well as markers attached to a walking subject. All Vicon markers are nominally identical -- they have no "identity" until they are attached to a particular location on a solid or articulated object. They then take on the identity of that location.

In mathematical terms, attachment to an object is a "constraint", and the constraints applied to a point can be used to identify it. Some attached markers are very "tightly" constrained to one another, for example those fixed to the head of a subject. Other groups of markers all lie on the same body segment but have more underlying "soft tissue"; examples are the front and back chest markers, the pelvis or waist markers, and markers on the arms and legs. These groups of markers are still constrained to one another but with a tolerance that varies from one individual to another, and even from trial to trial. Finally, there is a type of constraint between groups of markers on either side of a joint. For example, the markers on the fore-arm move in approximately circular paths with respect to the markers on the upper arm because of the hinge at the elbow.

By analysing all these types of constraint, Vicon autolabeling is able to identify individual markers attached to a subject with great reliability, even for complex moves like gymnastics. The constraints are characterised in 3 distinct steps:
- The user is required to indicate how markers are grouped and how groups are connected.
- The separations of grouped markers are measured (calibrated) for an individual subject.
- The inter-marker and inter-group constraints are characterised for an individual trial.

5.4.1 Grouping Markers

The marker groupings for the left arm of the Vicon default whole-body marker set are defined as follows:
LeftShoulder = LSHO,CLAV,T10
LeftUpperArm = LSHO,LUPA,LELB
LeftLowerArm = LELB,LFRA,LWRA,LWRB,LWRI,LWRE
LeftHand = LFIN

Each group lists the names of all the markers that it might contain. For example, the LeftLowerArm group contains: LELB,LFRA,LWRA,LWRB,LWRI,LWRE. However, as few as two markers, LELB and LWRI, may actually be used. LFRA is optional, and the pairs LWRA,LWRB and LWRI,LWRE are alternatives. The actual markers in a group are determined during subject calibration.

The constraining links between groups of markers are indicated in two ways. If two or more groups contain the same marker, they are recognised as linked. Otherwise the link can be shown explicitly:
LeftLowerArm,LeftHand
Note that the grouping and linking is strictly geometrical and is not based on closeness. The finger marker on the hand cannot be grouped with the markers on the wrist, because the proportional change in the distance between them is so great; this is why the link must be declared explicitly.

5.4.2 Subject Calibration
The starting point for Autolabeling is the subject calibration. This records the averaged 3D locations of the full set of identified markers in the calibration trial. As mentioned in the previous section, the subject calibration also indicates which markers are present. It is therefore imperative that no markers are added, removed, or moved between calibration and trial. No special subject pose is required for calibration. Indeed, if no static trial is available, it is perfectly satisfactory to extract a single time-sample from a motion trial and use that for calibration. However, only a single time-sample should be used since the average value of each moving marker position is saved.


5.4.3 Trial Statistics
Autolabeling of a trial starts by the software searching for "tightly constrained" groups, using marker separations calculated from the subject calibration as its initial template. At this stage, the two interactive autolabel parameters, Maximum Deviation and Minimum Overlap, are used. In order for a group to be recognised as "tight", all the marker trajectories present in the group during the subject calibration must also be present in the trial, uninterrupted, for a Minimum Overlap number of time-samples. During this period, the separations between each pairing of 3D points in a group must not change by more than the Maximum Deviation %. This stage should result in the provisional identification of one or more groups of markers. If it does not, autolabeling will fail.

The autolabeler next identifies the "root" group. This is the group identified by the name "Root" or, failing that, the group with the largest number of markers actually present in the subject calibration. The Thorax or Pelvis groups are commonly used. Using the predefined linkage tree, the autolabeler tests for groups directly connected to the root, and then for linked groups further down the kinematic hierarchy. The relative kinematics of all group connections are characterised throughout the trial.
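The tightness test can be sketched as follows. This is an illustrative reconstruction of the criteria described above; the function and argument names are ours, and Vicon's internal implementation is not published in this form.

```python
import itertools
import math

def group_is_tight(traj, calib_sep, min_overlap, max_deviation_pct):
    """Provisionally identify a "tight" marker group: every marker in the
    group must be simultaneously present for at least min_overlap
    consecutive time-samples, with each pair's separation staying within
    max_deviation_pct of the separation measured in the subject
    calibration.

    traj      -- {label: [3D point or None, one entry per time-sample]}
    calib_sep -- {(label_a, label_b): separation from subject calibration}
    """
    labels = list(traj)
    n_samples = len(next(iter(traj.values())))
    run = 0
    for i in range(n_samples):
        ok = all(traj[label][i] is not None for label in labels)
        if ok:
            for a, b in itertools.combinations(labels, 2):
                ref = calib_sep[(a, b)]
                sep = math.dist(traj[a][i], traj[b][i])
                if abs(sep - ref) > ref * max_deviation_pct / 100.0:
                    ok = False
                    break
        run = run + 1 if ok else 0   # count consecutive acceptable samples
        if run >= min_overlap:
            return True
    return False
```

A gap in any one trajectory resets the run of consecutive samples, so an interrupted group fails the Minimum Overlap requirement.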

5.4.4 Progressive Loosening of Criteria

By this stage, the autolabeler should be confident of the identity of some groups, and of the markers within them, for major sections of the trial. It then starts trying to fill in the gaps (creating more groups and linking them into the hierarchy) by progressively loosening the criteria for marker-pair separations, first in terms of the duration of overlap and then in terms of the maximum variability of marker separations. Overlaps down to a single time-sample are permitted, and an internal limit is set for separation variability. Eventually, all real markers should be swept up into linked groups from which they can be identified.

On occasion, the loosening of these criteria causes a group to be misidentified. If this happens, all groups further down the hierarchy are lost. A relatively common problem with this approach is that it is unable to differentiate between groups with matching upward linkages whose internal marker arrangements are almost identical. This may be true, for example, for the left and right thigh groups. The consequence is a left/right swap of the subject's legs. The remedy is very simple: place an additional marker on one of the thighs and include it in the model.


5.5 File Types


This section describes all the file formats used by Vicon. All Vicon data and parameters are stored in files. Most files are accessed automatically by the program, but a few text files can be edited using a suitable text editor, such as Notepad.

5.5.1 c3d File

A c3d file is the binary file created whenever video data is reconstructed, labelled, and saved. c3d files also contain analog data and parameters. By default, c3d files are saved in a directory specified by using the File | User Preferences... command and recorded in the user's usr file. Complete details of the structure of c3d files are available from Vicon, on request.


5.5.2 car File

car files contain all the parameters required for Vicon capture, analog, and reconstruction. There are two versions of car file, accessed and updated according to the current status of the program. Existing users should note that the <username>.car file is no longer used by Vicon.


default.car

IF DEFAULT.CAR IS ACCIDENTALLY DELETED, A NEW COPY, IN WHICH ALL PARAMETERS ARE SET TO ZERO, IS CREATED BY THE PROGRAM.

Only one copy of this version of the file exists. It contains the default parameters for the installation and is held in the \\Vicon\System\ directory off the main Vicon program directory. default.car is read, but cannot be modified, by the program. It may be edited by using a text editor such as Notepad. Alternatively, it may be overwritten by copying another version of car file, although this is not recommended.

<session_name>.car

To change the contents of your session car file, modify parameters when the session is open.

One <session_name>.car exists for every session. It contains the parameters to be used for trials in that session and is held in the session directory. A new <session_name>.car file is created, whenever a new session is started, by making a copy of default.car. By selecting System|Set Parameters, you can change the parameters stored in <session_name>.car to those of the selected session.

Parameters saved in the car file are modified by using any of the following commands:
System|Video Setup...
System|Analog Setup...
System|Force Plates Setup...
System|Control Setup...
System|Movie Setup...
System|Timecode Setup...
Trial|Reconstruction Parameters...

All versions of car file are text files complying with the profile (ini) format with enhancements required for the Vicon system.


5.5.3 cp File

A cp file contains the calibration parameters for a set of cameras. These parameters are used whenever data from these cameras is processed together for 3D reconstruction. cp files are initially created, together with the lp files to which they refer, in the \\Vicon\System\ sub-directory.

THERE ARE NO USER-EDITABLE ENTRIES IN A CP FILE.

The System | Calibrate... command opens a Calibration dialog. After a successful calibration (new calibration data capture or change of some other calibration parameter), when the OK button is clicked, the new calibration parameters are saved in a new mrcalib.cp file. This file always contains the most recent calibration. The previous mrcalib.cp file is renamed as mrcalib.cpb.

For 3D reconstruction, a valid calibration must be set for the session in which the trial data is held. This is done in one of the following ways:
- By using the System | Set Calibration... command.
- When a new session is created.
- When a calibration is performed during an open session.
In each case, the required cp file is copied, together with its associated tvd file and linearisation files, from the \\Vicon\System\ directory into the session directory. The cp and associated tvd file in the session directory are renamed to the session name.

What's in the cp file?

Here is an example of a section of a cp file created in a calibration:

!CP#1
D:\Vicon\System\example.cro
Animation L-Frame 500mm wand
5
170 9.5 70
800 9.5 70
1000 9.5 70
9.5 1000 70
500 0 0
0069_60
D:\Vicon\System\example.cro
Animation L-Frame 500mm wand
2 26359.5
-6173.83 -3207.72 2409.95
0.44402 -0.894644 -0.0495823
-0.225998 -0.0582741 -0.972383
0.867047 0.442963 -0.228062
2.2359e-008 1.05755
0076_60
..
..
..
D:\Vicon\System\example.cro
Animation L-Frame 500mm wand
0
..

The Vicon camera calibration (cp) file contains parameters for:
- Identifying the camera connected to a specific input channel.
- The principal distance of the camera and lens.
- The position of the camera's front nodal point.
- The camera's attitude.

The data is organised and delimited by lines, with space character delimiters between multiple parameters on a line:

Line 1    file type and version identifier.
Line 2    path and name of calibration reference object (.CRO) file.
Line 3    object identifier in CRO file.
Line 4    number of markers in reference object.
Line 5+   co-ordinates of points in reference object.

After Line 5, there is a block of 9 lines for each input channel for which a calibration exists. If no calibration exists for an input channel, only the first 4 lines appear.

Line c1     linearisation id (.lp filename) of camera on input channel (may be left blank if no calibration exists for this channel).
Line c2     path and name of calibration reference object (CRO) file.
Line c3     object identifier in CRO file.
Line c4     calibration format identifier (currently 2), principal distance of camera and lens in Vicon internal, post-linearisation units. If no calibration exists for the input channel, this line contains 0.
Line c5     position of camera (front nodal point of lens) in calibration units (normally mm).
Lines c6-8  attitude (direction cosine) matrix of camera.
Line c9     square of average angular error for camera, calibration residual in calibration units.

Lines c1-9 are repeated for each input channel with calibration; lines c1-4 (excluding principal distance) are repeated for each input channel without calibration.

For Your Reference


The camera image dimension in the direction of raster scan lines is approximately 16384, so the camera view angle in the same axis is approximately equal to: 2*atan(8192/principal distance). Unlike the calculation of view angle from lens "focal length", this expression is independent of camera sensor size.


The angular error (radians) is the amount by which a ray from the calibrated camera misses the reference marker. The calibration residual (calibration units, normally millimetres) is the angular error multiplied by the mean distance from the camera to the reference markers. This is reported to the user after calibration as an indication of the likely reconstruction accuracy contributed by this camera.
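The two formulas above can be restated numerically. This sketch simply transcribes them into Python; the function names are ours, and the 26359.5 value is the principal distance stored for the sample camera in the cp file shown earlier in this section.

```python
import math

# View angle across the raster-scan axis, from the principal distance
# stored in the cp file (internal image width ~16384 units, half = 8192).
def view_angle_deg(principal_distance):
    return math.degrees(2.0 * math.atan(8192.0 / principal_distance))

# Calibration residual: angular error (radians) multiplied by the mean
# distance from the camera to the reference markers, in calibration
# units (normally millimetres).
def calibration_residual(angular_error_rad, mean_marker_distance):
    return angular_error_rad * mean_marker_distance

# The sample camera above stores a principal distance of 26359.5
# internal units, giving a view angle in the mid-thirties of degrees.
angle = view_angle_deg(26359.5)
```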


5.5.4 cro File

cro files are used to store the co-ordinates of markers on calibration reference objects, and are used during calibration. They should be kept in the \\Vicon\System\ sub-directory [refer to section 2.5]. A cro file is created and edited directly, using a text editor such as Notepad.

A cro file can contain one or more calibration objects, whose marker co-ordinates are stored in separate sections within the file. Each section starts with an [Object Name]. Object names must be no longer than 32 characters and may contain spaces. When a cro file is opened in the Calibration dialog, all object names included in that file are displayed in the reference object name list. On the line following the object name is a single integer indicating the number of markers (lines) in the object. Each subsequent line in the section contains 3 coordinates for a single marker within the object. Marker co-ordinates for a single object must share a common co-ordinate system and, during calibration, serve to define it as the current kinematic reference co-ordinate system. Marker co-ordinates must be accurately measured using some independent method. Here is the template for an object:
!cro#1
[Object Name A]
NumberOfMarkersA
MarkerX MarkerY MarkerZ
.
.
MarkerX MarkerY MarkerZ
[Object Name B]
NumberOfMarkersB
MarkerX MarkerY MarkerZ
.
.
MarkerX MarkerY MarkerZ

Don't forget to increase NumberOfMarkers when adding new reference points.
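Since the format is simple, line-oriented text, it is easy to read programmatically. A minimal reading sketch follows; the function name and error handling are ours, not part of Vicon.

```python
def parse_cro(text):
    """Parse the objects in a cro file into {name: [(x, y, z), ...]},
    following the template above: a !cro#1 identifier, then for each
    object an [Object Name] line, a marker count, and one co-ordinate
    line per marker."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    if not lines or lines[0].lower() != "!cro#1":
        raise ValueError("not a cro#1 file")
    objects, i = {}, 1
    while i < len(lines):
        if not (lines[i].startswith("[") and lines[i].endswith("]")):
            raise ValueError("expected [Object Name], got: " + lines[i])
        name = lines[i][1:-1]
        count = int(lines[i + 1])           # NumberOfMarkers for this object
        coords = [tuple(float(v) for v in lines[i + 2 + k].split())
                  for k in range(count)]
        objects[name] = coords
        i += 2 + count
    return objects
```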


5.5.5 ini File

workstation.ini File

IT IS NEVER NECESSARY TO MODIFY THE WORKSTATION.INI FILE USING A TEXT EDITOR.

The workstation.ini file is an ASCII text file used to define application-wide characteristics that are independent of the user. workstation.ini should reside in the same directory as the program file workstation.exe, unless an alternative is provided, with its full path name, as a command line argument to workstation.exe. All entries in the workstation.ini file are modified by the program, either by using commands such as System|Video Setup... and System|Analog Board Types..., or else automatically.

<username>.ini File

IT IS NEVER NECESSARY TO MODIFY A USER'S INI FILE USING A TEXT EDITOR.

This file type, one of which is generated for each user with the filename username.ini, sets program operating characteristics for each individual user. User ini files are held in the \\Vicon\System\ sub-directory. All entries in a username.ini file are modified by the program by using the command File|User Preferences... .


5.5.6 lp File

An lp file contains the linearisation parameters for one camera. These parameters are used whenever data from that camera is processed, either for calibration or 3D reconstruction. lp files are initially created in the \\Vicon\System\ sub-directory.

THERE ARE NO USER-EDITABLE ENTRIES IN AN LP FILE.

The System | Linearise... command opens a Linearisation dialog in which a drop-down list containing the current lp file names in the \\Vicon\System\ directory is offered for each camera to be (re-)linearised. If a camera is being linearised for the first time, a new identifying code (commonly the camera serial number) must be entered for that camera. After linearisation, this code becomes the name of the resulting lp file.

AS IT WILL BE USED AS A DOS FILE NAME, THIS IDENTIFYING NAME MUST CONTAIN NO MORE THAN 8 CHARACTERS.

For 3D reconstruction, a valid calibration must be set for the session in which the trial data is held. This is done in one of the following ways:
- By using the System | Set Calibration... command.
- When a new session is created.
- When a calibration is performed during an open session.
In each case, the required lp files are copied, together with the calibration files, from the \\Vicon\System\ directory into the session directory. This ensures that the correct linearisation files are stored with the data in the session.

What's in the lp file?

!LP#1
0069_60
+ -1
18205 4096
28216 1024
32243 286 -607 32767 -244 31945 -733 32767
819 7780 5733 18720
20 15
0.000279948 0.000270182
71 68 41 64 11 35 -6 16 -14 2 -17 -7 -17 -22 -10 -19 -12 -24 -2 -24 9 32 6 -24 20 -25 21 -22 22 -12 14 5 5 17 -11 37 -40 54 -65 75
59 52 29 39 5 24 -10 9 -18 -3 -22 -14 -21 -21 -17 -25 -10 -27 -3 -30 4 -31 14 -32 21 -29 27 -25 27 -17 22 -5 11 9 -5 25 -29 43 -62 62
..
..
..

The Vicon linearisation lp file contains parameters for:
- Scaling the horizontal and vertical co-ordinates in the TVD file into an optimal resolution for processing.
- Identifying the co-ordinates of the optical centre of the image.
- Correcting distortions in the image.
- Making separate corrections to either field of an interlaced frame.

Data are organised and delimited by lines, with space character delimiters between multiple parameters on a line:

Line 1    file type and version identifier.
Line 2    camera identifier referenced in calibration cp file.
Line 3    + indicates valid data follows, -1 is a sub-version identifier.
Line 4    horizontal scaling; multiply by first integer, divide by second.
Line 5    vertical scaling; multiply by first integer, divide by second.
Line 6    3x3 affine transformation matrix (linear with offset).
Line 7    grid spacing after scaling (scaled x = scaled y), x centre, y centre, and approximate principal distance. Accurate principal distance is found during calibration.
Line 8    numbers of detected columns and rows of correction points.
Line 9    average horizontal and vertical distortion (multiply by 100 to get displayed values in %).
Line 10   horizontal and vertical correction pairs for 1st row.
Line 11   horizontal and vertical correction pairs for 2nd row.


For Your Reference


For maximum integer resolution, both horizontal and vertical scale factors are chosen as the largest values which result in scaled co-ordinates < 16k.

The values a1 b1 c1 d1 a2 b2 c2 d2 stored on Line 6 are derived from the following transformation:

(x y 1) -> (x y 1) * M,  where

    / a1/d1  a2/d2  0 \
M = | b1/d1  b2/d2  0 |
    \ c1     c2     1 /

The matrix is chosen to minimise:

Sum over i,j of { cpt[i,j] - opt[i,j] * scaling * matrix }^2

where:
cpt[i,j] is the correct position of the i,j point (in the sample above: (i*819, j*819)).
opt[i,j] is the observed position of the i,j point.

The grid centre is found by multiplying the spacing by the number of spaces (horizontal or vertical), dividing by 2, and rounding down. In the sample above:
819*(20-1)/2 -> 7780 and 819*(15-1)/2 -> 5733

The steps for correcting a measured point are:
- Scale the measured co-ordinates.
- Apply the transformation matrix.
- Divide by the grid spacing to find where it lies on the grid.
- Linearly interpolate the 4 closest grid corrections and add the result to the scaled, transformed point.
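The four steps can be sketched as a single function. This is an illustrative simplification, not the shipped code: the argument layout (a pre-reduced scale pair, a full 3x3 matrix, and a rows-by-columns-by-2 correction array) is our reading of the lp file contents described above.

```python
import numpy as np

def linearise_point(raw_xy, scale, matrix, spacing, corrections):
    """Apply the four linearisation steps to one measured point.
    raw_xy      -- (x, y) as measured
    scale       -- (sx, sy), the scaling ratios reduced to floats
    matrix      -- 3x3 affine transformation from the lp file
    spacing     -- grid spacing after scaling
    corrections -- array of shape (rows, cols, 2) of grid corrections"""
    # 1. Scale the measured co-ordinates.
    x, y = raw_xy[0] * scale[0], raw_xy[1] * scale[1]
    # 2. Apply the affine transformation (row-vector convention).
    v = np.array([x, y, 1.0]) @ np.asarray(matrix, dtype=float)
    x, y = v[0], v[1]
    # 3. Locate the point on the correction grid.
    gx, gy = x / spacing, y / spacing
    i, j = int(np.floor(gx)), int(np.floor(gy))
    fx, fy = gx - i, gy - j
    # 4. Bilinearly interpolate the 4 surrounding corrections and add.
    c = (corrections[j, i]         * (1 - fx) * (1 - fy) +
         corrections[j, i + 1]     * fx       * (1 - fy) +
         corrections[j + 1, i]     * (1 - fx) * fy +
         corrections[j + 1, i + 1] * fx       * fy)
    return x + c[0], y + c[1]
```

With an identity matrix, unit scaling, and an all-zero correction grid, the point passes through unchanged; a uniform correction grid simply shifts it.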


5.5.7 mkr#2 File

The current format of the mkr file, introduced with version 2.5 of Vicon 370, comprises one or more sets which contain some or all of the following elements:
- Lists of marker labels.
- Links between labels for display purposes.
- Lists of labels which identify rigid body segments.
- Links between rigid body segments.

There will frequently be two sets: a display set and an autolabel set. However, there may be more than two sets if required, for example, in a multiple-subject trial. Each set is identified by a name in square brackets which appears above the elements making up the set.

The file begins with the type identifier !mkr#2 followed by the name of the first set, which might be [Autolabel].

Display Sets
The purpose of a display set is to allow a certain view of the data by focusing on a particular combination of points and links. A display set will only contain two elements: a list of labels, and a list of links between those labels. The mkr#1 file used in previous versions is equivalent to a display set.

List of Marker Labels


The first element of the display set is a list of marker labels, which are available for attaching to trajectories. Each label may be a maximum of 30 characters long, and (optionally) may be followed by a description of up to 32 characters. There should be a space (or spaces) between the label and the description, if any. Labels should not contain blank spaces or punctuation marks. Each label (and description) should occupy a line of its own.

The list may include labels which are not necessarily used in every trial. For example, many models allow for alternatives and variations in the marker set to be used, and the label list can accommodate this by including all the alternatives, although any one trial will use only some of the labels available. It is normal for labels to be short (four characters is adequate) and in capital letters, but this is only a convention. The number of labels no longer needs to be stated within the file.

Links for Display Purposes


The second element of the display set is a list of links for display purposes. These will appear in the Workspace Window as green lines joining the markers named. The simplest way to define a link is with a line containing the names of the two markers, separated by a comma:
LKNE,LANK

A closed chain of links may be defined by listing any number of markers on a single line, separated by commas, for example:
SACR, LASI, RASI

This would cause the markers listed to be joined by a ring of links, that is, the last marker listed is joined to the first. This is more compact than listing each individual link on a separate line. The number of links no longer needs to be stated within the file. Vicon determines the type of each line by its content. Line types may therefore be mixed in any order, as long as a label is listed before it is used in a link. Marker lists and link lists from mkr#1 files will work if cut and pasted into mkr#2 files, without the line count number.
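Putting these pieces together, a complete minimal display set might look like this. The set name and descriptions are hypothetical; the labels and links are taken from the examples above.

```
!mkr#2

[LowerBodyDisplay]
SACR Sacrum marker
LASI Left ASIS marker
RASI Right ASIS marker
LKNE Left knee marker
LANK Left ankle marker
SACR,LASI,RASI
LKNE,LANK
```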

Autolabel Sets
A special set named [Autolabel] is used to describe the marker arrangement used in the trials to be labelled. Autolabel sets contain the following elements:
- A list of marker labels.
- A list of labels belonging to rigid body segments.
- Links between rigid body segments.

The lines which make up these elements do not have to be sorted by type, but may be mixed together if it helps to make the file readable, as long as a label is listed before it is used in a segment definition.


List of Marker Labels


The marker list takes the same form as in a display set - one label per line, with a maximum of 30 characters per label, with an optional, space-separated description. In the examples provided, the majority of labels have just four capital letters and no description.

Segment Definition List


The next element is a list of rigid body segments. Each entry in the list contains a group of markers which belong to the same segment, and therefore maintain an approximately fixed separation from each other. The names of the markers belonging to one segment appear on a single line, separated by commas. All the marker names used must also appear in the list of marker labels (i.e. the first element of the autolabel set). Some markers may be included in the definition of more than one segment. Not all the markers listed in the segment definition element need appear in a trial to be labelled. Each segment may be given a name of up to 32 characters, which appears at the beginning of the line, followed by an equals sign, for example:
LeftShank = LKNE, LTIB, LANK
LeftFoot = LANK, LHEE, LTOE

The simplest form of segment definition, without a name and equals sign, is identical to the form of the definition of a ring of links, used in a display set to join several markers together in a closed chain, and has the same effect: the markers listed in a segment definition line will be linked together by green lines in the Workspace window.

One segment may be named Root. The autolabeling process will then begin by identifying this segment. It often makes the process more effective if a Root segment is defined. If no segment is named Root, the process begins with the segment which has the largest number of markers.

Segment Linkage List


The connections between the segments are described using segment link lines, which contain the names of two segments, separated by a comma. The segments must first be listed in the segment definition list. For example:


LeftShank, LeftFoot

Such a segment linkage line indicates that the two segments have a common joint. This is called an explicit link. However, if two segment definitions share a marker, the implication is that the segments are linked at that marker (or on an axis through that marker). This is often the case at elbow or knee joints, for example. Such implied links are detected when the segment definitions are read, and do not need to be repeated in the segment linkage list. If there is a segment with only one marker on it, the name of that marker may be used in place of a segment name.
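A minimal autolabel set built from the segment examples above might therefore read as follows (a hypothetical fragment; note that no explicit segment linkage line is needed, because LeftShank and LeftFoot share the marker LANK and are recognised as linked at it):

```
[Autolabel]
LKNE
LTIB
LANK
LHEE
LTOE
LeftShank = LKNE, LTIB, LANK
LeftFoot = LANK, LHEE, LTOE
```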


5.5.8 mpg File

An mpg file is the compressed video data file (progressive scan or interlaced) created whenever movie data is captured by Vicon; it uses the MPEG file format. An mpg file and its associated trial are created by either:
- Capturing movie data within an open session, using the Trial|Capture... command.
- Importing an existing mpg file into an open session, using the Trial|Import... command.
By default, a new mpg file is given the name of the trial to which it belongs. Each unique new trial name is formed by appending two sequential numeric digits to the name of the session to which it belongs. Thus session Jump may contain trial mpg files Jump01.mpg, Jump02.mpg, and so on. mpg files provide useful reference material when analysing your 3D reconstruction data.


5.5.9 obd File

obd files are used to store the dimensions of fixed objects (e.g. furniture) in the workspace and are referenced in wks files. obd files are created and edited directly, using a text editor such as Notepad. Here is the template for an object:

NumberOfNodeLines NumberOfLinkLines
Node1X Node1Y Node1Z
Node2X Node2Y Node2Z
.
.
NodeNX NodeNY NodeNZ
NodeLineA NodeLineB NodeLineC ... 0
NodeLineD NodeLineE NodeLineF ... 0
.
.
NodeLineX NodeLineY NodeLineZ ... 0

The first line contains 2 integer values, separated by one or more spaces (NOT tabs). NumberOfNodeLines is the number of following lines which define nodes. NumberOfLinkLines is the number of lines, following the node lines, which define links. Don't forget to change NumberOfNodeLines and NumberOfLinkLines when modifying an object.

A Node Line contains 3 real or integer values, defining the X, Y, and Z co-ordinates of the node, separated by one or more spaces (NOT tabs). A Link Line contains any number of integers, defining a continuous link between the nodes defined on the corresponding lines in the first part of the file. A Link Line always ends with a zero (0).
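For illustration, here is a hypothetical obd file describing a 1000 x 500 mm rectangular table top as four nodes joined in a closed loop. This example assumes nodes are numbered from 1 (consistent with 0 acting as the link-line terminator); the dimensions are invented.

```
4 1
0 0 0
1000 0 0
1000 500 0
0 500 0
1 2 3 4 1 0
```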


5.5.10 ses File

IT IS NEVER NECESSARY TO MODIFY A SES FILE USING A TEXT EDITOR.

This file type holds all the details of a session which are not kept in other session data and parameter files. A new ses file is created automatically in a new session sub-directory whenever a new session is started. The ses file is updated whenever trials are created, archived, or deleted, and when session and trial subject and notes are added or edited. A ses file also holds the links between the trials, mkr and wks files held in the session. It also stores information on the movie synchronisation delay.


5.5.11 sp File

An sp file contains the calibration parameters for a subject. These parameters are used whenever the subject is used to autolabel the 3D reconstructed trajectories. sp files are always created, together with a subject.c3d data file if desired, in the current session directory.

THERE ARE NO USER-EDITABLE ENTRIES IN AN SP FILE.

The System | Create Subject... command opens a Create Subject dialog. After a successful subject calibration, when the OK button is clicked, the mean 3D locations of each labelled marker, over the selected range of frames, are saved in a new subject.sp file. This file always contains the most recent subject parameters.

THE PREVIOUS SUBJECT.SP FILE IS OVERWRITTEN.

For successful autolabeling, a valid subject file must be stored in the session in which the trial data is held. This is done in one of the following ways; By using the System | Create Subject... command. Copying an existing subject.sp from an existing session. In each case, the required sp file is copied into the session directory. An example of a sp file is shown below.
!SP#1, wizard
RFWT = { 912.054, -780.487, 907.083 }
STRN = { 778.475, -751.078, 1095.36 }
LFWT = { 645.24, -802.454, 929.611 }
RBWT = { 854.619, -1012.34, 898.435 }
..
..

The first line includes the name of the marker file used when the subject was initially created. This will be loaded from either the current session or \\Vicon\Models\.. when each trial is opened. The order of markers has no significance.
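The marker lines above are simple enough to illustrate with a short script. The sketch below assumes only the 'NAME = { x, y, z }' line format shown in the example; remember that sp files have no user-editable entries, so this is for illustration, not for modifying files, and `parse_sp` is a hypothetical helper, not a Vicon function.

```python
def parse_sp(text):
    """Collect sp marker lines into {name: (x, y, z)}.

    Illustrative sketch only: assumes the '!SP#1, ...' header plus
    'NAME = { x, y, z }' lines shown in the example above.
    """
    markers = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("!SP"):
            continue  # skip blanks and the version/marker-file header
        name, _, rest = line.partition("=")
        coords = rest.strip().strip("{}").split(",")
        markers[name.strip()] = tuple(float(c) for c in coords)
    return markers

sample = """!SP#1, wizard
RFWT = { 912.054, -780.487, 907.083 }
STRN = { 778.475, -751.078, 1095.36 }"""
print(parse_sp(sample)["RFWT"])  # mean 3D location of the RFWT marker
```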

5.5.12 tvd File
A tvd file is the binary unprocessed data file created whenever video data is captured by Vicon. A tvd file and its associated trial are created by either:
Capturing video data within an open session, using the Trial|Capture... command.
Importing an existing tvd file into an open session, using the Trial|Import... command.
Non-trial tvd files are also created during linearisation and calibration.

By default, a new tvd file is given the name of the trial to which it belongs. Each unique new trial name is formed by appending two sequential numeric digits to the name of the session to which it belongs. The user can define their own name in the Trial|Capture dialog box. Thus session Jump may contain trial tvd files Jump01.tvd, Jump02.tvd, and so on. The tvd files, together with the associated cp and lp files, provide the input for 3D reconstruction.
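The default naming rule - session name plus two sequential digits - can be sketched as follows. `next_trial_name` is a hypothetical helper for illustration, not a Workstation function:

```python
def next_trial_name(session, existing):
    """Return the next default trial name: the session name with the
    lowest unused two-digit suffix appended (sketch of the rule only)."""
    n = 1
    while f"{session}{n:02d}.tvd" in existing:
        n += 1
    return f"{session}{n:02d}.tvd"

print(next_trial_name("Jump", {"Jump01.tvd", "Jump02.tvd"}))  # Jump03.tvd
```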

5.5.13 usr File

IT IS NEVER NECESSARY TO MODIFY A USR FILE USING A TEXT EDITOR.

This file type, one of which is generated for each user with the user's name as the filename, holds data file paths and other information for an individual user. usr files are held in the \\Vicon\System\ sub-directory.

5.5.14 vad File
A vad file is the binary unprocessed data file created whenever analog data is captured by Vicon. A vad file and its associated trial are created by either:
Capturing analog data within an open session, using the Trial|Capture... command.
Importing an existing vad file into an open session, using the Trial|Import... command.
By default, a new vad file is given the name of the trial to which it belongs. Each unique new trial name is formed by appending two sequential numeric digits to the name of the session to which it belongs. Thus session Jump may contain trial vad files Jump01.vad, Jump02.vad, and so on. Following 3D reconstruction, when data is saved into a c3d file, analog data from the vad file is copied into the c3d file, together with the current analog parameters.

5.5.15 wks File
The wks file is used to store the placement of fixed objects (for example, furniture) in the Workspace display. wks files should be kept in the \\Vicon\Wkspace\ sub-directory. The default wks file is defined using the File|User Preferences... command. A wks file is created and edited directly, using a text editor such as Notepad.

New wks parameters are loaded into the current trial workspace using the Trial|Attach Workspace... command. Here is an example.
!wks#1
[Floor]
Outline = Xmin Ymin Xextent Yextent      (e.g. -3500 -3500 7000 7000)
Tiles = Xtilesize Ytilesize              (e.g. 500 500)
Orientation = -1 0 0 0 0 1 0 1 0
[Objects]
Count = Objectcount
O1 = ObjectFilename XPosition YPosition ZPosition XScale YScale ZScale XAngle YAngle ZAngle ColourCode(1..7)
O2 = ObjectFilename XPosition YPosition ZPosition XScale YScale ZScale XAngle YAngle ZAngle ColourCode(1..7)
.
.
On = ObjectFilename XPosition YPosition ZPosition XScale YScale ZScale XAngle YAngle ZAngle ColourCode(1..7)

Don't forget to update Count when changing the number of objects in a workspace.

!wks#n is a version identification header which must appear as the first line of a wks file.

[Floor]
The parameters in this group are used to define a floor in the plane z=0, and to set the initial angle of view. Outline has 4 integer value parameters, with one or more spaces (NOT tabs) between.

The first 2 values are the most negative X and Y limits of the floor, in kinematic units (generally millimetres). The second 2 values are the X and Y extents of the floor, in the same units. Outline has a fifth, optional value which defines the floor height; this defaults to 0 if not present. It is useful if the calibration L-frame is placed on a table above the floor surface (set the floor height to a negative value in millimetres). Orientation has 9 parameters, with one or more spaces (NOT tabs) between. These 9 values are the direction cosines which transform the right-handed 3D kinematic workspace into the left-handed display space (x left -> right; y bottom -> top; z into screen). The only one of these combinations which is valid (preserving orthogonality, normality, the right- to left-handed transformation, parallel or anti-parallel horizontal axes, and parallel vertical axes) is kinematic -Y into the screen:

Orientation = -1 0 0 0 0 1 0 1 0

SETTING ORIENTATION TO ANY OTHER COMBINATION WILL RESULT IN UNPREDICTABLE BEHAVIOUR.

[Objects]
The parameters in this group are used to place objects into the workspace display. Count is an integer value giving the total number of objects to be included in the display. Count must be edited when the number of objects, or repeated displays of the same object, is changed. Each object line has the following 11 arguments, with one or more spaces (NOT tabs) between them. The first argument, ObjectFilename, is the name of the obd file containing the object definition. The second, third, and fourth arguments are real or integer values specifying the location of the origin of the object within the kinematic co-ordinate system.


The fifth, sixth, and seventh arguments are real or integer scaling factors applied to each axis of the object, before rotation. The eighth, ninth, and tenth arguments are real or integer values specifying the rotations applied to the scaled object about the kinematic co-ordinate system axes. These rotations are applied about the x, y, and z axes in turn and are anti-clockwise (negative) rotations. The eleventh argument is an integer value specifying the colour used to plot the object in the display. Valid colours are: 1 (white), 2 (green), 3 (magenta), 4 (yellow), 5 (cyan), 6 (blue), and 7 (red).
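Taken together, the 11 arguments of an object line can be unpacked as in the sketch below. The object filename chair.obd, the argument values, and the helper name are all invented for the example:

```python
def parse_object_line(line):
    """Split a wks object line ('On = ...') into its 11 arguments.
    Sketch only -- no error handling; field meanings follow the
    description above."""
    _, _, rest = line.partition("=")
    parts = rest.split()
    return {
        "file": parts[0],                                 # obd object definition
        "position": tuple(float(v) for v in parts[1:4]),  # origin in kinematic axes
        "scale": tuple(float(v) for v in parts[4:7]),     # per-axis scaling
        "angles": tuple(float(v) for v in parts[7:10]),   # rotations about x, y, z
        "colour": int(parts[10]),                         # colour code 1..7
    }

obj = parse_object_line("O1 = chair.obd 100 200 0 1 1 1 0 0 90 2")
print(obj["colour"])  # 2, i.e. green
```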

5.5.16 asf and amc Files - Acclaim Format


The Acclaim file format (as implemented in Vicon) describes two file types (.asf and .amc) used for the storage and transfer of motion captured data between computer systems used in the animation process. This description provides a more detailed explanation of certain key lines in an .asf file and explains how BodyBuilder output uses the format; it is intended to be read in conjunction with the format definition published by Acclaim Technologies. It is written for three groups of users:

Motion capture technologists - to explain the full range of options that the format provides for the storage of motion capture data.

Animators - to understand the properties of characters animated using motion capture data delivered in Acclaim format.

Software designers - to assist in the generation of Acclaim file readers and writers.

The Acclaim format comprises two file types:

.asf - used to define a hierarchical kinematic skeleton for an individual character, made
up of bones with specified lengths, axis orientations, and degrees-of-freedom, and

.amc - containing the motion captured data in the form of sampled values of the skeleton's degrees-of-freedom.

An .asf file is used for building a character, adjusting bone lengths, and attaching a surface mesh, or skin, to the skeleton. A single .asf file can be used for as many moves as that character performs. An .amc file is meaningless without its .asf file. However, an .amc file can, in principle, be applied to any .asf file for which the skeleton has the same degrees-of-freedom. Applying the motion of one character to another will generate odd-looking results unless special software tools are used to adjust for the different sizes and proportions of the characters involved.

Both file types are saved in text format, using universal ANSI character codes.


The Skeleton.
The skeleton, defined in an .asf file, is a set of bones, linked together at joints, to create a hierarchical, articulated structure. A bone has no shape and only one dimension - length. With one exception, explained below, every bone is attached to at least one other. Attachments to a bone are known as its parent or children. A bone can have many children, but only one parent (in other words, there are no closed loops permitted in a skeleton). The captured motion of a bone is specified relative to its parent, as a rotation or a change in length, or a combination of the two. One bone, the root, differs from all others. It has no parent, so its captured motion is specified globally, as a translation and rotation relative to fixed, global axes. The root is the ancestor of all other bones. A root has no length and need have no children. For example, an isolated root might describe the motion of a non-articulated structure such as a ball.

A root always has 6 degrees-of-freedom in the .amc file, 3 components of translation and 3 Euler (see below) components of rotation, specified with respect to the global co-ordinate system. When the root segment translates and/or rotates, all other bones in the model move with the same translation and rotation plus the cumulative effect of relative degrees-of-freedom between bones. Fixed to every bone, including the root, is a set of three orthogonal axes about which the components of the bone's rotation are defined. These local bone axes can be embedded in any direction relative to the bone's length, although there are conventions, described later.

Describing 3D Rotations.
The .asf and .amc format files use Euler (or, more strictly, Cardan) angles to describe all 3-dimensional rotations. Although 3D rotation is not a vector, Euler angles separate a rotation into three numbers, or components. For the Euler components to be reassembled into a 3D rotation, two further related items of information are required: the directions of the axes about which the components of rotation are calculated, and the order of the calculation. Without this information, a total of six possible results for the relative 3D rotation between two bones can be reconstructed from just one set of three Euler angles. Clearly, axis directions and rotation order are extremely important. Embedded axis directions and Euler angle rotation orders are defined in the .asf file, while the Euler angle components are saved in the .amc file.
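The point about rotation order can be demonstrated numerically. A minimal sketch: build rotation matrices about fixed x and z axes and compose the same two 90-degree component rotations in the two possible orders.

```python
import math

def rot_x(a):
    """Rotation matrix for an angle a about the fixed x axis."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    """Rotation matrix for an angle a about the fixed z axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    """3x3 matrix product A.B (B is applied first)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a = math.radians(90)
x_then_z = matmul(rot_z(a), rot_x(a))  # apply x rotation first, then z
z_then_x = matmul(rot_x(a), rot_z(a))  # apply z rotation first, then x
print(x_then_z == z_then_x)  # False: same angles, different 3D rotations
```

The same three angle values produce different final orientations, which is why the .asf file must record both the axis directions and the rotation order.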

5.5.17 The .asf File

Key to the .asf examples given below:

xxx                  alphanumeric string
0 or 1               integer number
0.00 or 1.00         integer or real, in decimal or exponentiated form
inf and -inf         treated as valid numbers
xyz                  token selected from: xyz, yzx, zxy, zyx, yxz, xzy
global               token set to global or local
rad                  token set to rad or deg
tx ty tz rx ry rz    string of tokens, all mandatory but with variable order
{tx} {ty} {tz} {rx} {ry} {rz} {l}
                     string of tokens, all optional and with variable order
{ }                  items in curly brackets are optional
...                  repeat the pattern
#                    indicates that everything which follows on the line is treated as comment
, and ( )            commas and parentheses are treated as whitespace

An .asf file is divided into sections. The start of a new section is marked by a colon : at the beginning of a new line, followed immediately by the section name keyword. Within a section there may be additional lines starting with a keyword without a colon.
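That rule - a colon at the start of a line opens a section, and # starts a comment - is easy to sketch in a few lines of Python. Illustrative only; a real .asf reader must also treat commas and parentheses as whitespace, which this sketch omits.

```python
def split_sections(text):
    """Group .asf lines under their ':' section keywords.
    Sketch: '#' comments are stripped; blank lines are ignored."""
    sections, current = {}, None
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].rstrip()  # everything after '#' is comment
        if not line.strip():
            continue
        if line.startswith(":"):              # ':' at line start -> new section
            current = line[1:].split()[0]
            sections[current] = [line]
        elif current is not None:
            sections[current].append(line.strip())
    return sections

asf = ":version 1.00\n:name skeleton1\n:units\n  angle deg  # degrees, not radians"
print(sorted(split_sections(asf)))  # ['name', 'units', 'version']
```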


The .asf Header Sections.


There are three mandatory and three optional header sections in an .asf file, as shown in the following example:

:version 1.00
:name xxx
:axis-rotation global
{:skin xxx}
{:units {mass 1.00} {length 1.00} {angle rad}}
{:documentation}

The header sections are used in the following way.


:version

This is the version number of the .asf file format.


:name

This is the name allocated to the .asf file (50 characters max.).


:axis-rotation

This indicates whether the components of rotation listed in the axis lines in the :root and :bonedata sections are applied about either fixed global axes, or embedded local axes. Changing :axis-rotation from global to local (or vice versa) is equivalent to reversing the rotation order token in every axis line in the .asf file. If the :axis-rotation token is absent, the default setting is global.

mass

This is a scale factor to convert internal file units to required output units. For example, 2.20 = data in lb, local units kg.
length

This is a scaling factor to convert internal file units to the required output units. It converts the length quantities in the .amc and .asf files into inches. The multiplier is the number of Acclaim units per inch; therefore, divide the numbers being read by the length multiplier to get inches. For example, 2.54 = data in cm, local units in inches.


angle

This indicates whether angles are in radians (rad) or degrees (deg).


:documentation

This series of text lines is ignored by any program reading the file. It can therefore be used for any notes to be included in the file. This section ends when the next section begins.

The .asf Root Section.


Every .asf skeleton file must have one, and only one, :root section. The root bone (segment) is the only one defined in the .asf file for which global translations and rotations are recorded in the motion capture (.amc) file. The :root section is used to define the following:
The base position of the root and the directions of its axes relative to the global co-ordinate system.
The order of the Euler angle components of the root rotation degrees-of-freedom.
All other bones are moved relative to their immediate parent in the hierarchy. In the complete Acclaim specification, the position and orientation of the root can be pre-set, which has the effect of offsetting all the motion captured data by the same amount.

The :root section in the .asf file always contains four keyword lines, as shown in the following example:

:root
   position 0.00 0.00 0.00
   orientation 0.00 0.00 0.00
   axis XYZ
   order TX TY TZ RX RY RZ

The keyword lines are used to describe the following information.


position

This is the pre-set base position of the root, measured along global axes.


BodyBuilder always sets position to zero, to avoid the confusion which exists between certain CG packages about whether this pre-set applies to fixed (global) or moving (root) axes.

orientation

This is the pre-set base orientation of the root axes, measured about global axes. BodyBuilder always sets orientation to zero, for the reason given above.
axis XYZ

The axis line token indicates the rotation order for the root pre-set orientation. When the pre-set orientations are zero (see above), this token has no effect.
order

The order line tokens determine the order in which root translations and rotations are written in the amc file and the sequence in which root rotations are calculated about fixed, global axes. The translation and rotation tokens may be interleaved in any sequence, but it is usual for translations to precede rotations. All six tokens must be present. Softimage only understands the sequence TX TY TZ RZ RY RX

The variables position, orientation, and axis allow base offsets to be applied to the position and rotation of the skeleton, through movements of the root segment. For attaching a "skin", it is generally most convenient to have the skeleton displayed in CG software at the global origin, with the root segment aligned with global axes. To achieve this, position and orientation must be set to zero.

The .asf Bonedata Sections


An .asf skeleton file can have any number of bones defined in the :bonedata section. Each bone has a parent, as defined in the :hierarchy section of the file. Individual bone definitions within the :bonedata section each have 6 mandatory and up to 11 optional lines. Every line, except the second and subsequent limits lines, starts with a keyword.


:bonedata
begin
   {id 1}
   name xxx
   direction 0.00 0.00 1.00
   length 1.00
   {bodymass 1.0}
   {cofmass 1.0}
   axis 0.00 0.00 0.00 XYZ
   {dof {tx} {ty} {tz} {rx} {ry} {rz} {l}}
   {limits 0.00 1.00
           0.00 1.00 }
end

The keyword lines are used to describe the following information.


begin

This marks the start of an individual bone definition.


name

This is the alphabetic identifier for the bone's degrees-of-freedom in the .amc file. This name must match (case-insensitive) the name of the corresponding segment in the associated BodyLanguage model (mod) file [refer to the BodyBuilder manual]. Softimage accepts only lower case bone names.

id

This is an optional number assigned to the bone.


direction 1.0 0.0 0.0

In an .asf skeleton, the position (origin) of a bone is determined by a vector offset from the position (origin) of the bone's parent, in the parent's co-ordinate system. The direction line indicates the unit vector along which the length of the bone is measured.
length

This is the initial length of the bone's position vector. It can be modified by a motion-captured length degree-of-freedom, but if no l dof (see below) is used, the length remains constant. The product of length and direction is a vector representing the bone.
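In other words, the bone's offset from its parent's origin is simply the scalar length times the unit direction vector, which one line of Python makes explicit (values here are invented):

```python
def bone_vector(direction, length):
    """Bone offset from its parent's origin: length times the unit
    direction vector, expressed in the parent's co-ordinate system."""
    return tuple(length * d for d in direction)

print(bone_vector((0.0, 0.0, 1.0), 7.5))  # (0.0, 0.0, 7.5)
```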

bodymass

This is the optional mass of the skin body associated with the bone.


cofmass

This is the optional position of the centre-of-mass along the bone.


axis 0.0 0.0 0.0 XYZ

This is a set of rotations used to define the base pose direction of the embedded axes of the bone. Rotations are made about global or bone-embedded axes, according to the setting of the :axis-rotation token in the .asf header section (see above). By preference, these axes should be aligned so that: a) one axis is aligned with the direction of the bone; b) if a bone is connected to its parent by a hinge joint, one local axis of the parent is aligned with the axis of the hinge. BodyBuilder, plus all CG software except Nichimen, interprets the axis order token to indicate the sequence of rotations about fixed, global axes. Nichimen interprets these rotations to be about moving, root axes.

After initialisation, bone axes are fixed to the bone and move with it. In the Acclaim format definition, a bone's axes can have any initial orientation, including global (axis 0.0 0.0 0.0 XYZ). However, it is much more convenient for subsequent CG interaction if one of the bone's axes is aligned with its position vector. BodyBuilder automatically ensures that this is so.

dof rx ry rz l

The degrees-of-freedom for translations (tx, ty, tz), rotations (rx, ry, rz), and length (l) between the bone and its parent. The dof line is optional. If it is absent, the bone is fixed to its parent. If it is present, at least one degree-of-freedom token must be included.

The dof line tokens determine the following:
The degrees-of-freedom that exist between a bone and its parent.
The order in which rotation degrees-of-freedom are applied; they are applied sequentially about the axes of the bone's parent.
The order in which these degrees-of-freedom are written in the .amc file.

The inclusion of the l token indicates that the length of the bone varies. The length values in the .amc file replace the initial .asf bone length. Although the Acclaim definition refers to l as the stretch degree-of-freedom, it measures absolute length rather than a relative length change. Softimage ignores l degrees-of-freedom.

THE KINEMATION ACCLAIM TRANSLATOR CRASHES IF THE l DEGREE-OF-FREEDOM IS INCLUDED.

limits

This is the set of bounds (upper and lower) for each degree-of-freedom present. When more than one degree-of-freedom is defined, each pair of bounds appears on a new line after the limits line. When a generalised rotation is specified by three ordered rotation angles, if all three angles have the range of -180 to 180 degrees (or 0 to 360), there is no unique set of angles that corresponds to the resulting rotation. For this reason, it is recommended that the second rotation degree-of-freedom is restricted to the range -90 to 90 degrees.

end

This marks the end of an individual bone definition.

The .asf Hierarchy Section.


The complete skeleton is formed by the parent-child interconnection of bones, starting from the root. The order of interconnection is defined in the :hierarchy section of the .asf file by a series of lines, each starting with a parent, and continuing with its children.


An example is shown below.


:hierarchy
begin
   root bone_1 bone_2 bone_3
   bone_1 bone_4 bone_5
   bone_3 bone_6 bone_7
end

The keyword lines in the hierarchy section are used to describe the following information:
begin

This marks the start of the :hierarchy section.


root

This is the mandatory parent on the first hierarchy line; here it is the parent to bone_1, bone_2, and bone_3.
bone_1

This is the parent to bone_4 and bone_5.


bone_3

This is the parent to bone_6 and bone_7.

5.5.18 The .amc Header Section


An .amc file starts with a number of optional header lines, followed by the main body of the file, containing degree-of-freedom data. All .amc header lines are optional.
{:asf-file xxx}
{:asf-path xxx{;{;}}}
{:samples-per-second 1}
{:smpte xxx}
{:sample-count 1}
{:fully-specified}

The optional header lines describe the following information.


:asf-file

This is the name of the .asf file with which this .amc file is associated.
:asf-path

This is the Windows or UNIX pathname(s) to the .asf file. \ and / within pathnames are interchangeable. Multiple paths with ; separators are searched in order. :asf-path defaults to the path of the .asf file.
:samples-per-second

This is the sample rate of the degrees-of-freedom data.


:smpte

This is the SMPTE code for the start of the degree-of-freedom data.


:sample-count

This is the total number of degree-of-freedom samples in the .amc file. Since :sample-count is optional, all .amc readers must be able to read files from which it is absent, by parsing the entire file and counting samples.

:fully-specified

This indicates that, in the degrees-of-freedom section, a) the sample number appears at the start of every block of samples and b) the bone name appears at the start of every degree-of-freedom line.
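A reader that tolerates a missing :sample-count can simply count the bare sample-number lines, as in this sketch (it assumes sample numbers are the only single-token integer lines after :degrees; `count_samples` is a hypothetical helper):

```python
def count_samples(lines):
    """Count .amc sample blocks by counting bare integer sample-number
    lines after the :degrees marker (sketch for files that omit the
    :sample-count header)."""
    count, in_data = 0, False
    for line in lines:
        parts = line.split()
        if parts and parts[0] == ":degrees":
            in_data = True                    # data section starts here
        elif in_data and len(parts) == 1 and parts[0].isdigit():
            count += 1                        # a bare sample number
    return count

amc = [":degrees",
       "1", "root 0 0 0 0 0 0", "bone_1 10",
       "2", "root 0 0 0 0 0 0"]
print(count_samples(amc))  # 2
```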


The .amc Degree-Of-Freedom Section.


:degrees
1
root 1.0 1.0 1.0 1.0 1.0 1.0
bone_1 1.0{ 1.0{ 1.0{ 1.0}}}
bone_2 1.0{ 1.0{ 1.0{ 1.0}}}
bone_3 1.0{ 1.0{ 1.0{ 1.0}}}
2
root 1.0 1.0 1.0 1.0 1.0 1.0
bone_1 1.0{ 1.0{ 1.0{ 1.0}}}
bone_2 1.0{ 1.0{ 1.0{ 1.0}}}
bone_3 1.0{ 1.0{ 1.0{ 1.0}}}
3
root 1.0 1.0 1.0 1.0 1.0 1.0
bone_1 1.0{ 1.0{ 1.0{ 1.0}}}
bone_2 1.0{ 1.0{ 1.0{ 1.0}}}
bone_3 1.0{ 1.0{ 1.0{ 1.0}}}

Each line in the degrees-of-freedom section describes the following information.


:degrees

This indicates the start of the degrees-of-freedom sample blocks.


1, 2, 3

This indicates the integer sample number of the following block of degrees-of-freedom. Consecutive sample numbers must increase monotonically, but may differ by any integer value.
root

This is a line containing the 6 mandatory degrees-of-freedom of the root, in the order indicated in the .asf file. THE ROOT LINE MUST BE PRESENT IN EVERY SAMPLE.


bone_n

This is a line containing between 1 and 7 degrees-of-freedom for the named bone. The presence and order of degrees-of-freedom is indicated in the .asf file. Every bone name must be present in the first sample block but bones, other than the root, may be omitted from subsequent samples. The degrees-of-freedom of omitted bone samples are assumed to remain unchanged. Every degree-of-freedom specified in the .asf file for a bone must be included in every sample line for that bone.
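The carry-forward rule for omitted bones can be sketched as follows; `read_samples` is a hypothetical reader that keeps the last seen values for any bone missing from a sample block:

```python
def read_samples(lines, bone_names):
    """Read .amc sample blocks into a list of {bone: [dof values]},
    carrying the previous values forward for bones omitted from a
    sample. Sketch only -- assumes the simple layout shown above."""
    samples, current = [], {}
    for line in lines:
        parts = line.split()
        if len(parts) == 1 and parts[0].isdigit():  # new sample number
            if current:
                samples.append(dict(current))       # close previous block
            continue
        if parts and parts[0] in bone_names:
            current[parts[0]] = [float(v) for v in parts[1:]]
    if current:
        samples.append(dict(current))               # close final block
    return samples

lines = ["1", "root 0 0 0 0 0 0", "bone_1 45",
         "2", "root 1 0 0 0 0 0"]                   # bone_1 omitted in sample 2
s = read_samples(lines, {"root", "bone_1"})
print(s[1]["bone_1"])  # [45.0] -- carried forward from sample 1
```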

5.5.19 Character Studio 2.0 Motion Capture (csm) File Format


This document describes the csm Character Studio 2.0 Motion Capture file format. A csm file is an ASCII file used to import positional marker data from various motion capture systems into Character Studio 2.0 to animate bipedal characters. Character Studio 2.0 comprises two plug-ins for 3D Studio MAX: Biped and Physique. The Biped plug-in provides for creation and high-level articulation of biped character "bone" systems, while Physique manages skin deformation and behavior, based on bone postures.

Biped 2.0 provides direct input of csm files from disk, including comprehensive keyframe reduction, footstep extraction, and talent figure and pose calibration, to provide a fast and accurate means of import for large volumes of positional data stored in the csm format. Biped 2.0 uses positional data stored in the csm format to pose a Biped character on a frame-by-frame basis and to move it forward in time. During import, xyz marker positions stored in the csm file are used to derive bone rotation data for the biped at each frame, eliminating the need to convert positional marker data into rotational form prior to import. Once imported, csm-based animations can be saved out as native Biped bip files, providing access to a comprehensive set of animation, motion mapping, and structural modification features that are built directly into Biped 2.0.

Overview of csm-Supported Marker Attachment Positions


csm marker values are time-varying xyz locations which are typically generated by reflective markers and optical motion capture systems. These xyz values may also be derived from other motion capture sensing devices that generate positional data in a world-coordinate system. Biped 2.0 supports a fixed set of known markers that correspond to reflective markers or sensors attached to different parts of a human "talent" subject during a motion capture session. Biped's "superset" of supported markers was chosen to cover typical marker/sensor configurations used by various motion capture service companies and hardware vendors. The complete set of Biped-supported marker names and attachments is listed below. Optional markers are noted; these optional markers are not needed for import, but do contribute to a more accurate solution, if present. All other markers are required for Biped to read a csm file successfully.

Character Studio Supported Marker Names and Attachments


Head
LFHD  Left Front Head
LBHD  Left Back Head
RBHD  Right Back Head
RFHD  Right Front Head

Chest
CLAV  Top Chest
STRN  Center Chest

Waist
LFWT  Left Front Waist
LBWT  Left Back Waist
RBWT  Right Back Waist
RFWT  Right Front Waist

Spine
C7    Top of Spine
T10   Middle of Back
SACR  Lower Back (optional)

Left Leg
LKNE  Left Outer Knee
LKNI  Left Inner Knee (optional)
LANK  Left Outer Ankle
LHEL  Left Heel (optional)
LMTS  Left Outer Metatarsal
LMTI  Left Inner Metatarsal (optional)
LTOE  Left Toe

Right Leg
RKNE  Right Outer Knee
RKNI  Right Inner Knee (optional)
RANK  Right Outer Ankle
RHEL  Right Heel (optional)
RMTS  Right Outer Metatarsal
RMTI  Right Inner Metatarsal (optional)
RTOE  Right Toe

Left Arm
LSHO  Left Shoulder
LELB  Left Outer Elbow
LILB  Left Inner Elbow (optional)
LWRE  Left Wrist Stick End "Outer"
LWRI  Left Wrist Stick Base "Inner"
LWRA  Left Wrist Inner near thumb (alternative to LWRE)
LWRB  Left Wrist Outer opposite thumb (alternative to LWRI)
LFIN  Left Hand

Right Arm
RSHO  Right Shoulder
RELB  Right Outer Elbow
RILB  Right Inner Elbow (optional)
RWRE  Right Wrist Stick End "Outer"
RWRI  Right Wrist Stick Base "Inner"
RWRA  Right Wrist Inner near thumb (alternative to RWRE)
RWRB  Right Wrist Outer opposite thumb (alternative to RWRI)
RFIN  Right Hand

csm files that utilise Biped 2.0 marker names exactly as shown above may be read into Biped 2.0 directly. Additionally, for convenience, the csm format supports custom marker names for the known marker positions described above. These custom marker names must be correlated to the preset marker names listed above using a Marker name file (MNM). That is, the Left Shoulder marker need not be named "LSHO", as long as the csm string "myleftshoulder" is matched to "LSHO" using the separate Marker name file MNM. For more details on the MNM file format, see below.

Understanding Wrist Marker Alternatives


For specification of wrist markers, the csm file may contain either of two types of wrist marker attachments (but not both). For the first method, the wrist position/orientation may be specified by the use of a "stick" attached to the wrist at a 90 degree angle to the forearm, centered at the wristwatch position. These markers are used to describe the stick position:
LWRE  Left Wrist Stick End
LWRI  Left Wrist Stick Base "Inner"
RWRE  Right Wrist Stick End
RWRI  Right Wrist Stick Base "Inner"

For the second method, the RWRA/RWRB and LWRA/LWRB markers define the alternative markers that are on the inner and outer positions of the wrist (via a wristband). The RWRA/LWRA markers are on the inside (closer to the thumb).

LWRA  Left Wrist Inner near thumb (alternative to LWRE)
LWRB  Left Wrist Outer opposite thumb (alternative to LWRI)
RWRA  Right Wrist Inner near thumb (alternative to RWRE)
RWRB  Right Wrist Outer opposite thumb (alternative to RWRI)

Overall csm File Structure


The csm file is broken down into the following sections, each introduced by a $section section title. These sections must occur in the following order. The $section titles may be upper or lowercase. The [ ] notation shown below indicates a required numeric value or string of the specified type (the [ ] notation gets replaced by an actual value or string in the file). The [ ] information may occur on the same line as the $section title.
$comments [string] OPTIONAL

Comments. Continued until the next $section title.


$firstframe [ integer ] OPTIONAL

The first frame of point data present.


$lastframe [ integer ] OPTIONAL

The last frame of point data present.


$rate [ integer ] OPTIONAL

The sample rate of the point data contained in the $point section below, computed in samples per second.
$spinelinks [ integer ] OPTIONAL

Indicates how many links are desired in converted biped (default is 3, range is 2-3).
$order [name1 name2 name3 ... nameN] REQUIRED

The order of the point/marker data contained in the $points section which follows. Each string name corresponds to a capture marker/point name. It is assumed that a value is provided for every point (not necessarily a measured marker) in every frame that is present. If there is any mismatch between the number of items contained in this $order section and the number of items in the $points section, then it is assumed that the file has been incorrectly generated and the import will be aborted. Each marker is assumed to have 3 values X, Y, and Z, specified in that order.

NOTE: MARKER NAMES CAN BE OF ANY LENGTH BUT MUST NOT CONTAIN COMMAS (,).


Marker names should match Biped 2.0 supported marker names by default. If custom marker names are used, they must be correlated to Biped 2.0 supported marker names using the Marker Name Mapping file (MNM) - see below.

$points

[ firstframe#_integer
  name1_float_x name1_float_y name1_float_z
  name2_float_x name2_float_y name2_float_z
  name3_float_x name3_float_y name3_float_z
  ...
  nameN_float_x nameN_float_y nameN_float_z ]
[ secondframe#_integer
  name1_float_x name1_float_y name1_float_z
  name2_float_x name2_float_y name2_float_z
  name3_float_x name3_float_y name3_float_z
  ...
  nameN_float_x nameN_float_y nameN_float_z ]
...
[ Nthframe#_integer
  name1_float_x name1_float_y name1_float_z
  name2_float_x name2_float_y name2_float_z
  name3_float_x name3_float_y name3_float_z
  ...
  nameN_float_x nameN_float_y nameN_float_z ]

The following information is required: a list of the X Y Z translations of the markers defined in the $order statement above, where the first item on each line is the frame number. All data is specified in XYZ order, in a Z-up, right-handed axis system. The first frame number cannot be assumed to always be 1 (see the $firstframe field). Where an invalid co-ordinate value exists, the file will contain an empty field (i.e. ,,). The presence of an invalid co-ordinate should not stop the import; the reader should simply skip to the next field. The data section will be complete: it will contain a line for every frame between $firstframe and $lastframe, and frames will be ordered in ascending order.


Each frame is delimited with a carriage return or line feed (CR or LF). The end of the $point section is reached either when the end of file is found or a new section indicator ($) is found. Units are millimeters.
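The frame-line rules above can be sketched as a small reader. Python is our choice of language (the manual specifies none), and parse_points_line is a hypothetical helper; since the spec shows ",," for an invalid co-ordinate while the example files use spaces, this sketch assumes numbers are whitespace-separated and commas appear only around invalid fields.

```python
def parse_points_line(line):
    """Parse one $points frame line (sketch, not a Vicon API).

    An empty field between two commas (",,") marks an invalid
    co-ordinate and becomes None rather than aborting the import.
    """
    values = []
    for piece in line.strip().split(","):
        piece = piece.strip()
        if piece == "":
            values.append(None)  # invalid co-ordinate: skip, don't abort
        else:
            values.extend(float(tok) for tok in piece.split())
    frame = int(values[0])       # first item on each line is the frame number
    coords = values[1:]
    # group into (X, Y, Z) triplets, one per marker named in $order
    return frame, [tuple(coords[i:i + 3]) for i in range(0, len(coords), 3)]
```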

General Syntax Rules


All sections must appear in the specified order. Any unrecognised $sections are ignored. All blank lines are ignored. Leading white space on any line is ignored.

Character Studio 2.0 Marker Name File (MNM)


The Character Studio 2.0 Marker Name file is used to match custom marker names in the csm file with Biped 2.0's preset list of known, supported marker names. For a complete list of supported marker names, see above. The general syntax of the MNM file consists of two columns of names. The left column corresponds to Biped's known marker names. The column on the right corresponds to the custom marker names stored in a specific csm file. There should be an entry for every custom marker name in the csm file. For completeness, all csm markers can be listed and correlated to internal Biped names, even if the names are identical. The listing below is an "identity" listing of fixed versus custom marker names (the left and right columns are identical). Optional (missing) marker name entries not in the csm file can be omitted from the MNM marker file.

This list can be easily copied and modified (via the right column) for each unique motion capture session sample.

LFHD LFHD
LBHD LBHD
RBHD RBHD
RFHD RFHD
C7 C7
T10 T10
SACR SACR
CLAV CLAV
STRN STRN
LSHO LSHO
LELB LELB
LILB LILB
LWRE LWRE
LWRI LWRI
LFIN LFIN
RSHO RSHO
RELB RELB
RILB RILB
RWRE RWRE
RWRI RWRI
RFIN RFIN
LFWT LFWT
LBWT LBWT
RBWT RBWT
RFWT RFWT
LKNE LKNE
LKNI LKNI
LANK LANK
LHEL LHEL
LMT5 LMT5
LMTI LMTI
LTOE LTOE
RKNE RKNE
RKNI RKNI
RANK RANK
RHEL RHEL
RMT5 RMT5
RMTI RMTI
RTOE RTOE
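Reading the two-column format above is straightforward. As a rough sketch in Python (our choice; load_mnm is our own name, not part of any Character Studio or Vicon API), assuming one whitespace-separated pair per line:

```python
def load_mnm(lines):
    """Read Character Studio MNM marker-name pairs (sketch).

    Each non-blank line holds the Biped 2.0 name (left column) and
    the custom csm name (right column). Returns a dict mapping
    custom names back to Biped names.
    """
    mapping = {}
    for line in lines:
        parts = line.split()
        if len(parts) == 2:
            biped, custom = parts
            mapping[custom] = biped
    return mapping
```

A non-identity pair such as "LWRE LWRA" maps the custom name LWRA back to Biped's LWRE.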

Example csm and mnm


This section provides examples of a corresponding pair of .csm and .mnm files: fashion1.csm and fashion.mnm.


-----------fashion1.csm-----------
$comments This is a Character Studio 2.0 csm File
$firstframe 1
$lastframe 3
$spinelinks 3
$rate 60
$order C7 CLAV LANK LBHD LBWT LELB LFHD LFIN LFWT LKNE LSHO LTOE LUPA LWRA LWRB RANK RBHD RBWT RELB RFHD RFIN RFWT RKNE RSHO RTHI RTOE RWRA RWRB STRN T10
$points
1 980.6 -2365.8 1541.3 1030.6 -2239.9 1492.1 967.3 -2427.5 181.8 936.2 -2330.1 1672.9 939.4 -2359.1 1109.3 782.9 -2263.3 1175.7 959.0 -2196.9 1753.8 762.3 -2135.1 862.6 949.3 -2187.2 1082.1 934.2 -2343.2 583.1 870.4 -2246.7 1525.4 1031.7 -2339.7 83.1 814.6 -2218.7 1318.8 799.4 -2132.3 993.7 729.2 -2230.2 966.5 1110.1 -2060.1 197.3 1066.8 -2351.6 1670.5 1114.0 -2391.4 1116.4 1290.5 -2574.6 1396.5 1105.7 -2231.2 1740.1 1217.5 -2369.8 1149.5 1159.8 -2233.6 1093.8 1138.8 -2119.2 605.9 1136.6 -2342.8 1564.9 1081.3 -2126.7 775.0 1065.2 -1944.5 112.2 1276.8 -2398.1 1271.2 1294.0 -2490.1 1183.6 1044.2 -2199.9 1395.9 988.1 -2404.3 1417.4
2 979.8 -2358.9 1540.1 1029.4 -2232.4 1489.5 966.5 -2426.7 181.8 936.2 -2322.0 1671.1 940.7 -2351.6 1107.3 783.9 -2249.8 1173.7 959.4 -2189.0 1752.6 767.9 -2113.2 863.4 948.9 -2177.9 1080.7 934.4 -2338.9 583.3 871.7 -2242.9 1523.4 1031.7 -2339.5 82.9 814.6 -2207.6 1317.8 803.3 -2113.8 994.5 732.4 -2209.8 964.1 1108.3 -2039.1 197.5 1067.2 -2343.2 1669.3 1115.0 -2382.1 1115.0 1292.6 -2563.1 1400.3 1106.5 -2223.7 1738.5 1212.0 -2367.2 1155.5 1162.2 -2222.1 1093.8 1140.2 -2119.4 601.9 1135.4 -2334.3 1563.9 1079.9 -2109.1 774.4 1064.5 -1921.3 116.0 1278.4 -2396.2 1273.8 1284.5 -2483.4 1186.2 1043.6 -2192.5 1394.0 987.5 -2397.7 1415.2
3 979.6 -2351.7 1539.3 1029.2 -2224.5 1487.1 966.1 -2426.1 181.6 936.4 -2313.5 1669.3 942.7 -2343.0 1105.1 784.3 -2236.2 1171.7 959.4 -2180.8 1750.6 773.8 -2090.6 864.4 949.3 -2168.6 1078.7 934.2 -2334.3 582.5 871.3 -2235.2 1521.8 1031.3 -2339.1 82.5 814.4 -2196.5 1315.4 808.1 -2095.0 995.7 736.1 -2189.2 962.5 1107.1 -2019.5 197.9 1067.8 -2334.5 1668.0 1116.2 -2372.6 1113.2 1294.4 -2555.6 1402.1 1105.7 -2214.4 1736.9 1206.2 -2358.5 1162.0 1163.4 -2214.2 1091.6 1140.2 -2115.6 598.3 1134.8 -2325.0 1562.3 1080.3 -2088.1 776.8 1058.9 -1897.7 119.9 1279.2 -2387.4 1276.6 1275.4 -2474.9 1189.0 1042.8 -2185.0 1391.8 987.7 -2391.0 1415.2
-----------fashion.mnm-----------
LFHD LFHD
LBHD LBHD
RBHD RBHD
RFHD RFHD
C7 C7
T10 T10
SACR SACR
CLAV CLAV


STRN STRN
LSHO LSHO
LELB LELB
LILB LILB
LWRE LWRA
LWRI LWRB
LFIN LFIN
RSHO RSHO
RELB RELB
RILB RILB
RWRE RWRA
RWRI RWRB
RFIN RFIN
LFWT LFWT
LBWT LBWT
RBWT RBWT
RFWT RFWT
LKNE LKNE
LKNI LKNI
LANK LANK
LHEL LHEL
LMT5 LMT5
LMTI LMTI
LTOE LTOE
RKNE RKNE
RKNI RKNI
RANK RANK
RHEL RHEL
RMT5 RMT5
RMTI RMTI
RTOE RTOE



5.6 Glossary
Glossary for the Animation Pipeline.

Acclaim
File formats .ast, .asf and .amc were devised by Acclaim Corp. for the description of kinematic models and motion data.

Aperture
The opening of a lens that controls the amount of light reaching the surface of the sensor. The size of the aperture is controlled by the iris adjustment. By increasing the f-stop number (f/1.4, f/2.8, f/4.0, etc.) less light is permitted to pass to the pickup device.

Assisted Labeling
A feature of VICON Workstation software which enables the system to identify most, or sometimes all, the trajectories in a trial, once an initial static trial of data from the same session has been manually labelled.

Autolabeling
The command Autolabel (on the Trial Menu) invokes the assisted labeling function without any user interaction.

Blooming
The defocusing of regions of a picture where the brightness is excessive.

BodyLanguage
The language in which the kinematic models for BodyBuilder are written.


Cartesian frame of reference


A system of three perpendicular axes which meet at an origin. The position of any point relative to these axes can be specified by an ordered triplet of numbers.

CCD - Charge Coupled Device


A device that stores samples of analog signals used in cameras as an optical scanning mechanism. Advantages include good sensitivity in low light and absence of burn-in and phosphor lag found in CRTs.

CG - Computer Graphics
The generic name for the production of moving pictures by computer rather than by cine photography.

Character Studio
A plugin character animation package for 3DSMax. Consists of two plugins: the first generates a configurable, human-like IK chain which can be used with raw Vicon data; the second skins the character with configurable muscle bulge and similar effects.

Current field
The field displayed in the Workspace window. The number of the current field is shown under the replay slide bar. May be referred to as Current Frame or Current Sample.

Depth of Field
The front to back zone in a field of view which is in focus in the televised scene. With a greater depth of field, more of the scene (near to far) is in focus.

Dongle
A hardware key which must be plugged into a computer before any Vicon software product can be run. Supplied to licensed users by Oxford Metrics Ltd.


Field and Frame


A time-sample of kinematic data. In standard video terminology, a "frame" is composed of one even- and one odd-numbered field. In kinematic modelling, as in cine film, the distinction between even and odd fields is lost, and the word "frame" is often used interchangeably with "field".

FMV
Full motion video, a variety of CG animation used in games for animation outside interactive play, where a higher degree of complexity is required.

Focal Length
The distance from the centre of the lens to the plane at which a sharp image of an object viewed at an infinite distance from the camera is produced; that is, the distance from the centre of the lens to the pickup device. The focal length determines the size of the image and the angle of the field of view seen by the camera through the lens.

Frame
In an interlaced video standard such as those used in television broadcasting, alternate image fields contain either even-numbered lines or odd numbered lines. A frame is a complete image made up of one even and one odd field. In kinematic modeling and in cine film, there is no distinction between even and odd fields.

Gait Analysis
The study of human movement for medical purposes.

Genlock
Genlock is a process of sync generator locking. This is usually performed by introducing a composite video signal from a master source to the subject sync generator. The generator to be locked has circuits to isolate vertical and horizontal drive.


Global Point
A point which is defined in terms of the Vicon reference axes, which are constant (i.e. static) throughout a trial, rather than a local frame of reference that moves with a particular body segment.

Interpolation
Filling a gap in a trajectory by inventing new points. Small gaps in smooth trajectories may be interpolated reliably. Large gaps and trajectories which are not smooth are more difficult to interpolate, and the results may be unrealistic. Interpolation should be used with caution.
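The caution above can be built into the procedure itself. Here is a minimal sketch in Python (our choice of language; interpolate_gaps and the max_gap cutoff are our own assumptions, not Vicon's) that fills only short gaps by linear interpolation and leaves large ones alone:

```python
def interpolate_gaps(series, max_gap=5):
    """Fill short runs of None in a co-ordinate time-series by
    linear interpolation between the bracketing measured values
    (sketch). Gaps longer than max_gap frames, or gaps at either
    end of the series, are left unfilled as unreliable.
    """
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            start = i
            while i < len(out) and out[i] is None:
                i += 1
            gap = i - start
            # need measured values on both sides and a small gap
            if 0 < start and i < len(out) and gap <= max_gap:
                a, b = out[start - 1], out[i]
                for k in range(gap):
                    out[start + k] = a + (b - a) * (k + 1) / (gap + 1)
        else:
            i += 1
    return out
```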

Forward Kinematics
Solving the equations of motion of a multi segment body by direct measurement of the position and velocity of each segment, which is a fully-determined problem with a unique solution.

Inverse Kinematics
Deducing the movements of segments from the desired overall result rather than measured motion. For example, deducing the movements of leg segments from the requirement that the feet do not move relative to the floor. This is an inverse problem without a unique solution.
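The forward case can be illustrated with a planar two-segment chain, for example thigh and shank. This is a sketch in Python (our choice; the function name and the two-link simplification are ours), showing why direct measurement gives a unique answer:

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """Planar two-segment forward kinematics (sketch).

    Given measured segment lengths and joint angles (radians),
    return the position of the joint between the segments and of
    the end of the chain: a fully determined problem with a
    unique solution.
    """
    joint = (l1 * math.cos(theta1), l1 * math.sin(theta1))
    end = (joint[0] + l2 * math.cos(theta1 + theta2),
           joint[1] + l2 * math.sin(theta1 + theta2))
    return joint, end
```

The inverse problem, recovering theta1 and theta2 from the end position alone, generally has more than one solution (e.g. "elbow up" versus "elbow down"), which is why inverse kinematics needs extra constraints.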

Iris
The amount of light transmitted through a lens is controlled by an adjustable diaphragm, or iris, located in the lens barrel. The opening is referred to as the aperture, and the size of the aperture is controlled by rotating the aperture control ring on the lens barrel. The graduations on the lens barrel are expressed in terms of the focal length of the lens divided by the diameter of the aperture at that setting. This ratio is called the f-number.
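The graduation arithmetic above is a one-liner. As a sketch in Python (our choice of language; f_number is our own helper name):

```python
def f_number(focal_length_mm, aperture_diameter_mm):
    """The graduation shown on the lens barrel: focal length
    divided by the diameter of the aperture at the current iris
    setting."""
    return focal_length_mm / aperture_diameter_mm
```

A 50mm lens with a 25mm aperture diameter is therefore set at f/2; halving the diameter doubles the f-number and passes less light.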


Kinematic Angles
Numerical description of the angular relationship between connected body segments of a kinematic model. Stored as three numbers. There are several conventions for defining the axes of rotation and the order in which the angles are listed.

Kinematic Model
A numerical description of a moving object, composed of segments and links.

Label
A name by which a point or trajectory is identified. Limited to 32 characters. Labels are attached to trajectories after reconstruction, either by the operator, or using assisted labelling.

LAN - Local Area Network


A short distance data communications network (typically within a building or campus) used to link together computers and peripheral devices (such as printers, CD ROMs and modems) under some form of standard control.

Local Point
A point defined in terms of the axes of a particular segment, which is itself able to move during the trial.

Marker
A reflective ball fixed to the performer. Also called a real marker, or physical marker, to distinguish between trajectories which result from actual measurements, and virtual ones created by modelling.

MPEG
A standard for compressing (in principle) progressive scanned and interlaced video signals. MPEG1 has a bit rate of 1.5 Mbps, whilst MPEG2 allows a range of bit rates from 1.5 to 100 Mbps.

NTSC - National Television Systems Committee


The formulated standard for the American system of colour telecasting, used mainly in North America, Japan and parts of South America. NTSC employs 525 lines per frame and 59.94 fields per second.

Orthogonal Axes
Three axes which are at right-angles to each other. Vectors may be analyzed into components in any orthogonal system of axes, and the components added according to normal vector algebra.

Occlusion
When a marker cannot be seen by some or all cameras because it is covered by a body part or another performer, it is said to be occluded.

Overlap
When two trajectories are reconstructed over the same range of consecutive frames and are given the same label, a warning is given that trajectories with the same label overlap. If this is an error of labelling, the mistake should be corrected before saving the trial. If the overlap is the result of poor reconstruction, the system will assume that the later trajectory is the correct one. If this would lead to acceptable results, saving the trial will reduce the two trajectories to one. If not, the data should be edited before saving.
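The "later trajectory wins" rule applied on save can be sketched as follows. Python is our choice of language, and representing a trajectory as a dict of frame to point is our own simplification, not Vicon's storage format:

```python
def merge_overlapping(earlier, later):
    """Reduce two same-labelled trajectories to one (sketch).

    Trajectories are represented here as dicts of frame -> point.
    Where both contain data for a frame, the later trajectory is
    assumed correct, as the system assumes on save.
    """
    merged = dict(earlier)
    merged.update(later)   # later trajectory wins on overlapping frames
    return merged
```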

PAL - Phase Alternate Line


The name of the colour television system used mainly in Europe, China, Malaysia, Australia, New Zealand, the Middle East and parts of Africa. PAL features 625 lines per frame at 50 fields per second.


Passband
The range of frequencies allowed through a filter. Motion at a nearly constant speed has low frequency components; motion involving high accelerations has high frequency components which may fall outside the passband of a filter.
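The effect of a passband on trajectory data can be shown with the simplest possible filter. This is a rough sketch in Python (our choice; a first-order exponential filter stands in here for whatever filtering a real pipeline applies):

```python
def low_pass(series, alpha=0.5):
    """First-order low-pass filter (sketch).

    Motion at nearly constant speed (low frequency) passes largely
    unchanged; high-acceleration components outside the passband
    are attenuated. Smaller alpha means a narrower passband.
    """
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out
```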

Phantom Points
Points or trajectory segments which appear where no real marker existed. Most phantom points appear only for a few frames and may result from video noise or an accident of measurement geometry. If there are large numbers of phantom points, recalibration or adjustment of reconstruction parameters may help to reduce their number.

Pipeline
The procedures following data capture which can be selected to take place automatically and without supervision, for example, reconstruction, assisted labeling, interpolation and saving.

Pixel
The smallest visual unit that is handled in a pickup device, generally a single cell in a grid of numbers describing an image. In a component system, care should be taken to define a pixel as each individual sample of luminance or "Picture Element." "Square" pixels result when an image is scanned with equal resolution in both directions, i.e., the scanning frequency (number of scan lines per inch) is equal to the sampling frequency (number of samples per inch along the scan line). When scanning frequency is not equal to sampling frequency, rectangular pixels result.

Point
A location in space, specified by 3D co-ordinates. A trajectory (or segment of a trajectory) consists of a time-series of points. A point is stored in a .C3D file as three spatial co-ordinates and a residual, identified by a label. Points may represent the measured positions of real markers, or may be virtual (created by modelling). Point and marker are often used interchangeably.


Reconstruction
The calculation of the position of a marker by the Vicon system, and the linking of reconstructed points into trajectories.

Resolution
A measure of the ability of a camera or television system to reproduce detail: the number of picture elements that can be reproduced with good definition.

Sample Skip
The ability to reduce the amount of data being captured by not storing all the frames received from the cameras. For example, the user can specify to miss out every other frame thus re-sampling the data by a factor of two.
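The re-sampling described above amounts to simple slicing. As a sketch in Python (our choice; sample_skip is our own name, not a Workstation command):

```python
def sample_skip(frames, skip=2):
    """Store only every skip-th frame received from the cameras
    (sketch): skip=2 misses out every other frame, re-sampling
    the data by a factor of two."""
    return frames[::skip]
```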

Script
A set of BodyLanguage definitions and functions which can be run by BodyBuilder. Scripts are contained in .MOD files.

Segment
In a kinematic model, body parts are represented by segments, which are assumed to be rigid elements linked by joints. Segments have both position and orientation. In AutoLabelling, a segment is a group of markers which maintain approximately constant separation from each other. (The term "trajectory segment" is sometimes used to describe a short or interrupted marker trajectory.)

SMPTE Time Code


Time code that conforms to SMPTE (Society of Motion Picture and Television Engineers) standards. It consists of an 8-digit number specifying hours: minutes: seconds: frames. Each number identifies one frame on a videotape. SMPTE time code may be of either the drop-frame or non-drop frame type. The SMPTE time code mode can allow an editor to read either drop-frame or non-drop frame code from tape and perform calculations for either type (also called mixed time code).


SMPTE - Drop-Frame Time Code


SMPTE time code format that continuously counts 30 frames per second, but drops two frames from the count every minute except for every tenth minute (drops 108 frames every hour) to maintain synchronization of time code with clock time. This is necessary because the actual frame rate of NTSC video is 29.97 frames per second rather than an even 30 frames.
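The drop-frame counting rule can be made concrete with a small conversion sketch. Python is our choice of language, and frames_to_dropframe is our own helper, not part of any SMPTE library; it assumes a 0-based running frame count:

```python
def frames_to_dropframe(frame_index):
    """Convert a 0-based frame count to SMPTE drop-frame time code
    (sketch). Drop-frame counts 30 fps nominally but skips frame
    numbers 0 and 1 at the start of every minute except each tenth
    minute, keeping the code in step with the 29.97 fps NTSC clock.
    """
    frames_per_10min = 10 * 60 * 30 - 9 * 2   # 17982 real frames per 10 min
    frames_per_min = 60 * 30 - 2              # 1798 real frames per minute
    tens, rem = divmod(frame_index, frames_per_10min)
    extra = 18 * tens                          # 9 dropped pairs per 10 minutes
    if rem > 1:
        extra += 2 * ((rem - 2) // frames_per_min)
    n = frame_index + extra                    # nominal 30 fps frame number
    ff = n % 30
    ss = (n // 30) % 60
    mm = (n // 1800) % 60
    hh = n // 108000
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"
```

After exactly one minute of real frames (1800 of them) the code reads 00:01:00;02, because frame numbers 0 and 1 of that minute were dropped.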

SMPTE - Nondrop Frame Time Code


SMPTE time code format that continuously counts a full 30 frames per second. Because NTSC video does not operate at exactly 30 frames per second, nondrop frame time code will count 108 more frames in one hour than actually occur in the NTSC video in one hour. The result is incorrect synchronization of time code with clock time. Drop frame time code solves this problem by skipping or dropping 2 frame numbers per minute, except at every tenth minute.

SMPTE - Time Code Editing


By recording a sequential time code along with the video and audio material, you can obtain a more precise reference for editing. Each frame has its own number or code which tells the time in hours, minutes, and seconds, and includes a frame number. The world standard code is called SMPTE and has also been adopted by the IEC (International Electrotechnical Commission). Time codes permit very fast and accurate editing. Automatic editing is possible under computer control.

Spike
A point which lies to one side of an otherwise smooth trajectory. Spikes can occur as a result of a mis-reconstruction in a single field, perhaps caused by video noise or marker occlusion.
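A crude spike test simply compares each sample with the average of its neighbours. This is a sketch in Python (our choice; find_spikes and its threshold are our own assumptions, not how Workstation detects spikes):

```python
def find_spikes(series, threshold):
    """Flag samples that sit far from the average of their two
    neighbours: a simple test for the single-field spikes
    described above (sketch)."""
    spikes = []
    for i in range(1, len(series) - 1):
        expected = (series[i - 1] + series[i + 1]) / 2.0
        if abs(series[i] - expected) > threshold:
            spikes.append(i)
    return spikes
```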

Static Trial
A short capture during which the performer stands still. Used for creating Acclaim skeleton files and for deriving subject-specific parameters.


Stick
A line connecting two points as displayed in the Workspace window. Defined in the current .MKR file, sticks are a useful graphic aid, but have no significance above that of the points they link.

S-Video - Separate Video


A widely accepted set of luminance and chrominance Y/C signals used to connect video equipment, providing a higher quality signal free of the cross luminance/colour problems associated with composite video signals.

System Memory
The working memory (RAM) of a computer. Data and programs in current use are held in system memory, and will be lost if the computer is shut down before they are written to disk.

Trajectory
The path through space followed by a marker, whether real or virtual. Stored in a .C3D file as a time-series of points with the same label. May consist of several trajectory segments separated by gaps, or a single, uninterrupted trajectory, depending on marker visibility. Displayed in a Workspace window as a line through the position of the marker in the current field.

Trial
A single data capture. The word also refers to the .C3D file resulting from a data capture.

Virtual Point
A point produced by kinematic modelling, in a position where no real marker existed during motion capture. Virtual points may represent internal features such as joint centres, where it is impossible to place a real marker.


VITC - Vertical Interval Time Code


Contains the same information as the SMPTE time code. It is superimposed onto the vertical blanking interval of a video signal, so that the correct time code can be read even when a helical scanning VCR is in the pause or slow mode.



5.7 Troubleshooting
If you've got a problem, the Vicon Wizard is here to help. This section will help you track down the source of the common problems you may be experiencing. We deal with each problem in turn and offer a number of reasons and matched solutions for each item. To make the division clearer, each item is labelled Problem, Reason or Solution.

If your current problem isn't detailed here just contact Vicon.

Problem. Nothing is visible in the live monitor window


Reason. No reflective markers are in the capture volume.
Solution. Put some markers on the floor to show where the volume is.

Reason. The aperture of the camera is closed.
Solution. Open the aperture to a value between f8 and f2.8 to ensure that sufficient reflected light from the markers can fall on the camera CCD.

Reason. The marker is too far from the camera.
Solution. Typically, the recommended distance from camera to marker is 12m for a 9mm lens. This is reduced as the lens gets wider.

Reason. The strobe is not synchronising with the camera.
Solution. Check that the camera is connected to the strobe. Look to see that the "pigtail" lead between the camera and strobe is connected and correctly seated.

Reason. The hardware is not connected properly.
Solution. Check all your cables are plugged in correctly and the hardware LEDs are illuminated. Switch on power to the Datastation and, once booted, select System|Start Link and then System|Live Monitors, whereupon the strobes should illuminate.

Problem. Can't get a link between Datastation and Workstation.


Reason. The cable connecting the Datastation and Workstation is not a crossover cable.


Solution. If using a hub, ensure that you use a standard network cable. If connecting directly, use a crossover cable.

Reason. The network is not set up correctly.

Solution 1. Assuming you have a class C type IP address (which is the most common), set the netid section of the host PC IP address (i.e. the first three numbers in the address) to the same as that of the Datastation, and then use different hostid values (the fourth number in the IP address) for the Datastation and Workstation. For example, 194.207.57.217 if the Datastation is 194.207.57.214. Please note the subnet mask should be left at 255.255.255.0.

Solution 2. The HOSTS file in \WINNT\SYSTEM32\DRIVERS\ETC\ needs to be edited to list the Datastation IP address against the name PCX. The Datastation IP address is shown on the two-line blue display on the front of the Datastation. The HOSTS file should look something like this:

# Copyright (c) 1993-1995 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows NT.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#      38.25.63.10      x.acme.com              # x client host

127.0.0.1       localhost
194.207.57.214  PCX
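The class C addressing rule in Solution 1 can be sketched as a quick check. Python is our choice of language, and same_class_c_subnet is a hypothetical helper, not part of the Vicon software:

```python
def same_class_c_subnet(ip_a, ip_b):
    """Check that two dotted-quad addresses share the same class C
    netid (the first three numbers) but have different hostids
    (the fourth number), as the Datastation/Workstation link
    described above requires (sketch)."""
    a, b = ip_a.split("."), ip_b.split(".")
    return a[:3] == b[:3] and a[3] != b[3]
```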

Problem. The data rate is slower than expected.


Reason. Your Datastation is connected to a network or PC that has a 10Mbit network card installed. The Datastation network will automatically default to the slower speed.
Solution. Change the offending card, or connect the Datastation to the Workstation as a standalone network. If you are in any doubt about the speed of the network with a switchable card, it is possible to buy a hub that will display the speed of its traffic.

Problem. Can't see the red visible strobe ring.


Reason. You are actually using cameras with infra-red strobes rather than the default visible red strobes.
Solution. Move a marker in front of the camera and check that it is visible in the Live Monitor view.

Reason. You haven't established a link and/or you have not opened the Live Monitor view, which initialises the strobe.
Solution 1. System|Live Monitors will both establish a link and initialise the monitors.
Solution 2. If still no joy, check all connections between cameras, BOB, Datastation and Workstation.

Problem. The useable area or green circles aren't visible.


Reason. View parameters set incorrectly.
Solution. Select View|Useable Area to display the camera identifier and useable area. Select View|Detected Markers|Solid and View|Recognised Markers|After Validation to display the circles.

Reason. No camera calibration (i.e. linearisation) associated with the camera channel.
Solution. Note the correct camera identifier, select System|Calibrate and choose the correct linearisation file for the channel of interest. Hit calibrate and perform a false calibration, resulting in calibration failure. Accept the calibration and, if desirable, apply it to the current session. Only do this prior to a real calibration.

Problem. The marker circles in the live monitor view are flickering.
Reason. View parameters set incorrectly.
Solution. Select View|Flicker Free.


Reason. The markers are blooming due to being too close to the camera.
Solution 1. Aperture and/or sensitivity are incorrect. Adjust them using a near and far approach to ensure that the marker is seen as a circle at both edges.
Solution 2. Re-position the camera to have sufficient separation from the near edge of the volume. Adjust aperture and sensitivity accordingly.

Reason. Camera may be out of focus.
Solution. Check that the lens focus is set to infinity. If the image is still distorted, check the back focus. You will have to re-linearise the camera if the back focus is altered.

Problem. Live monitor view is displaying an unsynchronised image (flashing lines, etc.).

Reason. The video set-up in the Workstation does not match the camera type.
Solution. Select System|Video Setup and choose the correct camera type from the list. Apply this to the current session and check the new type as the default.

Problem. Useable area in the live and video monitor views is only a proportion of the full view.

Reason. Video set-up is incorrect.
Solution. Select System|Video Setup, choose the correct camera type and apply to the current session.

Reason. Incorrect linearisation files attached to the camera channels.
Solution. Check the current settings of the cameras (120 or 240Hz) and then repeat the procedure of attaching camera identifiers to channels by false calibration to ensure the correct LP files are being used.

Problem. Can't manually select and label the markers in the workspace.

Reason. The Trajectory display range is set at zero.
Solution. Double click on the range arrows to set to the default of +/-1.

Reason. Trajectories have been hidden in the workspace using Workspace|Hide All Trajectories.
Solution. Select Workspace|Show All Trajectories.

Reason. The mode is set to Select rather than Label.
Solution. Select the label mode in the bottom right corner of the Workstation window.

Problem. Calibration residuals are higher than expected.


Reason. Insufficient wand data.
Solution. Repeat the dynamic calibration only, ensuring that you capture about 20 seconds of wand-wave data distributed evenly throughout the volume.

Reason. Linearisation out of date. The camera(s) may have been moved around a lot or had the lenses tampered with.
Solution. Re-linearise the camera(s) with the correct video set-up. Once complete, use the new linearisation file(s) to calibrate.

Reason. Camera(s) out of focus.
Solution. Check the lens focus is set at infinity and, if necessary, the back focus of each camera. If focused incorrectly, re-calibrate and apply to a new session.

Reason. Incorrect camera identifiers attached to the video channels.
Solution. Check that each selected LP file matches the actual camera attached to the respective channel. Also check that the selected LP files match the current video setting (60, 120 or 240Hz) to ensure the correct resolution is applied to the lens correction.

Reason. Cameras are poorly oriented with respect to each other.
Solution. Check mrcalib.tvd, the video capture of the most recent calibration, in \VICON\SYSTEM\, and use View|Calibrated Marker Pairs to confirm that the calibration is using only the calibration device and not a random strobe head. If a strobe head is being picked up, reorient or relocate the camera to avoid incorrect marker pair selection.

Problem. A group of cameras failed to calibrate.


Reason. Insufficient wand data.
Solution. Repeat the DYNACAL, capturing about 20 seconds of wand-wave data distributed evenly throughout the volume.

Reason. Cameras don't have sufficient overlap to propagate the calibration.
Solution. Re-position the cameras to ensure that all regions of the volume are seen by at least two, ideally three, cameras.


Reason. Other visible light sources are flickering, causing multiple false centers.
Solution. Reduce the camera sensitivity so that the light source appears as a single static center. Verify markers are not in view of cameras during calibration.

Problem. Calibration failure in systems set up to capture hand and face.


Reason. Reflections due to glasses, wristwatch or rings. They are visible because the aperture is opened up to see smaller markers. Calibration fails because they are not in constant view and can be misinterpreted as wand markers.
Solution. Remove all highly reflective objects and check the live monitor view for any other reflective objects.

Problem. Reconstruction generates a large number of trajectory segments.


Reason. Loss of calibration due to cameras being knocked or moved.
Solution. Start a new session and re-calibrate your volume.

Reason. Incorrect reconstruction parameters.
Solution. Capture a straightforward trial and experiment with reconstruction parameters such as Intersection Limit and Residual Factor. Also check that the maximum and minimum vectors encapsulate your volume (Reconstruction Volume).

Reason. Lots of occlusions take place.
Solution. If they are avoidable, repeat the actions; otherwise consider re-positioning the cameras to minimize regions of non-visibility.

Reason. Subject walks in and out of the capture volume.
Solution. Capture the trial only when the subject is inside the volume, or set the trajectory save range to only save data when all markers are visible.

Reason. Incorrect camera sensitivities causing marker merging and false trajectories.
Solution. Adjust camera sensitivities so that markers close to each other can be differentiated.

Reason. Poor coverage of the capture volume. The overlapping regions of the cameras are insufficient to allow reconstruction.
Solution. Re-position the cameras to ensure that all markers can be seen by at least two cameras throughout the volume.

Problem. Reconstruction creates fewer trajectory segments than the number of visible markers.

Reason. Incorrect reconstruction parameter settings.
Solution. Adjust parameter values, such as reducing the Intersection Limit.

Reason. Specific body segments occluded throughout the trial by props, other subjects, or lying on the floor.
Solution. Move the occluding objects or re-position them within the volume. If objects are supposed to be there, then re-position the cameras to provide sufficient coverage of the volume, ensuring that all markers can be seen by at least two cameras.
Solution. Check markers in the live view to assure proper reflectivity. Dirty markers will not work as well as clean markers.

Problem. The trajectories of two markers keep flipping between each other.

Reason. Incorrect reconstruction parameters.
Solution. Gradually decrease the Predictor Radius in the reconstruction parameters until the flipping stops occurring. Be aware that changing this value will affect other areas of reconstruction, so check other parts of the trial to ensure the Predictor Radius remains optimised overall.

Reason. Markers too close together on the actor.
Solution. Check the video view of a capture to see if this is the reason before making any changes to marker positions. Using View | Recognised Markers | After Correction Only, you can display, when the images of two markers become close, whether the system sees them as two markers or as one. If the system is consistently merging markers in this way in many views, it may be necessary to move the markers further apart on your actor, or alternatively bring the cameras nearer to the actor. Remember: the system can only reconstruct what it can see.


Problem. Trajectories aren't reconstructed even though the data is visible in two or more video monitor views.

Reason. Incorrect reconstruction volume parameters.
Solution 1. Select correct maximum and minimum vectors to ensure the capture volume is included. Remember that the position of the L-Frame determines the origin.
Solution 2. Fast-moving data (e.g. golf balls) needs to be seen in at least 5 frames to be calculated using the reconstruction parameters.
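The volume check behind the first reason comes down to a simple bounding-box test: a reconstructed position outside the configured maximum and minimum vectors is discarded. The sketch below is purely illustrative (not Vicon code); the default vectors for a full-body capture, listed later in section 5.8, are used as assumed defaults.

```python
# Illustrative bounding-box test for the reconstruction volume.
# The default vectors are the full-body values from section 5.8.2;
# the function itself is a sketch, not part of the Vicon software.
def in_volume(point, vmin=(-3500.0, -3500.0, -100.0),
              vmax=(3500.0, 3500.0, 2500.0)):
    """Return True if an (x, y, z) position in mm lies inside the volume."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, vmin, vmax))

# A marker near the origin at floor level is kept...
print(in_volume((120.0, -40.0, 5.0)))     # True
# ...but one below the configured minimum Z is rejected.
print(in_volume((120.0, -40.0, -150.0)))  # False
```

This is also why an un-level floor makes feet disappear: a Z value just below the minimum vector fails the test, so the usual fix is to lower the minimum Z.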

Problem. Feet disappearing in reconstruction.


Reason. Un-level floor surface and incorrect reconstruction volume parameters.
Solution. Set the minimum Z vector to a value of, say, -100mm (more if the floor is very slanted).

Problem. Reconstruction generates lots of ghost markers.


Reason. Marker merging due to high sensitivity settings.
Solution. Adjust the camera sensitivities to ensure that all markers are differentiable from each other at both the near and far edges of the volume.

Problem. Left and right limbs are labeled incorrectly.


Reason. The subject has been labeled incorrectly.
Solution. Check the subject c3d file to verify. Correct the labeling error and recreate the subject file.

Problem. Poor results after re-linearisation.


Reason. Linearisation performed incorrectly.
Solution. Follow the instructions on linearisation and test the outcome.

Problem. I can see Vicon Workstation but there is no data directory window.
Reason. The directory was closed by mistake and you cannot re-open it.


5.8 Checklists
Checklists for the Animation Pipeline.

5.8.1 Step by step review of the main processes


The following points are recommended as suitable for most capture sessions.

Preparation.
Prior to capture, think about the space available, the types of move to be captured and how the data is going to be used in your animation. You should have produced the following:
- A list of subjects and a move sheet.
- The dimensions of your capture volume and initial ideas on the positioning of cameras.
- Up-to-date linearisation files for your cameras.
- A set of props, if required (with markers attached).
- A suitable set of marker files to fully label your subject(s) and props.

System Installation and Practicalities.


- Workstation software installed on a suitable PC with a 10/100 Mbit network card.
- Measure the volume and position small markers around its edge.
- Unpack the boxes and position tripods or wall mounts around the capture volume.
- Attach the mounting plates to the cameras, connect the cables linking the cameras to the breakout boxes, and then link these to the Datastation. The Datastation should be kept reasonably close to the Workstation PC.
- Connect the two network cards together via a crossover cable or via an Ethernet hub.
- Make sure the aperture setting of each camera is set at f4.0 or f5.6 and that each camera is directed toward the capture volume.
- Note down the identification numbers of the cameras as they are positioned, for use in calibration.
- When everything is connected, switch on the Datastation. After a few minutes, the display will say Awaiting Connection. Within the Workstation, select either System|Start Link or System|Live Monitor to establish the link between the two. Opening the Live Monitor view will cause the camera strobes to illuminate and display the familiar red circles. Each camera monitor view should now display the markers on the floor.
- In the Workstation, create a new session and select System|Calibrate. Enter the camera identifiers in the correct order and run through a false calibration to ensure that the correct useable area is displayed in the monitor view.
- Adjust each camera's position, orientation and aperture in turn to ensure that the correct zone of the volume is seen and that the markers at the furthest edge are visualised as three video lines in the Live Monitor view.
- Use the wand to check that it is possible to capture at the height required.
- When happy with the set-up, replace the markers at the edge of the volume with tape if necessary.
- Remember to allow the cameras 10 to 15 minutes from switching them on to calibration.

Calibration Using Dynacal.


When calibrating your capture volume, check that you carry out the following stages:
- Adjust the sensitivity settings of the Live Monitor views to deal with the larger markers of the reference objects.
- Identify the cameras with their appropriate LP files and ensure that they are set to the correct camera rate.
- Select all cameras for calibration and ensure that the correct reference object is selected.
- Hide your subject away from view for the duration of the calibration.
- Capture the static object whilst ensuring the wand is hidden from view.
- Remove the static object from view and capture the wand for approximately 20 seconds, remembering to cover as much of the volume as possible.
- At the end of the calibration, check that the resulting residuals are all less than 4.0, accept them if satisfied, and apply them to your current session.
- As security, import your calibration video data into your current session and reconstruct with the default parameters.
- Do one or two captures of the wand to get an idea of the size of your volume and also as an indicator of the quality of the reconstruction. The reconstructed distance between the two wand markers should be 500mm ± 3mm, with a standard deviation of less than 3mm.
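This tolerance check is easy to script once the two wand-marker trajectories have been exported as per-frame 3D positions. The sketch below is illustrative only; the function name and data layout are our own, not part of the Vicon software.

```python
# Illustrative check of the wand-distance quality criterion:
# mean inter-marker distance within 500mm ± 3mm, std dev < 3mm.
# Not Vicon code; input is two sequences of (x, y, z) tuples in mm.
import math
import statistics

def wand_check(marker_a, marker_b, nominal=500.0, tol=3.0):
    """Return (passed, mean_distance, std_dev) for the two trajectories."""
    dists = [math.dist(a, b) for a, b in zip(marker_a, marker_b)]
    mean = statistics.fmean(dists)
    sd = statistics.stdev(dists)
    passed = abs(mean - nominal) <= tol and sd < tol
    return passed, mean, sd

# Example with three synthetic frames; real data would come from a
# reconstructed wand trial.
ok, mean, sd = wand_check([(0, 0, 0), (10, 0, 0), (20, 0, 0)],
                          [(500.5, 0, 0), (510.2, 0, 0), (519.8, 0, 0)])
```

A failing check here is a sign that the calibration should be repeated before capturing subjects.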

Creating a Subject and Using Autolabeling.


- Attach the markers to your subject.
- Adjust the camera sensitivities to ensure that the markers can be seen throughout the volume, both at near and far distances from each camera view.
- Select Subject Calibration as the trial type and then capture your subject in the static motorbike position.
- Reconstruct your data.
- Attach your marker file as the default marker set and then manually label all the trajectories.
- Select Trial|Create Subject and type in the name of your subject.
- Save the trial as a precaution, then unlabel all trajectories.
- Go to Trial|Options, select your subject and check Include subject name in labels.
- Autolabel your static trial for subsequent processing in BodyBuilder.
- Go to Trial|Capture, select your subject and General Capture, then capture another move.
- Reconstruct and autolabel your data.


5.8.2 Typical values for reconstruction and autolabel parameters

Capturing a Full Body Subject Wearing 25mm Markers.
The default reconstruction parameters are as follows:
Maximum and Minimum Vectors : -X : -3500 , +X : 3500 , -Y : -3500 , +Y : 3500 , -Z : -100 , +Z : 2500
Maximum Acceleration (mm/s/s) : 50
Maximum Noise Factor : 7
Intersection Limit (mm) : 12
Residual Factor : 2
Predictor Radius : 30
Also select Discard Non-Circular Markers.

The default autolabel parameters are as follows:
Maximum Deviation : 3 %
Minimum Overlap : 30 fields

Capturing a Human Face / Hand Using 4mm Markers.
The default reconstruction parameters are as follows:
Maximum and Minimum Vectors : -X : -700 , +X : 700 , -Y : -700 , +Y : 700 , -Z : -700 , +Z : 700
Maximum Acceleration (mm/s/s) : 50
Maximum Noise Factor : 4
Intersection Limit (mm) : 5
Residual Factor : 2
Predictor Radius : 5
Also select Discard Non-Circular Markers.

The default autolabel parameters are as follows:
Maximum Deviation : 3 %
Minimum Overlap : 30 fields
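When working with these defaults outside the Workstation GUI, it can help to keep the two presets together in one place. The dictionary below simply restates the values above; the structure and key names are our own illustration, not a Vicon file format.

```python
# Restatement of the default parameter presets from section 5.8.2.
# The dictionary layout and key names are illustrative only.
RECON_PRESETS = {
    "full_body_25mm": {
        "volume_min": (-3500, -3500, -100),  # mm
        "volume_max": (3500, 3500, 2500),    # mm
        "max_acceleration": 50,              # mm/s/s
        "max_noise_factor": 7,
        "intersection_limit_mm": 12,
        "residual_factor": 2,
        "predictor_radius": 30,
    },
    "face_hand_4mm": {
        "volume_min": (-700, -700, -700),
        "volume_max": (700, 700, 700),
        "max_acceleration": 50,
        "max_noise_factor": 4,
        "intersection_limit_mm": 5,
        "residual_factor": 2,
        "predictor_radius": 5,
    },
}

# Both capture types share the same autolabel defaults.
AUTOLABEL_DEFAULTS = {"maximum_deviation_pct": 3, "minimum_overlap_fields": 30}
```

Note how the small-marker preset shrinks the volume, the Intersection Limit and the Predictor Radius together: with 4mm markers packed closely on a face or hand, the looser full-body values would merge neighbouring trajectories.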


index


A
amc, 168, 179, 183, 185, 187, 190, 191, 192, 193, 194, 195, 196, 197, 199, 242-244, 253-255, 266

aperture, 26, 38, 40, 42, 44, 79, 80, 106, 114, 133, 135, 172, 266, 270, 278, 281, 283, 289

asf, 82, 86, 94, 168, 174, 179, 181, 182, 183, 185, 186, 187, 190, 191, 192, 196, 197, 199, 200, 242-244, 245-252, 253, 266

assisted labeling, 90, 92, 266, 273
ast, 180, 182, 183, 245, 266
autolabel parameters, 90, 98, 102, 108, 214, 292
autolabeling, 66, 70, 75, 77, 78, 79, 87, 88-93, 95, 96, 98, 102, 104, 107, 108, 109, 112, 117, 161, 162, 163, 164, 168, 173, 174, 212-215, 228, 229, 230, 235, 266, 274, 288, 290, 292

B
back focus, 134, 281, 282
batch processing, 98
battle droid, 112
blooming, 52, 80, 106, 115, 266, 281
BodyBuilder, 73, 77, 82, 88, 93, 111, 116, 173, 176-184, 192, 247, 250, 267, 274, 291
BodyLanguage, 181, 249, 267, 274
boss, 143
break out box (BOB), 25, 26, 280
Broadway card, 139, 140, 142


C
c3d, 14, 31, 32, 59, 83, 86, 98, 99, 142, 143, 145, 164, 175, 177, 179, 180, 181, 182, 183, 184, 216, 235, 238, 273, 277, 286

calibration residual, 50, 79, 104, 133, 172, 209, 221, 282
CCD, 37, 38, 134, 267, 278
Character Studio, 175, 194, 256, 267
continuity chart, 112, 114, 116, 117, 118, 119, 120
copy pattern, 118, 125, 175
csm, 99, 145, 175, 195, 256-264

D
defragment, 124, 125
delete and fill, 124, 178
directory window, 28, 30, 32, 55, 142, 286
dongle, 27, 268
DYNACAL, 7, 13, 43-53, 56, 79, 107, 172, 174, 202-206, 283, 289

E
exchange points, 126

F
f-stop, 26, 133, 266, 270
field of view, 89, 115, 135, 268
fill gaps, 95, 96, 98, 99, 112, 118, 125, 174, 175, 214, 269
filter, 105, 177, 272
focal length, 138, 221, 268, 270
focus, 26, 134, 135, 266, 268, 281, 282


frame count, 155, 156
full motion video, 179, 268

G
genlock, 152-158, 269
ghost markers, 57, 90, 103, 105, 113, 114, 116, 120, 121, 123, 174, 208, 209, 286

I
interpolation, 124, 175, 176, 269, 273
intersection limit, 104, 110, 113, 114, 208, 284, 292
inverse kinematics, 185, 188, 270

J
joint center, 74

K
kinematic model, 176, 179, 182, 190, 266, 267, 268, 269, 270, 274, 277

L
L-Frame, 21, 23, 33, 43, 44, 47, 48, 51, 60, 68, 110, 202, 204, 219, 240, 285
LIN Grid, 134
linearisation, 28, 32, 40, 41, 45, 51, 133-138, 208, 219, 220, 225, 226, 236, 280, 281, 282, 286, 288

live monitor, 28, 29, 32, 39, 40, 41, 44, 45, 79, 115, 134, 135, 172, 278, 280, 281, 283, 289
live movie, 139, 141, 142
loop through, 153


M
3DsMax, 22, 99, 145, 191, 267
manual labeling, 84, 87, 114, 162
marker merging, 45, 52, 80, 106, 115, 167, 284, 285, 286
max acceleration, 103, 110, 121, 167, 209, 272, 285, 292
maximum deviation, 90, 92, 108, 109, 214, 292
maximum fill gap, 125
Maya, 22, 190, 196
minimum overlap, 90, 108, 109, 214, 292
Mobius, 22, 73, 185-189
movie capture, 7, 139-144
movie synchronisation, 143, 234
MPEG, 31, 139, 140, 142, 144, 232, 271
multiple subject, 92, 94, 99, 101, 161-167

N
noise factor, 104, 209, 292
non-circular markers, 106, 292

O
occlusion, 57, 74, 93, 107, 108, 111, 112, 115, 116, 164, 272, 284
on-line help, 6
origin, 33, 42, 43, 44, 47, 52, 56, 74, 103, 110, 118, 121, 165, 203, 204, 205, 240, 248, 249, 267, 285


P
phantom markers, 103, 113, 117, 123, 273
phantom menace, 111, 222, 123
pipeline, 4, 7, 8, 22, 55, 58, 73, 80, 83, 94-101, 111, 116, 127, 145, 149, 164, 172-200, 273
plug-in, 99, 145-151, 190, 194, 196, 199, 256
pop-up graphs, 105, 121
predictor radius, 105, 110, 285, 292
progressive scan camera, 25, 156, 271

R
re-sampling, 176, 178, 274
reconstruction, 13, 37, 41, 42, 43, 50, 56-60, 90, 95, 102-108, 111, 133, 168, 172, 178, 202, 203, 208-210, 212, 217, 219, 225, 232, 236, 238, 270, 272, 273, 274, 283, 284, 285, 286, 288, 290, 292
reconstruction parameters, 57, 84, 99, 102-108, 114, 115, 126, 173, 208, 218, 273, 283, 285, 292
reconstruction volume, 34, 103, 104, 110, 113, 124, 284, 285
ref input, 153, 154, 156
ref output, 154
remote control, 159
repeated capture, 97, 100
residual factor, 104, 284, 292
resolution, 25, 37, 41, 136, 208, 226, 227, 273, 274, 282
rotation center, 33, 117

S
S-VHS, 139, 154
sample skip, 159-160, 274


SDK, 145-151
segment, 57, 58, 70, 73, 74, 77, 84, 88, 90, 92, 93, 104, 105, 107, 108, 109, 112, 114, 115, 116, 120, 124, 125, 161, 164, 165, 166, 173, 176, 179, 212, 228, 230, 231, 243, 247, 248, 249, 269, 270, 271, 273, 274, 277, 283, 284
segment count, 105, 107, 112, 115, 164
sensitivity, 40, 41, 44, 51, 52, 79, 92, 106, 114, 135, 138, 172, 267, 281, 283, 284, 286, 289, 290
skeleton, 68, 70, 77, 82, 179, 181, 185, 186, 188, 190, 192, 194, 196, 198, 200, 242, 243, 247, 251, 276
SMPTE, 152, 154, 253, 275, 277
SoftImage, 22, 190, 196, 248, 249, 251
static pose, 83, 162, 182
subject calibration, 79-87, 92, 108, 161-167, 173, 182, 213, 235, 290

T
timebar, 32, 34-35, 58, 187, 188
timecode, 8, 152-158, 218

trajectory, 13, 32, 35, 38, 56, 57, 58, 59, 60, 79, 82, 83, 84, 85, 86, 88, 92, 96, 99, 102, 106, 107, 108, 112, 113, 114, 115, 116, 117, 118, 121, 123, 125, 139, 145, 161, 163, 166, 173, 174, 176, 178, 185, 209, 228, 235, 266, 269, 270, 272, 273, 282, 283, 284, 285, 290
trial types, 55, 58, 80, 84, 94, 96, 100, 142, 159
tvd, 13, 31, 32, 50, 52, 56, 81, 83, 98, 139, 142, 152, 157, 168, 219, 226, 236, 282

V
velocity, 121, 167, 269
video monitor, 31, 32, 56, 60, 98, 134, 142, 281, 285
video window, 31, 32, 68


VITC, 153, 154, 157, 277

W
wand, 21, 39, 43, 44, 45, 47, 48, 49, 50, 51, 52, 53, 56, 58, 68, 79, 123, 173, 202, 205, 282, 283, 289, 290
workspace, 13, 31, 32, 34, 35, 54, 56, 57, 58, 59, 60, 81, 84, 85, 88, 103, 112, 117, 118, 119, 120, 126, 142, 143, 182, 185, 229, 230, 233, 239, 240, 267, 276, 277, 282

www.vicon.com