
How to Get Started with 3D

The world of 3D is huge. The sheer number of industries, software packages and tools involved in learning 3D can be daunting. Before you ever drop the money for the best 3D software, you need to know exactly what it is you want to learn how to do. In this article you'll get a crash course in some of the most popular fields related to 3D and the steps within the pipeline. That way you can find out exactly what interests you most, what programs you'll need, as well as some great first steps to starting out on the right path towards learning 3D.
What do you want to do?
If you're reading this, you probably already have an idea of what you want to do. For your convenience, you can use the links below to jump to the sections relevant to you.
I want to make movies or TV shows
I want to create product designs
I want to make games
I want to create my own art
I want to create visual effects
I want to make movies or TV shows
The process of making an animated movie can take years; three to five years is common for a typical feature. TV shows generally don't take as long simply because of their shortened production schedules, but both movies and TV shows can take hundreds of well-trained artists to reach the final product. Needless to say, making a movie or TV show is no small undertaking.
Working in the television or film industry as a 3D artist can mean working on animated movies like Toy Story, live-action movies with integrated CG like The Avengers, or hit TV shows like Doctor Who. Regardless of whether the movie is a fully animated feature or a mixture of live action and 3D, the disciplines are the same; only the creation process varies between the two.
Using an animated movie as an example: when a company like Pixar makes one, they first need to come up with the story and create the concepts for the characters and the environments those characters will be interacting with. Once that has been completed, the process of actually creating the 3D world begins. This is where the 3D artist shines, as they get to model, texture, animate and render all of the virtual worlds and characters.
When working on a TV show the process is similar to working on feature films, and the same disciplines are involved. Everything is planned outside of the 3D world in much the same way, but with TV shows the deadlines are much tighter: typically a TV episode is released once a week, whereas a movie can take years to create.
Finding where you fit into the pipeline would be the next step. Use the links below
to jump to more detail about how to get started with that step of the pipeline.
Modeling is the process of taking the 2D concepts and building them in 3D. Once a model is created, it needs to have things like color and materials applied to it; this is the process of texturing. Before motion can be added to a model, it needs to have a skeleton built during the rigging step. Once rigged, a model can be brought to life through animation. After the model has been animated, it needs to be rendered to turn it into the series of still images that will eventually become the final movie.
Last, but not least, while compositing isn't really considered a step of the 3D pipeline, it is important enough for making movies and TV shows that we wanted to include it because, in some instances, it can include some 3D work. Compositing is the process of taking the rendered images and adding polish, whether that's integrating them into live footage (VFX) or adding the final touches to an all-3D animation.

I want to make games


The number of video game studios has skyrocketed recently. It's an exciting field to be in, and if you love to play video games, chances are you've wondered how your favorite games are made.
Unlike a movie that is played from start to finish, a game is much more interactive. This means that making a game involves much more than just the 3D aspects. For example, game development and programming play a vital role in the process, but those are outside the scope of this particular article, so we won't be going into them.
Like a movie, creating game art requires every step of the 3D pipeline from
modelers to rendering artists. The disciplines involved in making movies are
similar to that of video games, but the approach is quite different. For example, an
animated character not only needs to look good in the software but it also needs
to work well in the game engine. This means taking extra steps to make sure that
it works well in the game instead of just looking good for the shot camera, like it
would need to in a movie.

Because many of the aspects of creating game art are very similar to creating art for movies, it is common for an artist working in the movie industry to switch over to the game industry or vice versa. Learning any discipline within the pipeline can help prepare you for either the game or the movie industry; it's more about which one appeals to you more.
Finding where you fit into the game art pipeline would be the next step. Use the links below to jump to more detail about how to get started with that step of the pipeline.
Just like movies, every game asset needs to be modeled. This is the process of taking the 2D concepts and building them in 3D. Because games need to run smoothly on a gamer's computer or console, there are restrictions on how much resolution the models can have. As a result, a lot of extra detail is faked for game models in the texturing process. Once the model is looking good, it needs to be rigged before motion can be added to it. After that, a model needs to have a number of animations created for it. For example, a character would need both a walk animation and a run animation so that when the gamer tells the character to go from walking to running, the game engine can make the character appear to actually run as it moves faster.

I want to create visual effects


Working in visual effects often means you will be working in the movie industry as
well. Visual effects (VFX) involve the integration of computer generated imagery
into live-action footage. All the scenery and blood splatter effects in the movie 300
are a great example of VFX artists at work.
A lot of the same disciplines involved in making animated movies can be found in
VFX. Typically VFX artists will be called on to add in the extra things that would
either be impossible to capture on film or much more expensive. For example,
adding in a 3D model of a car exploding in a live-action scene is much safer and a
whole lot cheaper.
Because working with VFX is most commonly done when working with movies or
TV shows, your next step should be to check out our section in this article about
making movies or TV shows to learn about some of the disciplines involved.

I want to create product designs


Product design visualization is the process of using a computerized representation of a product (an iPad, for example) to help visualize it. Usually this is done after a product has been conceptualized in 2D but before the product has actually been created in real life. With these visualizations, anyone can easily see what the product does and how it is intended to be used.
In most cases, product designers create the product visualization on their own from start to finish. This means that your next step would be to build a strong understanding of modeling, texturing, and rendering so that you'd be able to take a product concept and visualize it as a beautiful render.

I want to create my own art


Since being a hobbyist generally means you're in charge of creating everything from start to finish, your next step would be to take a look at each step in the pipeline and see what interests you the most.
To help avoid projects that languish unfinished, ask yourself what sort of art you want to create before you decide what to study. Focusing on the things that interest you the most can help you get your projects done simply because you enjoy them.

What step of the pipeline do you want to focus on first?


It is certainly beneficial to learn multiple steps of the pipeline, and in many instances your potential job may require moving from step to step. However, it is never a good idea to tackle more than one step at a time. Focus on learning one step before moving on to another.
If you're not sure how these steps are used in various fields, jump to the top of this article to see which fields use these different steps.
I want to learn modeling first.
I want to learn texturing first.
I want to learn rigging first.
I want to learn animation first.
I want to learn compositing first.
Modeling
Anything that exists or could exist in the real world can be built virtually. A 3D modeler can create anything from a character for a movie to a gun in a video game. For example, the Hulk in The Avengers was built in 3D by modelers.
Even though modeling for movies and modeling for games may seem like the same task, the process actually differs quite a bit. A 3D modeler's job in film is typically only to create the models; the textures will be handled by a texture artist. The limitations set for a modeler in film are also far fewer than those for a game modeler. Instead, modeling for movies is more about production speed and how high the quality of the final model needs to be.
With game modeling, the 3D model has the potential to be interacted with and seen from every possible angle. A 3D modeler in games is limited by the game engine and the console the game is being played on. They are given a very tight polygon budget that they cannot exceed.

Creating 3D models for product design is a different approach than creating models for games or movies. Oftentimes a product designer must work with blueprints and create the model to fit specific specifications.
All of the above fields require a 3D modeler somewhere within the production pipeline, but each one differs in the process.
There are many different software packages for learning 3D modeling. Some of the most widely used are Maya, 3ds Max, CINEMA 4D, and Softimage. Which one you use doesn't really matter, as they are all capable of creating great models. Remember, you can check out some of our tutorials for each of these software programs to see which interface and workflow you think you'll like best. Then try downloading demos of the software you like to try it out yourself.
If product design interests you, a 3D application like Maya, CINEMA 4D or 3ds Max can certainly be used, but a more specialized program may be better. For example, Inventor and SolidWorks will let you be a lot more precise with dimensions than 3D applications that are primarily used for media and entertainment like Maya, CINEMA 4D or 3ds Max.
If you're a hobbyist or you just don't have the budget for an industry-standard 3D program like those listed above, consider Blender; it is a completely free 3D application that you can use to get started.
Once you've picked out the program that you want to use, follow along with the appropriate courses below to get familiar with modeling in that software:
Quick Start to Modeling in Maya
Quick Start to Modeling in 3ds Max
Modeling in CINEMA 4D
Modeling in Softimage
Getting Started in Inventor
Getting Started in SolidWorks

Texturing
Once a 3D model is built, the computer doesn't know what sort of surface it represents, so the model starts out with a single, flat color. For example, should the wall of a house have wallpaper on it, or is it painted? These sorts of decisions are made in the texturing step.
Texturing is required in all of the fields mentioned above. In the film industry, texture artist is often a specific job within the pipeline, while in the game industry a 3D modeler will often be required to create textures for the models they create. Just like with modeling, Maya, 3ds Max, CINEMA 4D, and Softimage are some great applications that you can use to get started with texturing in 3D.
Somewhat counterintuitively, though, jumping into a 3D application really isn't the best first step for an aspiring texture artist. Before jumping into a 3D application it is incredibly important for you to be familiar with an image editing application like Photoshop. The reason is simply that a lot of textures start off from a photograph, or at least a photographic reference. To get up to speed in Photoshop, follow along with this learning path.
If you're already familiar with Photoshop, you can jump ahead and start learning how to texture in your favorite 3D application:
Texturing in Maya
Texturing in 3ds Max
Texturing in Softimage

Rigging
In the film and game industries there are artists who are responsible for creating the skeletons for characters. These skeletons are called rigs. In order for an animator to be able to bend and deform a 3D asset, a rigging artist, or technical director as they are often referred to, will first need to set up a rig. This is done by creating all of the control points on the 3D model that an animator will need to bend and deform the model to create the animations.

In the film industry a character rig will most likely have hundreds of controls for an
animator to be able to manipulate. These rigs can take weeks to finish and a lot of
technical prowess is required for this.
The same process is required for video games, but the complexity of the animations that need to be created determines how complex the rig needs to be. For example, a character in a game may never need to speak, so there would be no need to create a facial control rig.
To get started with rigging, the first step is to determine which software you want
to use. Software such as Maya, 3ds Max, CINEMA 4D, and Softimage are all
capable of creating powerful rigs. Check out their interfaces and workflows in
some of our tutorials to get an idea of which one may interest you the most. Then
grab a demo and start to dig in!
Depending on which software program you like the best, here are some tutorials
to get you started off on the right path:
Quick Start to Rigging in Maya
Quick Start to Rigging in 3ds Max
Rigging in CINEMA 4D
Rigging in Softimage

Animation
The animator's job is to make the required 3D assets move in a believable way. For example, when a character in a 3D animated movie is moving, every movement was created by an animator. Take a look at how the characters in Toy Story move: every single movement those characters make had to be created by an animator. Animation is heavily utilized in most of the above fields, especially in movies and video games, and each one differs a bit from the others.
For example, when working on a 3D animated movie like Tangled, the animators typically create three to four seconds of finished animation every week. With animation in video games, an animator might be tasked with creating 20 seconds of animation in a single day.

Animation in a live-action film is similar to animation for a fully animated feature, except that when animating for live action the characters typically must be animated in a very realistic way, because they are interacting with real-life actors.
Animation in product design differs largely from that of movies or games. Instead
of creating character performance animation like you would see in a movie,
animating for product design might mean animating how the case of a phone
might come off or some other type of movement to help show off the product.
The most widely used application for 3D animation is Maya. This doesn't mean that other software can't be used, though; when it comes to animation, knowing a particular software package is far less important than knowing the fundamentals.
Besides Maya, other popular animation tools include 3ds Max, CINEMA 4D, and Softimage. Watch some of our tutorials for these different software programs just to get a feel for how the interface looks and how the workflow is. Then try downloading demos of the software you think you'd like the most to try it out yourself. Once you've picked a software package, check out some of our learning paths to get started:
How to Get Started with Animation in Maya
Animation in 3ds Max
Animation in CINEMA 4D
Animation in Softimage

Rendering
Rendering is most commonly done toward the end of production. The same way that a 2D artist would render their drawings by adding lighting and shading into their paintings, 3D rendering allows the 3D artist to incorporate shading and lighting into the 3D scenes.
Rendering for a 3D movie can be a lengthy process. For example, a single frame from Monsters University took 29 hours to complete.
Rendering in a game must be done in real time. This means a single frame can't take hours to render; it is up to the console's (or computer's) graphics card to produce the render, and it must be done while the viewer is playing. The reason movies typically look much better graphically is that each frame is pre-rendered and can be given as much time as needed to reach the final quality.
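To put those two budgets side by side, here is a quick back-of-the-envelope calculation in Python. The 29-hours-per-frame figure is the Monsters University example above; the 60 frames-per-second target is an assumed, common goal for games rather than a fixed rule.

```python
# Rough comparison of an offline (film) frame budget vs. a real-time (game) one.
FILM_HOURS_PER_FRAME = 29          # Monsters University example above
GAME_FPS = 60                      # assumed real-time target

film_ms_per_frame = FILM_HOURS_PER_FRAME * 60 * 60 * 1000   # hours -> milliseconds
game_ms_per_frame = 1000 / GAME_FPS                         # about 16.7 ms

print(f"Film frame budget: {film_ms_per_frame:,.0f} ms")
print(f"Game frame budget: {game_ms_per_frame:.1f} ms")
print(f"The film frame gets roughly {film_ms_per_frame / game_ms_per_frame:,.0f}x more time.")
```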
Like many of the other steps in the pipeline, the first step in learning how to create beautiful renders is to determine which software you want to use. Software such as Maya, 3ds Max, CINEMA 4D, and Softimage are all capable of creating beautiful renders. A great way to figure out which one interests you is to start by simply checking out the interfaces and workflows in some of our tutorials. This will let you see how to move around in each application and get an idea of which one may interest you the most. Then grab a demo of the one(s) that piques your interest and dig in!
Depending on which software program you like the best, here are some tutorials
to get you started off on the right path:
Rendering in Maya
Rendering in 3ds Max
Rendering in CINEMA 4D
Rendering in Softimage

Compositing
A compositor is utilized most commonly in the VFX field. A compositor can combine separate images with a live-action background to create the illusion that all of the elements are really there. For example, if you've ever seen a behind-the-scenes look at a sci-fi movie, you may have noticed the actors standing in a strange green room. In the finished movie, it looks as if the actors are in a sci-fi city or some other location. It's the compositor's job to add all the separate elements to give the illusion that the actors are in another universe.
Compositing is an important role within the VFX pipeline; without compositors there would be no way to integrate 3D elements into a live-action scene.
Some of the most important software applications for compositing are NUKE and
After Effects. A great first step to becoming familiar with these powerful tools is to
follow along with one of these learning paths:


How to Get Started in NUKE


Compositing in After Effects
Learn more about understanding the basics of a 3D pipeline.

Final thoughts
Once you've figured out the path you want to pursue, it's up to you to become great at whichever path you choose to take. Great artists aren't born overnight; it can take years to truly master one of these subjects. Don't get discouraged if what you're making doesn't look like what Pixar is producing. Remember that their movies take hundreds of experienced artists years to create. Practice, practice, practice, and you will get better.
One of the best ways to get better is to get feedback from fellow artists, so find a community that you can share your work in. It's important that you can take constructive criticism well; don't take it personally. Visit our forums to post your work in progress and get feedback from fellow artists.
When you're ready to dive in, make sure to take advantage of training on every step of the 3D pipeline. Keep learning more about jumping into 3D by choosing your first 3D application and getting a better understanding of the 3D pipeline.


Understanding a 3D Production Pipeline - Learning The Basics


The basics of a 3D production pipeline are essential for any student or new artist looking to either start learning the entire production pipeline or focus on one of its steps. This is what a typical project will go through from start to finish.
Pre-production: This is the phase where the direction of the project first takes
form. Concepts for style, look, sets, characters and much more are determined to
help everyone and everything make it through the pipeline.

Modeling: Assets to be used in anything from animations to VFX shots are all
modeled from scratch, or adapted from other models, using a variety of
techniques to meet project requirements. This phase takes all of the concepts
from pre-production and starts bringing them to life. Assets are usually modeled
with the style or concepts set forth in the pre-production phase. If you have a
sculpting application, like ZBrush or Mudbox, you'll also be able to digitally sculpt
assets or add more detail to your models.
Modelers often start with a completely empty 3D scene and build up the 3D
geometry to look like anything from simple props, or environments, to complex
characters. A 3D model is made up of a series of points called vertices that are
connected to form a mesh. These vertices have all been meticulously placed by a
3D modeler. It's one of the first and most important steps in the 3D pipeline
because it is essentially the creation of the assets that all of the other steps in the
3D pipeline will use.

Painting and Texturing: Your awesome models, or assets, reach the next step
where color and textures will take over that gray look, known as a default shader.
This step is where you will learn about materials, shaders, textures, maps and all
of the ways you can add physical textures and color to your models.
A texture artist will work with what is referred to as a shading material that, when
applied to a 3D model, gives the artists the ability to control things like color,
reflectivity, shininess, and much more. This way, what was once a 3D model with
a solitary color can be transformed into a 3D model that looks a lot more realistic
with colors and materials applied.
If someone else is rigging and animating the model, the model might also be
getting rigged at the same time you or someone else is texturing it.

Rigging: Your models are ready to move, but how do you get them ready to jump
across the screen? Rigging, like all steps of the pipeline, is an important step in
making this happen. By creating rigs for your characters and objects that move,
you or an animator can control the movement to create life-like or stylized
animations.
Before an animator can begin the animation process the computer needs to be
told how the 3D models can move. For example, should a 3D model that looks
like an arm be able to bend at the elbow like a realistic arm? Or can it be
stretched into wacky shapes like a cartoon character's arm? Setting this up and
telling the computer the range of motion for each part of the 3D model is what the
rigging process accomplishes. This is done by creating control points on the 3D
model that an animator can bend and deform to create the animations. Think of it
as the skeleton for the 3D model.

Animation: In the animation phase, rigged assets are animated using controls to match the desired shot. A lot goes into creating seamless animations, but this is where you really see everything come together and get results you can show others. Using a timeline, an animator will set movement in frames that play back as an animation.
Animation can mean anything from adding motion to a piston for an engine all the way to creating the complex character performances you see in the latest 3D animated movies. 3D animation is essentially a digital version of 2D animation and uses the same concept as a flip-book. Instead of creating a new pose on each sheet of paper, though, 3D animators create poses on a series of still images referred to as frames. By creating a series of poses and playing them back over a certain number of frames, you create the illusion of movement. It is an animator's job to make the 3D characters and objects come to life in a believable way.
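As a minimal sketch of the frames-and-poses idea described above, the plain Python below (not tied to any particular 3D package) stores a few keyframed values and linearly interpolates the in-between frames, which is essentially what an animation timeline does for you:

```python
# Keyframes: frame number -> value of some animated attribute (e.g. a ball's height).
keyframes = {1: 0.0, 12: 10.0, 24: 0.0}

def value_at(frame, keys):
    """Linearly interpolate the attribute's value at an arbitrary frame."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    # Find the two keyframes surrounding the requested frame and blend between them.
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)
            return keys[lo] + t * (keys[hi] - keys[lo])

for f in (1, 6, 12, 18, 24):
    print(f"frame {f:2d}: {value_at(f, keyframes):5.2f}")
```

Real animation curves use eased (spline) interpolation rather than straight lines, but the principle of storing poses at keyframes and filling in the frames between them is the same.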

Dynamics: Closely tied to the animation step of the pipeline, dynamics allows you to create simulations of real-world forces, like a model shattering on impact with the ground. Dynamics saves you time compared to hand-animating everything, and it usually produces better results that you can fine-tune.

Lighting: As the name implies, lighting is the step where you control most of the light elements of your scenes and shots (you may also touch lighting during rendering, or even in a 2D application like Photoshop after creating the render). Lighting lets you control everything from where the sun sits in a shot to how much glow a light in the scene might have. While it sounds easy, and it will be with practice, lighting can add exactly the feel you want a shot to portray.

Rendering: Closely coupled with lighting, and sometimes even texturing, is rendering. In the rendering step of the pipeline, you take your scene, or what is seen through the camera you set up, and output files for a variety of uses: your final shot, your animation, or a final beauty pass of an asset for your portfolio. Rendering can take anywhere from a few seconds to hours or even days, depending on the quality of the render, the complexity of the scene, and the computer it is being rendered on. To render a scene you would first set up an environment, then tweak the render settings to add shadows and adjust the quality until you get the desired end result.

Key 3D Modeling Terms Beginners Need To Master

Starting your 3D modeling journey is an exciting and rewarding experience. As you begin to learn and practice, there are essential terms you need to know and remember to grow your modeling skills. Use this resource to learn key terms and as a reference when creating your own 3D models.

Polygon geometry
A polygon is a three- or four-sided plane of geometry that makes up a face on an object. Polygons are the most commonly used geometry type in 3D. While polygons are commonly used for all types of objects, creating very smooth surfaces with polygons means that you'd need to add a lot more geometry than you would with either NURBS or subdivision surfaces.

NURBS surfaces
NURBS stands for non-uniform rational b-spline. NURBS splines are curves used to create smooth, minimal surfaces and geometry. NURBS are commonly used for very smooth objects because they don't require as many points to create the same look as polygon geometry would. A NURBS surface always has four sides that are defined by control points.

Subdivision surfaces
Subdivision surfaces, which are sometimes referred to as NURMS (non-uniform rational mesh smooth), are closely related to polygonal geometry. Subdivision surfaces use an algorithm to take polygon geometry and smooth it automatically, displaying the original polygon cage around the smoothed subdivision surface. You can think of a subdivision surface as a mix of polygonal and NURBS geometry.

Vertex
A vertex is simply a point in 3D space. Connecting three or more vertices creates a face. These points can be manipulated to create the desired shape.

Edge
An edge is the line created by connecting two vertices. Edges can be used to transform and define the shape of the model.

Face
When three or more edges are connected, the face is the filled space between the edges and makes up what is visible on a polygon mesh. Faces are the areas of your model that get shading materials applied to them.
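To make the vertex, edge, and face relationship concrete, here is a tiny illustrative sketch in plain Python (not any particular package's file format): a mesh stored as a list of vertex positions plus faces that index into that list, with the edges derived from each face.

```python
# One quad face, stored the way most 3D applications store polygon meshes:
# a list of vertex positions, and faces that are lists of indices into it.
vertices = [
    (0.0, 0.0, 0.0),   # vertex 0
    (1.0, 0.0, 0.0),   # vertex 1
    (1.0, 1.0, 0.0),   # vertex 2
    (0.0, 1.0, 0.0),   # vertex 3
]
faces = [
    (0, 1, 2, 3),      # a single four-sided face (a quad)
]

def edges_of(face):
    """Each pair of neighboring vertices in a face forms an edge."""
    return [tuple(sorted((face[i], face[(i + 1) % len(face)])))
            for i in range(len(face))]

print(edges_of(faces[0]))   # [(0, 1), (1, 2), (2, 3), (0, 3)]
```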

Topology : [tuh-pol-uh-jee] : noun
Whatever type of geometry you use, it will be built either from NURBS or from points, edges, and faces. The way these components are connected together, and the way they flow around the 3D object, is the topology. You can think of topology as the type of polygon faces, the type of vertices, and the flow of the edges.

Triangle
A triangle is the simplest polygon. It is made up of three sides or edges connected by three vertices, making a three-sided face. When modeling, triangles are a polygon type that is typically avoided. When creating complex meshes, triangles tend to pose a problem when subdividing geometry to increase resolution, and when a mesh will be deformed or animated.

Quad
A quad is a polygon made up of four sides or edges that are connected by four vertices, making a four-sided face. Quads are the polygon type that you'll want to strive for when creating 3D models. Quads will ensure your mesh has clean topology and that your model will deform properly when animated.

N-Gon
An n-gon is a polygon that is made up of five or more sides or edges connected by five or more vertices. It's important to keep in mind that while an n-gon typically refers to a five-sided polygon, it's not limited to just five sides. An n-gon should always be avoided; n-gons often pose problems at render time, when texturing, and especially when deforming for animation.
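As a small illustration of inspecting a mesh's topology, the hypothetical helper below counts how many triangles, quads, and n-gons a list of faces contains, using the same faces-as-index-lists representation sketched earlier:

```python
def count_face_types(faces):
    """Classify faces by vertex count: 3 = triangle, 4 = quad, 5 or more = n-gon."""
    counts = {"triangles": 0, "quads": 0, "ngons": 0}
    for face in faces:
        if len(face) == 3:
            counts["triangles"] += 1
        elif len(face) == 4:
            counts["quads"] += 1
        else:
            counts["ngons"] += 1
    return counts

# Example: two quads, one triangle, and one five-sided n-gon.
faces = [(0, 1, 2, 3), (3, 2, 4, 5), (5, 4, 6), (6, 4, 7, 8, 9)]
print(count_face_types(faces))   # {'triangles': 1, 'quads': 2, 'ngons': 1}
```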

Extrude
Extruding is one of the primary ways of creating additional geometry on a mesh. The extrude command allows you to pull out extra geometry from a face (called a polygon in 3ds Max), edge or vertex. For instance, you can use the extrude command on the face of a simple cube to pull out the geometry needed to create fingers. These additional extrusions can be edited and manipulated just like any other area of the mesh.

Edge Loop
An Edge Loop is a series of edges connected
across a surface, with the last edge meeting the
first edge, creating a ring or loop. Edge loops are
especially important for maintaining hard edges
in a mesh, and also for more organic models.
For example, in order for an arm to deform properly
there will need to be edge loops on each side of the
elbow joint so there is enough resolution.

Beveling
Beveling is the process of chamfering, or creating rounded edges on a mesh. Beveling expands each vertex and edge into a new face. In the real world, objects rarely have completely hard edges, so beveling helps to lose some of the computer-generated look that comes with 3D modeling.

Pivot point
The pivot point is the point on a 3D object from which any rotation, scale, or move that you perform will occur. The pivot point can be moved to any position on the model. For example, placing the pivot point on the hinges of a door will tell the computer where the door should rotate from.

Normals
Surface normals are used by your 3D application to determine the direction that light will bounce off of geometry. This is very helpful for controlling how the light reacts to certain materials on your 3D objects.
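A face normal can be computed from a face's corner points with a cross product. Here is a minimal plain-Python sketch; every 3D application does this for you, so it is purely illustrative:

```python
def face_normal(a, b, c):
    """Unit normal of a face from three of its corner points."""
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    # The cross product of two edge vectors is perpendicular to the face.
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return (n[0] / length, n[1] / length, n[2] / length)

# A face lying flat on the ground plane has a normal pointing straight up.
print(face_normal((0, 0, 0), (1, 0, 0), (1, 0, -1)))   # (0.0, 1.0, 0.0)
```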

Instances
When working with a 3D set you will often need to create duplicates of a single object, whether it's hundreds of trees or fence posts. Doing this can greatly increase render time, however, because the computer has to calculate all of this new geometry. Instead of creating a duplicate of an object, you can create an instance.

Instances are copies of objects that derive all their information from the original object; because of this, the computer only has to calculate the geometry for the original object. It's important to keep in mind that you can't edit an instance's shape on its own. If you edit the shape of the original object, all of the instances update to reflect that change.

Construction History
While you're working on your 3D models you will likely use a wide range of tools to get the desired result. For example, you may need to extrude many different faces or bevel the edges of the model to create a particular shape. Most 3D applications keep track of all these actions in what is called the construction history. This displays a list of every tool you've used on your 3D model, in the order that you used them.

If you need to go back and adjust the settings of a tool you used, you can find it in the construction history. Keep in mind that as your construction history starts to stack up it will slow down your computer, so you will need to delete your construction history periodically.

Digital Sculpting
When creating 3D models in an application like Maya, the process involves manipulating vertices and edges to get the desired look. While this works, it can be hard to get the fine detail that is often required, especially in organic models. Digital sculpting works around this issue by allowing you to create your 3D meshes in much the same way a traditional sculptor would. You can interactively push and pull areas of your model, and create details like wrinkles and scratches, without ever having to select an edge or vertex.

Base Mesh
Oftentimes a modeler will create a low-resolution base mesh in a program like Maya and import it into a digital sculpting application like ZBrush or Mudbox to be able to create those finer details, producing a high-polygon mesh.

To learn some more common terminology used in 3D modeling, be sure to check out the free CG101 Modeling course. Use this reference guide to help you start creating your own models and follow along with training.

Essential 3D Texturing Terms You Need to Know

If you are new to the 3D texturing process, then chances are you've heard some terms tossed around that you might not fully understand. This article will go over some of the most common texturing terminology you are likely to encounter, so you will be more comfortable when deciding which map to use or when someone references a term you haven't heard before.

Texture Mapping
To create a surface that resembles real life, you need to turn to texture mapping. The process is similar to adding decorative paper to a white box. In 3D, texture mapping is the process of adding graphics to a polygon object. These graphics can be anything from photographs to original designs. Textures can help age your object and give it more appeal and realism.

Shaders
A shader describes the entire material on an object: how light is reflected, how it's absorbed, translucency, bump maps and more, which you will learn about a little later on in this post. Shaders and textures can often be confused, but a texture is something that gets connected to a shader to give the 3D object its particular look.

[Figure: an example character with a body-and-head texture map, tongue, teeth and tail texture maps, and an eye shader.]

Specularity
Specularity defines how a surface reflects light. It is basically the texture's reflection of the light source, which creates a shiny look. Having the right specularity is important in defining what the 3D object's material is made from. For example, a shiny metal material will have high reflectivity, whereas a flat texture like cement will not.

Normals
A normal is an invisible line that points straight out from a polygon face or NURBS patch. These normals help the 3D application determine which side of a surface is the front and which side is the back. The correct normal orientation is important, especially when rendering, because most render engines will not render backward-facing normals.

UV Mapping
A 3D object has many sides, and a computer doesn't know how to correctly put a 2D texture onto the 3D object. A UV map is basically the 3D model stretched out into a flat 2D image. Each face on your polygon object is tied to a face on the UV map. Placing a 2D texture onto this new 2D representation of your 3D object is much easier.
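In practice, each vertex stores a UV coordinate in the 0-1 range, and the renderer uses it to look up a pixel in the 2D texture. Below is a minimal, hypothetical look-up in plain Python; real renderers also filter between neighboring pixels rather than grabbing a single one.

```python
def sample_texture(pixels, width, height, u, v):
    """Map a UV coordinate (each component in the 0-1 range) to a pixel.

    `pixels` is assumed to be a row-major list of (r, g, b) tuples,
    width * height entries long.
    """
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return pixels[y * width + x]

# A tiny 2x2 "texture": red, green, blue, white.
texture = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(sample_texture(texture, 2, 2, 0.75, 0.25))   # (0, 255, 0) -- second pixel, first row
```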

Transparency Maps
Transparency maps are grayscale textures that use black and white values to signify areas of transparency or opacity on an object's material. For example, when modeling a fence, instead of modeling each individual chain link, which would take a significant amount of time, you can use a black and white texture to determine what areas should stay opaque and what should be transparent.

Bump Maps
A bump map gives the illusion of depth or relief on
a texture without greatly increasing render time.
For example, the raised surface on a brick wall
can be faked by using a bump map. The computer
determines where raised areas on the image are by
reading the black, white and grey scale data on the
graphic. In other words, bump maps encode height
information using black and white values.

Normal Maps
A normal map creates the illusion of detail without having to rely on a high poly count. For example, a character can be detailed in a sculpting program like ZBrush, and all of that information can be baked into a normal map and transferred onto a low-poly character, giving the illusion of detail without increasing the actual poly count of the model. Game studios utilize normal maps often because they need to stay within a tight polygon budget but still need a high level of detail.

Normal maps use RGB values to signify the orientation of the surface normals. The information in the red, green and blue channels of the normal map corresponds with the X, Y and Z orientation of the surface. Normal maps can typically capture more detailed information than bump maps.
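The RGB-to-XYZ mapping is simple to state: each channel stores a value from 0 to 255 that is remapped into a -1 to 1 component of the normal vector. Here is a hedged plain-Python sketch of the decode; the exact axis conventions (for example, which way green points) vary between engines and tools.

```python
def decode_normal(r, g, b):
    """Turn one normal-map pixel (0-255 per channel) back into a unit-length normal."""
    # Remap each channel from the 0-255 range to the -1..1 range, then normalize.
    n = [(c / 255.0) * 2.0 - 1.0 for c in (r, g, b)]
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)

# The typical "flat" normal-map color (128, 128, 255) decodes to a normal
# pointing straight out of the surface.
print(decode_normal(128, 128, 255))   # approximately (0.0, 0.0, 1.0)
```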

Baking
In a typical 3D scene you will shade, texture and light objects to create the exact look that you want, and then you render. To shorten render times, you can bake all the materials, textures and illumination information into an image file. For instance, you could bake all the lighting information directly onto an existing texture, render it once, and then delete the actual lights used in the scene. This is great for games, where a light would otherwise need to be recalculated every new frame.

[Figure: the same texture shown without lights (color map only) and with the lighting baked in.]

Now that you have familiarized yourself with these common texturing terms, you're one step closer to building textures for 3D models. See them in action in the CG101: Texturing tutorial before taking the leap into any texturing tutorials.

Key 3D Rigging Terms To Get You Moving

In its simplest form, 3D rigging is the process of creating a skeleton for a 3D model so it can move. Most commonly, characters are rigged before they are animated, because if a character model doesn't have a rig, it can't be moved around. The rigging process can become very technical and seem overwhelming at times, but after a little practice you'll be creating great rigs in no time.

Joints
Sometimes called bones, joints for rigging work in basically the same way as the joints in a human body. Joints are the points of articulation you create to control the model. For instance, if you were to rig a character's arm you would want to place a joint for the upper arm, another joint for the elbow and another joint for the wrist, which allows the animator to rotate the arm in a realistic way.

Driven Keys
To speed up the animation process for the animators, a rigging artist can utilize driven keys when rigging a character. Driven keys allow you to use one control or object to drive multiple different objects and attributes. For example, a driven key can control the entire fist position of a hand with just one single control.

A driven key contains two parts: the driver and the driven. The driver is the object in control of the animation. The driven is the objects and attributes that are being controlled by the driver. Typically, for regular keyframes, an attribute has values keyed to time in the time slider. For a driven key, the attribute instead has values keyed to the value of the driving attribute. The driver can be another object or, as in the fist example, a control slider.
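Conceptually, a driven key is just a curve that maps the driver's value to the driven attribute's value, instead of mapping time to a value. The hypothetical plain-Python sketch below uses straight lines between keys; real packages interpolate with editable animation curves.

```python
# Driven keys for a hypothetical fist control: as the driver slider goes
# from 0 to 10, a finger's curl rotation goes from 0 to 90 degrees.
driven_keys = [(0.0, 0.0), (10.0, 90.0)]   # (driver value, driven value) pairs

def evaluate_driven(driver_value, keys):
    """Return the driven attribute's value for a given driver value."""
    keys = sorted(keys)
    if driver_value <= keys[0][0]:
        return keys[0][1]
    if driver_value >= keys[-1][0]:
        return keys[-1][1]
    for (d0, v0), (d1, v1) in zip(keys, keys[1:]):
        if d0 <= driver_value <= d1:
            t = (driver_value - d0) / (d1 - d0)
            return v0 + t * (v1 - v0)

print(evaluate_driven(5.0, driven_keys))   # 45.0 -- the finger is half curled
```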

Kinematics : [kin-uh-mat-iks] : noun
A description of how something moves. FK (Forward Kinematics) means that you animate bones by moving down the hierarchy; IK (Inverse Kinematics) allows you to animate going up the hierarchy.

FK (Forward Kinematics)
Animate bones by moving down the hierarchy, or forward. Great for smooth, arcing movements. Forward Kinematics means your character rig will follow the hierarchical chain. This gives you more control over your chain, but it also means you'd need to position each joint in your chain independently of the others. For example, with FK, if you positioned the character's hand, the rest of the arm wouldn't follow like it does with IK. Instead, you would need to position each joint independently, starting with the upper arm, then the elbow and then the wrist. This obviously takes more time than IK, but it can give the animator much more control over the poses.

IK (Inverse Kinematics)
Animate bones by going up the hierarchy. Great for keeping the hand planted while animating the upper body. Inverse Kinematics means that a child node within your rig's hierarchy can influence the movement of its parents. For example, if you use IK for your character's arm, you can position the character's hand and the rest of the arm chain will be calculated for you. This allows the animator to animate independently of the chain's hierarchy. Because of this, IK is great when a character's arm needs to stay planted on something, for example when pushing against a wall or swinging on a bar.

Most times riggers will incorporate both FK and IK into the rig to meet the animator's needs. Learn more about IK and FK in the post Demystifying IK and FK for Animators (http://blog.digitaltutors.com/ik-and-fk-demystified-for-animators).
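To make the FK/IK distinction concrete, here is a small plain-Python sketch of a two-bone arm in 2D. The FK function turns joint angles into joint positions, while the IK function does the reverse and calculates the angles needed to put the hand on a target. This is the standard two-bone, law-of-cosines solution; real rigs add 3D rotation, joint limits and pole vectors on top of it.

```python
import math

def fk(shoulder_angle, elbow_angle, upper_len=1.0, fore_len=1.0):
    """Forward kinematics: joint angles (radians) -> elbow and hand positions."""
    elbow = (upper_len * math.cos(shoulder_angle),
             upper_len * math.sin(shoulder_angle))
    hand = (elbow[0] + fore_len * math.cos(shoulder_angle + elbow_angle),
            elbow[1] + fore_len * math.sin(shoulder_angle + elbow_angle))
    return elbow, hand

def ik(target_x, target_y, upper_len=1.0, fore_len=1.0):
    """Inverse kinematics: hand target -> shoulder and elbow angles (radians)."""
    dist = math.hypot(target_x, target_y)
    # Clamp so an unreachable target just makes the arm stretch toward it.
    dist = max(abs(upper_len - fore_len), min(dist, upper_len + fore_len))
    cos_elbow = (dist ** 2 - upper_len ** 2 - fore_len ** 2) / (2 * upper_len * fore_len)
    elbow_angle = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder_angle = math.atan2(target_y, target_x) - math.atan2(
        fore_len * math.sin(elbow_angle), upper_len + fore_len * math.cos(elbow_angle))
    return shoulder_angle, elbow_angle

# Ask IK to put the hand at (1.2, 0.8), then verify the result with FK.
angles = ik(1.2, 0.8)
print(fk(*angles)[1])   # hand position, approximately (1.2, 0.8)
```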

Constraint
Constraints are very important in both the rigging and animation process. Typically your 3D application will have several options for constraining. Constraints limit an object's position, rotation and scale based on the attributes of the parent object. For example, by taking two separate spheres, applying a parent constraint, and then deciding which is the parent and which is the child, you can move just the parent and the other sphere will follow.

Control Curves
Control curves are created by the rigger to assist the animator in manipulating joints within the rig. Typically a rig consists of many components that need to be manipulated to move the character into the desired pose. This can be very difficult to do without control curves, because the animator would need to hide the mesh to see the skeleton within the character and try to determine which joint manipulates the elbow, for example. Control curves are typically simple NURBS curves placed outside of the character so the animator can easily select the curve to position the character instead of the actual joint.

Deformers
Deformers are often used by modelers, but they are also extremely helpful for enhancing your rig. Deformers contain algorithms that can move large sections of vertices on a model to produce organic shapes. For example, when rigging a character you can utilize something like a cluster deformer, which allows you to manipulate a large section of vertices using just one single control.

Skinning
Skinning is the process of taking the joints or bones of the rig and binding them to the actual 3D mesh. When the joints are bound to the 3D mesh, you can move the joints and the mesh will follow. Without skinning the mesh to the joints, the joints will have no influence on the actual 3D model.

Weight Painting
When a mesh is bound to the skeleton, the computer doesn't know how much influence each joint should have over each vertex, so it averages the weights out based on the distance from the joint to the mesh. Think of weight painting as a way to set how much influence a joint has on a particular area of the model and to correct the deformations on the 3D mesh. For example, if the leg joint has too much influence on the model, it might affect the torso area, giving you unrealistic results.
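The math behind skinning and weight painting is usually linear blend skinning: every vertex's final position is a weighted average of where each joint would carry it, and painting weights simply edits those per-vertex numbers. Below is a simplified plain-Python sketch in 2D with rotations only; production skinning does the same thing with full 3D transforms.

```python
import math

def rotate(point, pivot, angle):
    """Rotate a 2D point around a pivot by `angle` radians (what a joint does)."""
    px, py = point[0] - pivot[0], point[1] - pivot[1]
    c, s = math.cos(angle), math.sin(angle)
    return (pivot[0] + px * c - py * s, pivot[1] + px * s + py * c)

def skin_vertex(vertex, joints, weights):
    """Linear blend skinning: blend each joint's effect by its painted weight."""
    x = y = 0.0
    for (pivot, angle), weight in zip(joints, weights):
        jx, jy = rotate(vertex, pivot, angle)
        x += weight * jx
        y += weight * jy
    return (x, y)

# An elbow-like setup: joint 0 (shoulder) stays still, joint 1 (elbow) bends 90 degrees.
joints = [((0.0, 0.0), 0.0), ((1.0, 0.0), math.pi / 2)]
vertex = (1.5, 0.0)                                  # a point on the forearm
print(skin_vertex(vertex, joints, [0.0, 1.0]))       # fully follows the elbow: ~(1.0, 0.5)
print(skin_vertex(vertex, joints, [0.5, 0.5]))       # half influence each: ~(1.25, 0.25)
```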

Facial Rigging
When creating complex character rigs, the facial rig setup is often a whole different monster. A typical joint or bone setup doesn't work well for a facial rig, aside from a joint for the jaw bone, because facial movement often requires very stretchy and organic motion. Instead of a normal joint setup, facial rigging usually relies on deformers (mentioned above) and blend shapes.

Blend Shapes
A blend shape, or morph depending on your 3D application, allows you to change the shape of one object into the shape of another object. When rigging, a common use for blend shapes is to set up poses for facial animation. These might be lip-sync poses or more complex expressions like a smile or frown. You can tie all of these new poses into the original face mesh and have them operate from a single control slider.

For example, if you want to raise an eyebrow, you can model a face pose with one eyebrow raised, connect it to a blend shape, and use a slider with a value of 0 to 100 to raise or lower the eyebrow. This is a great way for the animator to quickly make face poses without having to move individual facial controls around. There are some downsides to using blend shapes for facial poses, because their editability can be limited, so riggers will often give the animators both blend shapes and traditional control points to use in conjunction.
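Under the hood, a blend shape is usually just a per-vertex linear mix between the base mesh and the target pose, scaled by the slider value. A minimal plain-Python sketch with one target and a 0-1 weight:

```python
def apply_blend_shape(base, target, weight):
    """Blend each vertex from the base pose toward the target pose.

    `base` and `target` are matching lists of (x, y, z) vertex positions;
    `weight` is the slider value, 0.0 (base pose) to 1.0 (fully in the target pose).
    """
    return [tuple(b + weight * (t - b) for b, t in zip(bv, tv))
            for bv, tv in zip(base, target)]

# Two vertices near an eyebrow: the "brow raised" target moves them up in Y.
neutral_face = [(0.0, 1.0, 0.0), (0.2, 1.0, 0.0)]
brow_raised  = [(0.0, 1.2, 0.0), (0.2, 1.3, 0.0)]

print(apply_blend_shape(neutral_face, brow_raised, 0.5))
# approximately [(0.0, 1.1, 0.0), (0.2, 1.15, 0.0)] -- the brow is halfway raised
```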
Keep learning with creative professionals how to use these terms and topics, and develop your techniques through hands-on experience with introductory to advanced-level 3D rigging tutorials to become a proven rigging artist. You can also find free 3D rigging tutorials to help you get started.

Modeling with Quads or Triangles: What Should I Use?

As a modeler, you've probably already encountered the internal debate: quads or tris? The truth is, while it can come down to preference, there are some key advantages to using quads to create your models. It's also not the end of the world, or of your model, if you use both, but it is recommended to use as few triangles as possible to save yourself some major headaches down the line. Let's cover why quads are so beneficial for your work.

Understanding the different types of polygons

Triangle
A triangle is the simplest polygon, made up of three sides or edges connected by three vertices, making a three-sided face. When modeling, triangles are a polygon type that is typically avoided. Triangles tend to pose a problem when subdividing geometry to increase resolution and when a mesh will be deformed or animated.

N-Gon
An n-gon is a polygon that is made up of five or more sides or edges connected by five or more vertices. It's important to keep in mind that while an n-gon typically refers to a five-sided polygon, it's not limited to just five sides. An n-gon should always be avoided; n-gons often pose problems at render time, when texturing and especially when deforming for animation.

Quadrilaterals (Quads)
A quad is a polygon made up of four sides or edges that are connected by four vertices, making a four-sided face. Quads are the polygon type that you'll want to strive for when creating 3D models. Quads will ensure your mesh has clean topology and that your model will deform properly when animated.

So why choose quads?

When modeling with quads, the wireframe will have a much cleaner look and the model will be easier to navigate and edit. When you spend hours working on a project, you deserve to show it off, but if the wireframe is messy you become very limited in what you can put out into the world.

Edge loops
Edge loops are typically a continuous loop with no determined start or end point: if you were to follow a loop from a highlighted vertex, you would eventually end up back at that exact vertex. Edge loops are helpful for adding detail such as wrinkles or folds, and they can also be used to help define how sharp an edge is. If an edge loop runs into a triangle, the loop has to end. This breaks the flow of the line, and it's no longer a loop.

Sculpting
If you want to add that extra little detail to your model, you had better use quads. If you plan on taking your model into a sculpting application such as ZBrush or Mudbox, it is best to avoid triangles as much as possible. Sometimes you may need to subdivide the geometry 4-5 times, pushing your model to over a million polygons. This is why you want to work with a predictable quad-based mesh. This also helps you build a lower-resolution version and accent the model using edge loops.

When you use quads you create clean, sleek lanes of polygons that are easy to follow through the model and provide beautiful edge flows that can be easily modified by you or a team member. Someone can convert a model made up of quads to triangles more easily than they can convert a model made up of triangles to quads. Having clean polygons also makes for a less distracting wireframe that you may want to overlay onto a clay-shaded model or even a textured asset.

Subdividing
When subdividing quads, your results are fairly predictable. You have rows and columns made up of four-sided polygons, and it is easy to see where those polygons will be split in half once the mesh is subdivided. When you subdivide triangles, things tend to get messy, and there really isn't a visual flow to the model.

Smoothing
If you plan on smoothing your geometry or using a quick smooth-preview feature, triangles will produce anomalies across the surface of the mesh. Because of the uneven number of vertices, a triangle can cause blemishes or pinch the geometry. Something similar can happen to geometry created with n-gons.
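The "over a million polygons" figure mentioned above comes straight from how quad subdivision multiplies faces: each level splits every quad into four. A quick back-of-the-envelope calculation, with the 5,000-face base mesh being an assumed example size:

```python
# Each subdivision level splits every quad into 4 smaller quads.
base_faces = 5_000                  # assumed size of a base mesh
for level in range(6):
    print(f"level {level}: {base_faces * 4 ** level:,} faces")
# By level 4 the mesh is already 1,280,000 faces -- past the "over a million" mark.
```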

Animation

Quads produce cleaner deformations. Typically, artists will focus on areas where there will be a lot of bending and deforming, such as knees, elbows and wrists, and provide a little extra geometry there to benefit the rig and animation. With quads, this is easily accomplished by adding or manipulating edge loops. If you have a cluster of triangles in these areas, it is harder to add or remove the loops that would help the animator. With triangles, it is also harder to see a clean flow of geometry, and they tend to produce sharp angles that can harm the mesh's appeal. When it comes to animation, appeal is important both for the model and for the artist who provided the mesh for animation.

What about hard surface models?

When modeling hard surface models, the preference is still to use quads. The main reason is so that you can quickly add and remove edge loops. When it comes to UV mapping your model, it is also easier when using quads: there are fewer edges to clutter the texture editor when placing UVs.
Triangles are not a bad thing; they just have to be used strategically throughout your model. When using them on organic models, it is best to hide triangles where they will not be visible or in areas where very few deformations are happening.

It's always good to look at how other artists solve topology issues in their work. If you want to go through a series of exercises exploring techniques to create the best topology in your own models, follow along with some of our popular courses:
Skill-Builder: Mastering Topology in Maya
Modelers Toolbox: Topology Tips
Retopology Techniques in Mudbox
Adding Facial Topology in 3ds Max
If you're just starting your 3D modeling journey or you need a refresher on important modeling terms, take a look at an article covering key 3D modeling terminology.

A good thing to keep in mind when working in quads is that you can always convert to triangles. There will be times when triangles are preferred; one example is the final mesh of a game asset or character. Even in these instances, most artists still prefer to work in quads and then convert their final model to a mesh built of triangles.

Becoming a 3D Animator: What It Takes and How to Get There


Becoming a professional 3D animator, like any profession or career, isn't something that's usually achieved in just a few weeks, months or even years. It's a career path that takes a great deal of love and dedication to the art form, but if you love it, then it's going to be completely worth it.
You'll find out very quickly whether character animation is something you want to pursue; it's at times frustrating, confusing and, above all, difficult. But it can also be the most rewarding art form, as you get to be the one to breathe life into your characters and make your ideas a reality for others to enjoy. If you have the passion for it, nothing can hold you back.
This article will give you an understanding of what it takes to become a 3D animator, as well as the steps to get there successfully.
The Job of a 3D Animator
A 3D animator's job is to bring inanimate 3D objects to life through movement. It's up to you, as the animator, to make these objects feel like they are alive and breathing. 3D animators can be found in everything from video games to movies, television and commercials. If there are 3D elements, chances are there's a 3D animator bringing them to life.
Some great examples of 3D animation can be found in just about every 3D animated movie since Toy Story. Those characters are just computer data, but you wouldn't believe it while watching, would you? To anyone watching, those characters are alive and thinking. It's the animator's job to make the audience forget that these 3D objects aren't actually real. You can think of the 3D animator as the puppeteer pulling the strings on the puppet.

The Characteristics of a 3D Animator


Animators need to be patient. As mentioned earlier, animation isn't something learned overnight; it can take a year or even longer before you really start getting the chops for it. So be patient with yourself and with the overall animation process. Animation takes a very long time to learn, and it also takes a very long time to do well. You may be working on a single ten-second animation for weeks and weeks, but that's often what it takes to create great animation. It should never be rushed.
Have you ever been outside and noticed someone walking in the most unique way? Sure, all people walk differently, but this person might have had a certain step to their walk that made it that much more interesting. If you notice things like that, good, because animators should always enjoy studying life. After all, animators must bring 3D objects to life, and the best way to get inspiration is to study real life. Animators often see the world much differently than the average person. For instance, if an animator sees someone interesting waiting in line at the movies, they might take mental notes on everything about them, from their posture and their movements to the way they talk, whereas a person unfamiliar with animation would probably not even take a second glance.
Animators are really still kids at heart. Whether they're animating a fight between two Transformers or animating Spider-Man in the latest movie, animators get to come up with unique moves that they've probably had in their minds since they were young. While animation is difficult, it's extremely fun and often requires you to find your inner child again. If you've ever found yourself play sword-fighting with your younger nephew and realized you're enjoying it more than he is, then you probably have the animation chops!
The Technical Skills of a 3D Animator
When it comes to 3D animation, it's important to have the technical skills. Of course, the computer doesn't create great animations automatically; the animator does. That being said, you're still going to be working in a complex piece of animation software like Maya, so you'll need to spend the time to learn it. Even though a 3D application is simply a tool for you to animate with, you still need to learn how to use that tool, because software like Maya is more complex than a pencil and paper.
A great place to start is with introductory animation tutorials like Introduction to Animation in Maya or Introduction to Animation in 3ds Max. With these tutorials, you'll be able to get up and running and comfortable with the software so you can spend more time animating.

Know Your Path


Knowing the path you want to take is very important. Do you want to work on animated movies like the ones Pixar and DreamWorks produce? Do you want to work on movies like Transformers and The Avengers? Or maybe you want to get into games? Whatever the case may be, you need to know your end goal.
Each industry is typically looking for a different style of animation. That being said, you don't want to start off by only animating big fight scenes just because you want to work on Transformers. Instead, you should be learning the fundamentals of animation, and then you can cater your demo reel to these different types of jobs once your skills reach that level.

Know Where to Start


Whether you want to work in movies or games, starting with the basics is the most important step. Get every book on animation you can find: The Illusion of Life, Timing for Animation, The Animator's Survival Kit. These books were written for 2D animation, but they still apply to 3D animation.
Each one teaches the core fundamentals of animation from the pioneers who refined it into an amazing art form. Study them over and over and, most importantly, learn the 12 principles of animation, which are the core techniques for creating great animations.

Know Where to Learn


If you're ready to take the next step to becoming a 3D animator, you can start with the great tutorials below to kick-start you in the right direction. With a learning path, you can begin with the basics of understanding the software and move your way up to advanced animation courses.
Another great way to enhance your skills is to find a community where you can share your work and find other animators like yourself. One of the best ways to learn is to get feedback on your work. It may be hard to take constructive criticism at first, but the more you do it the more you'll get used to it, and you'll quickly see how beneficial it is in pushing your skills further.
A great place to do this is an animation site like youanimator.com, a community-built website designed by animators for the sole purpose of giving constructive feedback. You can upload your animations to the site, and other animators can critique your shot by drawing directly on top of the video. The great thing about this site is that you must also give critiques on other people's work in order to receive feedback on your own. This is beneficial because you'll start to develop an eye for spotting problem areas in your own work by giving feedback on other animators' shots.
You can also post your animation work directly in the Digital-Tutors forums to get helpful feedback from other artists.
Learn to Act
Animators are essentially actors; it's up to you to create all the movements for the character, so learning to act is paramount. This doesn't mean you have to join acting classes. Of course, that would be beneficial, but you can also study acting through books and movies.
As an animator, you'll typically be given a simple line of dialogue, and you must create all the actions and movements the character is going to take. You have to come up with the acting, and your own emotions will show through the character. If you have bad acting skills, it'll translate into bad animation.
You can check out this very useful Animating a Dialogue Scene in Maya tutorial to get an understanding of how acting influences your animation choices.
Learn Body Mechanics
Another important thing you'll need to master is body mechanics. In order to create believable animation, you need to understand how the human body moves, as well as how animals move. One of the best ways to do this is to go outside and shoot video reference.
How does a person swing their arms when they walk? When do the weight shifts occur in a run? Having an understanding of these real-world principles will ensure that your animations are believable. You should build up a whole library of reference you can pull from when working on your next animation.
Here are some great in-depth tutorials that will teach you step by step how to animate a complex body mechanics shot so that you can learn how to apply these real-world principles to your own animations: Creating Walk Cycles in Maya, Body Mechanics and Animation in Maya: Pulling Objects, and Creating Run Cycles in 3ds Max.

Learn Good Communication Skills

As an animator, you're going to need good communication skills, as you'll work with many different departments. Creating a film, commercial or game is a very collaborative effort, and you'll need to be able to communicate your concerns and ideas clearly to your peers.
For instance, you might be working with a team of animators all tasked with animating several sequences together. You'll need to communicate how each individual shot is going to translate to the next in order to make it all come together and feel like the entire sequence was animated by a single person. While creating an animation is one animator's job, the production process is a team effort.
Practice, Practice, Practice
In order to become a successful animator, it really comes down to practice. As mentioned before, animation isn't something learned overnight; it's arguably one of the most difficult aspects of a 3D pipeline. It'll take lots of trial and error and most likely some frustration.
You may have heard that it takes a thousand bad drawings to get to the good drawings. The same goes for animation. The best thing you can do is simply practice. Practice implementing the animation principles over and over, and always find ways to push yourself further.

Push Yourself
As an animator, you should have a willingness to learn and be eager to try new things. Animation is never something that's truly mastered; there are always new things to discover, and it's never good to become complacent. Find new ways to enhance your skills, whether it's animating a type of creature you've never tried before or taking on a more subtle acting shot you're not used to.

Get Experience
Your dream job may be to work at a studio like Pixar or Infinity Ward, but that doesn't mean you should hold out until you get there. Just because a studio that's interested in you isn't Pixar doesn't mean you shouldn't work there. Any experience you can get is good experience.
Sure, it might not be your true dream job, but you're animating and you're improving your skills, and the more on-the-job experience you get under your belt, the more appealing you'll be to the bigger studios.
Level Your Expectations and Exceed Everyone Else's
It's important to keep in mind that you might not get the most amazing shots at your first job. You'll likely need to prove yourself, so be willing to take on any shot, no matter how small it may be. The animation supervisors will want to see how you handle yourself and how well you do on these simpler shots before giving you more complex ones.

For example, your first animation at a studio may be a quick 24-frame clip of something like a hand opening a door or a character turning to look at something. These might not be the animations the audience will gawk over, like a shot where you get to animate fighting robots, but you need to approach every shot you do as if they were.
Don't dismiss the shots the experienced animators hand off to the new hires as boring. Approach each one like it's the most important shot in the film. With every shot you do, the supervisors will see your eagerness and skill, and eventually more complex and exciting shots will be sent your way, though most likely not right off the bat.
You also need to remember that when you start animating professionally, you're not animating for yourself anymore. When you were learning, you were animating your own shots and your own ideas. Once you begin working at a studio, especially in movies, you're animating for the director. You're bringing their project to life, and it's up to you to make their idea and vision a reality.
That's not to say you shouldn't bring your own ideas to the table, because you will, but you need to be able to take feedback and criticism well. If the director doesn't like one of your choices and tells you to go in a different direction, chances are you will need to.

Animation is an extremely fun and challenging path to take. Now that you have an understanding of what it takes to become a successful 3D animator, it's up to you to take the next step.
Join a community to share your work with other animators and start learning with the many 3D animation tutorials available, including Maya animation tutorials, 3ds Max animation tutorials and CINEMA 4D animation tutorials.
If you have any questions or need some more advice, share in the comments below.

Common Terminology for 3D Animation


Jumping into animation presents its own language, which can leave you scratching your head at times if you're not quite sure what the terms mean. Use this list of the most commonly used animation terms to help you while learning and as you start creating your own animations.

Timeline
The timeline shows the frame numbers within your scene and can be adjusted to any frame length desired. This is where the frames can be seen and adjusted. You will also be able to play your animation directly in the 3D application from the timeline.

Keyframe
A keyframe is the basic building block of all animations. In 3D animation you create a keyframe to lock down a movement in time. When a keyframe is created it tells the computer where you want a change in movement to occur, and you need at least two keyframes for the computer to know what change you want to make.

Frame Rate
The frame rate is the number of frames per second (FPS). It's important to find out what frame rate your animation needs to be at before starting any animation, so you can be sure your animation will be timed correctly. For example, in film the frame rate is 24 frames per second, meaning 24 different images are displayed over the course of one second. As one of the more common frame rates, 24 frames per second is a good default if you're not sure what frame rate your project needs.

Poses
A pose in animation represents how the character is positioned. You can think of a pose the same way a statue is posed, except in animation there are many poses that make up the animation. If you were to freeze an animation at any point in time, whatever position the character is in could be considered a pose.

Line of Action
The line of action is an invisible line that can be drawn along a character's pose. Typically there are a few main lines for a pose: a C shape, a backwards C shape and an S shape. When posing your character you'll want to ensure you have a strong line of action that resembles one of these shapes to help you establish a dynamic pose. An unappealing line of action would be a simple straight line that runs from your character's head to their feet.

Blocking
Blocking is an animation technique where the most important storytelling poses are created to establish the placement of a character or object and how it will move in the scene. This technique is used very early on in the animation process and helps tell the story of the animation. Blocking is often the first step in pose-to-pose animation.

Breakdown
A breakdown describes how the character or object is going to get from one pose to the other. A breakdown can be considered a type of inbetween, but a very specific one. When you have a pose on frame 1 and a different pose on frame 10, the next step would be to add in a breakdown.

Inbetweens
Not to be confused with a breakdown, an inbetween fills in what is happening between the breakdowns for pose A and pose B. In computer animation the inbetweens are often created by the computer. In traditional 2D animation there were often assistants to the animators, called inbetweeners, who did this fill-in work.

Twinning
Twinning is a term used when one half of a character's body mirrors the opposite half, producing an unnatural symmetry. Twinning can be avoided by simply offsetting the pose on one half of the body ever so slightly from the other half. Avoiding twinning in your animation is one of the first steps in creating appealing posing, and it can be a very quick fix, often just a matter of adjusting the position of the arms or legs.

Moving Holds
A moving hold means that a character freezes, or moves very slightly, in a particular pose for whatever length of time you have set for it. Moving holds are used to help break up the movement and add to the emotion a character is trying to convey. They should be used sparingly and only where needed; too many moving holds can give the animation a stop-and-go feel.

Polish Pass
The polish pass refers to the very last step in an animation. This is the point when an animator adds the very small finishing touches to the work. Things like eye movements, finger adjustments and tracking arcs are all usually animated in the polish pass, after all of the main movements have been finalized.

Gimbal Lock
In computer animation, gimbal lock is the loss of one degree of rotation in a joint. It means a significant amount of rotation has passed the 180-degree mark and the computer doesn't understand which direction you want to rotate, which shows up as a very fast rotation hiccup when your animation is played back. To avoid gimbal lock, make sure you choose a rotation order that suits your animation; the rotation order lets you re-evaluate how each axis reacts as an object is rotated. Certain applications also include tools that reinterpret the rotation values causing gimbal lock in order to iron out discontinuities in animated rotation data.

Breaking Joints
Breaking joints refers to rotating a joint in the opposite direction to its normal bend. In the real world this wouldn't be physically possible without actually breaking the joint. In animation, bending joints the opposite way adds a lot of flexibility to a movement and is typically used for a more cartoony style of animation. Since you're only breaking the joint for a frame or two, it isn't noticeable when played back at full speed.
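To tie a few of these terms together, here is a minimal Maya Python sketch (it only runs inside Maya, since it uses the maya.cmds module) that sets the frame rate to film (24 FPS), defines a playback range on the timeline and creates two keyframes on a simple cube; the object, attribute and frame values are placeholders chosen purely for illustration.

```python
# Minimal sketch: keyframes, timeline range and frame rate in Maya.
# Run inside Maya's Script Editor; object and values are placeholders.
import maya.cmds as cmds

# Frame rate: 'film' corresponds to 24 frames per second.
cmds.currentUnit(time='film')

# Timeline: set the playback range to frames 1-48 (two seconds at 24 FPS).
cmds.playbackOptions(minTime=1, maxTime=48)

# A simple object to animate (placeholder geometry).
cube = cmds.polyCube(name='demo_cube')[0]

# Keyframes: at least two are needed to describe a change.
cmds.setKeyframe(cube, attribute='translateX', time=1, value=0)
cmds.setKeyframe(cube, attribute='translateX', time=48, value=10)

# The computer generates the inbetweens; play the timeline to see them.
```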

Learning basic 3D animation terminology will greatly help your early animation journey. Find more animation posts here and make sure to check out our growing library of 3D animation tutorials from industry professionals to really ramp up your learning.


12 Things Animator-Friendly Rigs Should Have

Rigging is a vital part of the 3D pipeline; without rigs there would be no way to animate the characters. As a rigger, it's your job to create flexible and intuitive control rigs that can handle any animation thrown at them. This article will teach you some of the key features every great character rig should include so you can start creating animator-friendly rigs that any character animator will enjoy using.
Clean Deformations

This is probably one of the most important aspects of a great character rig. A rig needs to deform properly and believably in every single area of manipulation. If a character is bending over, the stomach and chest all need to deform the way a real human's would.
Bad deformations stick out like a sore thumb, and not only look bad at animation time but also when rendered. To avoid this, you'll need to make sure you're painting the weights properly and in the right areas.
The character should come down from the modeling department with enough resolution to deform well, so it's up to you to ensure the characters you receive have the right edge flow and topology for a great rig. Before ever passing your rig on, you should have a strong testing phase to confirm that all the weights have been properly painted and the deformations look realistic, so your rig won't be kicked back by an animator due to deformation issues.

Clear Control Curves

In order for an animator to move the individual joints on a character, they'll need access to control curves to make the selection process much easier. The placement of your control curves should be clear on the rig; the animator should be able to tell exactly what a curve will influence on a particular part of the character without having to select it first.
The control curves should also be big enough to see so the animator can easily select them. Clear control curves ensure the animator spends less time figuring out what a control does and more time actually animating.
GUI Picker

A GUI (graphical user interface) picker is a great feature to include on a rig with hundreds of controls. While clear control curves are vital, the sheer number of controls needed on a character rig can make the selection process extremely difficult for the animator; a complex character rig can sometimes have thousands of controls. Having one place where all of these controls can be displayed is very beneficial.
GUI pickers are especially good for facial controls, because there can often be hundreds of controls all populating one small area of the face, making them extremely hard to find or select directly in the viewport. Giving the animator a representation of where each control lies on the character, and letting them select it in a separate window, can greatly speed up the animator's workflow.
However, it's important to keep in mind that the GUI picker should remain simple to use. Try not to have numerous submenus for one character, each with different controls. A GUI picker should speed up the control selection process, not slow it down.
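As a rough illustration of the idea behind a GUI picker, the sketch below builds a tiny Maya window whose buttons simply select named control curves. The control names (such as 'head_ctrl') are hypothetical placeholders, and a production picker would be far more elaborate, often with a 2D layout that mirrors the character.

```python
# Minimal GUI picker sketch for Maya: one button per control curve.
# Control names are placeholders; replace them with your rig's controls.
import maya.cmds as cmds

def select_control(control_name):
    """Select a single control if it exists in the scene."""
    if cmds.objExists(control_name):
        cmds.select(control_name, replace=True)

def build_picker(controls):
    window = cmds.window(title='Simple Picker')
    cmds.columnLayout(adjustableColumn=True)
    for ctrl in controls:
        # The lambda captures each control name; Maya passes one ignored argument.
        cmds.button(label=ctrl, command=lambda _, c=ctrl: select_control(c))
    cmds.showWindow(window)

build_picker(['head_ctrl', 'chest_ctrl', 'hips_ctrl', 'l_hand_ctrl', 'r_hand_ctrl'])
```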
Great Eye Controls

When it comes to character animation, the eyes are extremely important. The rig may be used in subtle acting shots, where the eyes can play a key role in selling the character's mood and emotions. There should be more than just an open and close control for the eyelids, because the animator will most likely need to do more with the eyes than blink.
Having several shapers along the lids will help push the expression of the eyes and sell the overall emotion the animator is trying to achieve. You can also incorporate squash and stretch controls in the eyes and lids to help push the fleshiness of a blink and to allow for more exaggerated animation.

IK and FK Options

In any character rig you create, there should always be the option to switch between IK and FK, whether in the arms or the feet. A great character rig gives the animators different options, because not all animators like using IK for every shot, and as a rigger you can never predict the types of animations your rig will be used for.
There should also be the ability to switch between IK and FK for the feet. While IK will be used for the feet most of the time, there are still moments when an animator may want to use FK to achieve the right look. Having IK and FK options for both the feet and the arms eliminates any possibility that your rig will be sent back because an animator prefers to work in FK.
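The exact way IK/FK switching is wired up varies from rig to rig, but as a simplified sketch of the concept, the Maya Python below builds a three-joint arm, adds an IK handle and drives the handle's ikBlend attribute from a custom attribute on a control curve. All of the names and positions are hypothetical, and a production setup would usually blend separate IK and FK joint chains instead.

```python
# Simplified IK/FK switch sketch: a custom attribute drives ikBlend (0 = FK, 1 = IK).
# Joint and control names are placeholders for illustration.
import maya.cmds as cmds

cmds.select(clear=True)
shoulder = cmds.joint(name='shoulder_jnt', position=(0, 5, 0))
elbow = cmds.joint(name='elbow_jnt', position=(2, 5, -0.5))
wrist = cmds.joint(name='wrist_jnt', position=(4, 5, 0))

# Rotate-plane IK solver from the shoulder to the wrist.
ik_handle = cmds.ikHandle(startJoint=shoulder, endEffector=wrist, solver='ikRPsolver')[0]

# A control curve that carries the switch attribute for the animator.
switch_ctrl = cmds.circle(name='arm_switch_ctrl', normal=(1, 0, 0))[0]
cmds.addAttr(switch_ctrl, longName='ikFk', attributeType='double',
             minValue=0, maxValue=1, defaultValue=1, keyable=True)

# ikBlend of 0 lets the joints follow FK rotations; 1 makes them follow the IK handle.
cmds.connectAttr(switch_ctrl + '.ikFk', ik_handle + '.ikBlend')
```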
Versatility
You never know what type of animation your character rig will be used for, so you should design your rig to be versatile. The rig should perform well for very subtle acting shots, as well as realistic body mechanics or more cartoony animations. You can incorporate stretchy limbs and squash and stretch controls where appropriate to give the animators the ability to exaggerate their animations when needed. Not every shot will be cartoony or subtle, but a character rig should be able to perform well and achieve the desired look no matter what type of shot it is.

Of course, depending on the project there will be times when your character rig will only ever be used for very realistic movement and will never need to achieve a cartoony style of animation. So it's important to have clear communication with the other departments about the limitations that need to be set for a rig and how much freedom you have.
Full Finger Controls

The finger controls are an extremely important aspect of a character rig that sometimes gets overlooked. The controls on the fingers should offer more than a simple open and close in one rotation axis. The hands are used to express emotions, and getting the right posing is extremely important.
A great character rig should allow each individual finger joint to rotate in the X, Y and Z axes. This allows the animator to incorporate things like drag and lead-and-follow into the finger animation, and it gives the animators a great deal of control over the fingers so they can get the perfect pose.
You can even take this a step further and add squash and stretch controls for the fingertips to get the compression that occurs when a finger is pressed up against another object, like a table.

Global Scaling

Having the ability to scale a character rig is great for ensuring the character will
look right in any environment. The animator should have the ability to completely
scale the entire rig, including the controls.
This will give the animator as much control over a shot as possible. For example,
animators are often required to cheat certain things to the camera to achieve the
look they want, and scaling a character down can be perfect for establishing
perspective between two characters in a shot.
Breathing Controls
Breathing controls are a great feature to incorporate into any character rig. If this is a living, breathing character, it will obviously need to appear to breathe, which means it will need to be animated to give that illusion. By including breathing controls you can speed up the animation process.
You'll give the animator the ability to quickly and easily create a breathing animation without having to move and scale the chest controls to simulate the effect. Providing a simple slider that controls the character's breathing means the animator can get a realistic result in a fraction of the time by keying that one slider.

Automation in Important Areas

Having places on a rig where simple tasks are automated is extremely beneficial to the animator. This can be an expression that simply controls the opening and closing of the eyelids to create a blink; the animator can then go in and fine-tune it as needed, but the overall pose is already there.
Another example would be an expression controlling the opening and closing of the hands to create a fist. Having these areas of a rig driven by a simple slider speeds up the animation process and allows the animators to quickly get into a pose and then adjust it further from there.
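One common way to set up this kind of automation in Maya is with driven keys, where a single custom attribute drives several other attributes. The sketch below is a minimal, hypothetical example: a 'blink' attribute on a control drives the rotation of an eyelid joint between an open and a closed pose. The node names and rotation values are placeholders, and a real face rig would drive many attributes from the same slider.

```python
# Minimal automation sketch: a 'blink' slider drives an eyelid joint via driven keys.
# Node names and rotation values are placeholders for illustration.
import maya.cmds as cmds

cmds.select(clear=True)
eyelid_jnt = cmds.joint(name='upper_lid_jnt', position=(0, 10, 0))
face_ctrl = cmds.circle(name='face_ctrl', normal=(0, 0, 1))[0]

# The slider the animator will key: 0 = eye open, 1 = eye closed.
cmds.addAttr(face_ctrl, longName='blink', attributeType='double',
             minValue=0, maxValue=1, defaultValue=0, keyable=True)

# Driven keys: blink 0 keeps the lid open, blink 1 rotates it closed.
cmds.setDrivenKeyframe(eyelid_jnt + '.rotateX',
                       currentDriver=face_ctrl + '.blink',
                       driverValue=0, value=0)
cmds.setDrivenKeyframe(eyelid_jnt + '.rotateX',
                       currentDriver=face_ctrl + '.blink',
                       driverValue=1, value=-35)

# The animator now keys only face_ctrl.blink and can still fine-tune the lid afterwards.
```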
Different Control Levels
As mentioned before, a complex character rig can sometimes have hundreds or even thousands of controls. This can bog down the computer and the speed at which an animator can navigate the viewport, making for a frustrating process. While not all rigs will have this many controls, it's important to have a lightweight version of the rig that eliminates this problem.
Animators will often want a lot of controls to be able to get the perfect pose, but that doesn't mean they all need to be visible at once, and they may not all be needed for every shot. Giving the animators the ability to switch to a lightweight version of the rig with fewer controls is vital for keeping a control-heavy rig from taxing the system they work on.

Pickwalking Ability
Pickwalking allows you to use the arrow keys to step through different controls on the character rig. For instance, if you select the wrist control you can pickwalk up to the elbow, then the arm.
While this might not be as vital as some of the other features listed above, it's still a great feature to include in your character rig. Even these very small additions are extremely important to the animator and can help speed up their workflow.
Taking the extra bit of time to include pickwalking will not only create a more intuitive rig, it will also ensure that the animators are able to spend more time animating. Take note that, depending on your program, pickwalking may be called something different.

Now that you know the key features every character rig should include, start incorporating them into your next project. Whether it's a simple addition like pickwalking or a more vital aspect like clean deformations, they all play a part in ensuring your character rig is the best it can be for any animation. If you want to learn more about character rigging, check out Introduction to Rigging in Maya, and continue learning with the hundreds of other 3D rigging tutorials.

Understanding Dynamics - the Powerful Effects That Can Make Your 3D Life Easier

Dynamics are an extremely powerful feature in any 3D application. Without them there would be no particle effects like smoke and fire, and no complex cloth simulations. It's no surprise that mastering dynamics is a vital step in your 3D skill set. They can make your life easier and greatly increase the speed at which you work. This article will give you a better understanding of dynamics so you can use them confidently in your next project.
Dynamics are a complex physics engine inside your 3D application; they describe how objects move using the rules of physics to simulate real-world forces. You specify the different actions you want your object to take, and the software figures out how to animate that object in the most realistic way. Dynamics are vital for creating realistic motion that would otherwise be extremely difficult and time consuming to achieve with traditional keyframe animation.
For instance, you could use dynamics to simulate a puck bouncing around in an air hockey game or a building toppling over into a pile of rubble. While simulating an air hockey puck may not seem very interesting, the range of effects that can be created in your 3D application with dynamics is substantial: everything from realistic fluid effects to explosions, smoke, fire and more.

Chances are the last blockbuster movie you saw used dynamics to create some of its special effects. Without dynamics, most of the jaw-dropping 3D effects you see wouldn't be possible. Working with a complex physics engine may seem daunting, and an understanding of physics might seem paramount, but in actuality you do not need to be a physics genius to create these types of simulations.
Dynamics are a type of animation, but they differ from keyframe animation in how they are calculated by the computer. Typically dynamics are calculated frame to frame, with the position of an object in each frame derived from its position in the previous frame. This differs from keyframe animation, where the object's position is determined by key values set at different frames. You can, however, bake your simulations out into regular keyframe animation, which allows you to edit the simulation with keyframes on the timeline.
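In Maya, for example, baking a simulation down to keyframes can be done with the bakeResults command. The sketch below is a minimal, hedged example that bakes the translation and rotation of an object named 'sim_object' over a frame range; the object name and frame range are placeholders to swap for your own scene.

```python
# Minimal sketch: bake a simulated object's motion into ordinary keyframes in Maya.
# 'sim_object' and the frame range are placeholders for your own scene.
import maya.cmds as cmds

cmds.bakeResults('sim_object',
                 time=(1, 120),                      # frame range to bake
                 simulation=True,                    # evaluate the sim frame by frame
                 attribute=['tx', 'ty', 'tz',        # translation channels
                            'rx', 'ry', 'rz'])       # rotation channels

# After baking, the motion lives on the timeline as keyframes and can be edited normally.
```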

Key Dynamic Features

If you've ever had to animate a collision, or any object that needs to feel like it's obeying the laws of physics, you know it can be extremely difficult. Something as simple as a die rolling across a table can take hours of tweaking keyframes to get it to look natural.
Dynamics can quickly and easily simulate this type of animation with what is called a rigid body, whether it's a line of dominoes falling or a wrecking ball demolishing a brick wall. If you were to animate each brick crumbling or each domino falling by hand it would be a giant task, but with rigid bodies it can be simulated by the computer in a realistic way in a fraction of the time. Rigid bodies are great for simulating animations that would otherwise take far too long with traditional keyframing.
Most 3D applications also have built-in effects that are great for quickly dropping in an effect that produces very nice results. For instance, Softimage, Maya and 3ds Max have built-in fire effects that can be emitted from any polygon or NURBS object. Play around with these effects in your 3D application to see how they work; each one has different attributes that can be fine-tuned to adjust everything from fire strength to emission, direction and more. There are also many other pre-built effects, like smoke and lightning.

Another great feature of dynamics is the ability to simulate cloth. In Maya you can quickly simulate a cloth material from any polygon object with nCloth, whether you want to create clothes that move and flow properly around your character or a tablecloth for a dining room. In 3ds Max and Softimage this feature is simply called Cloth. Keep in mind that working with a cloth simulation can use a lot of computing power; even with a fast computer, cloth dynamics at a high level of accuracy can take a very long time to process, so you may need to lower your simulation settings to a reasonable level.

One of the most powerful features in a dynamics system is particles. Particles can be used to replicate fire, explosions, smoke, water, fog and more. While the built-in effects that come with most 3D applications are great, particles allow you to fine-tune the effects and have complete control over your dynamic simulation. Particles can also be used to create things like grass and fur.
The particles are controlled by an emitter, which acts as the source of the particles; the emitter has many different attributes attached to it, like particle emission rate, velocity and many other settings that can be tweaked. Unlike the built-in effects, particles do not produce the desired look right at the start and will need to be adjusted to create the effect that you want.
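As a small, hedged example of how an emitter and particles are typically hooked up in Maya, the sketch below creates an omni emitter, a legacy particle object and a gravity field, then connects them with connectDynamic. The names, rates and magnitudes are arbitrary starting points meant to be tweaked, and newer scenes may use nParticles instead of the legacy particle object shown here.

```python
# Minimal particle sketch in Maya: emitter -> particles -> gravity field.
# Names and values are placeholders; legacy particles are used for simplicity.
import maya.cmds as cmds

# An omni emitter that sprays particles in all directions.
cmds.emitter(name='spray_emitter', type='omni', rate=200, speed=2)

# A particle object to receive what the emitter produces.
cmds.particle(name='spray_particles')
cmds.connectDynamic('spray_particles', emitters='spray_emitter')

# A gravity field so the particles fall instead of drifting forever.
cmds.gravity(name='spray_gravity', magnitude=9.8)
cmds.connectDynamic('spray_particles', fields='spray_gravity')

# Play the timeline to run the simulation, then refine emitter and particle attributes.
```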
What to Expect When Working With Dynamics
When working with dynamics there is inevitably going to be a significant amount of trial and error. The result you are looking for will not be achieved with the click of a button. Even with pre-made effects like the fire effect, there will most likely be editing to do in the effect's attributes in order to get exactly what you are looking for. When working with dynamics, the typical workflow is tweak and test, so don't get discouraged when you aren't getting the results you want right off the bat.
What you see in the viewport isn't always what you get. When played back, complex dynamics can give undesirable results or play back slowly, because the computer has to calculate everything on the fly. To get a better representation of the dynamic simulation, you can do a quick playblast or animation preview, depending on your software.
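In Maya this kind of preview is literally called a playblast, and the sketch below shows one hypothetical way to write one out for a frame range. The output path, resolution and movie format are placeholders and depend on the codecs available on your machine.

```python
# Minimal playblast sketch: write a quick hardware preview of the timeline in Maya.
# The output path, resolution and format are placeholders.
import maya.cmds as cmds

cmds.playblast(startTime=1, endTime=120,
               filename='movies/dynamics_preview',  # hypothetical output path
               format='qt',                         # movie format; depends on installed codecs
               forceOverwrite=True,
               percent=100, width=1280, height=720,
               viewer=True)                         # open the result when finished
```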

Dynamics are an amazing tool to have at your disposal and can make many tasks much easier for you. Whether you need to simulate an object ripping through geometry or a wall being demolished, dynamics can achieve it quickly and believably. To learn more about dynamics, check out these in-depth tutorials: Introduction to Dynamics in Maya, Introduction to MassFX in 3ds Max, and Beginner's Guide to ICE in Softimage. Find more training options with more 3D dynamics tutorials.

Understanding Three-Point Lighting

Three-point lighting is a technique widely used in traditional photography and cinematography. Its purpose is to properly illuminate a subject in an effective way that is not only pleasing to the eye but also relatively simple in its approach. By using three separate lights you have complete control over how the subject is illuminated. The world of 3D has embraced this lighting technique, and it can be seen in everything from product visualizations to character busts and more. It has quickly become the go-to lighting technique for many 3D scenes because great lighting results can be achieved relatively quickly. Three-point lighting is great for creating a studio-type lighting effect, whether you are creating a still image or you need to illuminate a single subject or product.

The most common way to achieve proper three-point lighting is with three different spot lights in the scene. Setting up each light the correct way allows the subject to be illuminated without deep shadows and to be seen properly in the camera view.
The first and most important light is the key light. As the name suggests, this light is vital in establishing the overall lighting for the scene. It should have the most intensity of the three lights and should highlight the form and dimension of the subject. The key light is typically set up to the right of the camera at a 45-degree angle.
Once the key light has been properly set up, the fill light should be created. The fill light's purpose is to fill in the deep shadows that are inevitably cast onto the subject by the key light. The fill light is usually set up opposite the key light.
The last spot light is the rim light (sometimes referred to as the back light). This has the least effect on the subject's illumination because it is typically placed directly behind the subject, facing the camera. The rim light's purpose is to add a very slight glow to the back of the character. If you were to hide the key and fill lights, you would see that the subject is darkened all around, except for a thin rim of light around the edges.

Key Light
When setting up the key light, the first step is getting it in the proper place. As mentioned above, it is typically placed to the right of the camera at a 45-degree angle. While this works most of the time, it really depends on your scene and what you need to illuminate. Play with different angles and positions until you are happy with the result. Remember, most of your light will be emitted from the key light, so a higher intensity may be needed.

Fill Light
The position of the fill light depends on where the shadows are being cast by the key light. It should be placed in a spot where it can illuminate those dark areas on the subject. It's important to remember that the fill light should not be as bright as the key light.

A common mistake is setting the intensity much too high, as in the example image above. This can cause the subject to get blown out.

Instead, you want it just bright enough to illuminate what the key light isn't reaching.

Rim Light
When positioning the rim light, make sure it doesn't provide any real frontal illumination to the subject; it should just create a very slight outline of light. While the rim light is typically placed directly behind the character, don't be afraid to adjust the angle in order to achieve the look that you want.

Depending on the subject you are illuminating, and whether there is a background, you may want to adjust your settings in your 3D application so the spot lights in your three-point lighting setup only illuminate the subject and not the background. The reason for this is that the spot lights can have unappealing results on the background, as shown in the image above.
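As a starting point, here is a minimal Maya Python sketch that creates the three spot lights described above and places them roughly as discussed: the key off to one side at about 45 degrees, the fill opposite and dimmer, and the rim behind the subject. The positions, rotations and intensities are rough placeholder values you would adjust to your own scene and camera.

```python
# Minimal three-point lighting sketch in Maya: key, fill and rim spot lights.
# Positions, rotations and intensities are rough placeholders to adjust per scene.
import maya.cmds as cmds

def make_spot(name, intensity, translation, rotation):
    """Create a spot light, rename its transform and place it."""
    shape = cmds.spotLight(intensity=intensity, coneAngle=60)
    # Handle the command returning either the shape or the transform.
    transform = (cmds.listRelatives(shape, parent=True) or [shape])[0]
    transform = cmds.rename(transform, name)
    cmds.xform(transform, translation=translation, rotation=rotation)
    return transform

# Key light: brightest, off to one side at roughly 45 degrees.
make_spot('key_light', 1.5, (6, 6, 6), (-35, 45, 0))

# Fill light: opposite the key and noticeably dimmer, to soften the shadows.
make_spot('fill_light', 0.5, (-6, 3, 6), (-15, -45, 0))

# Rim light: behind the subject, aimed back toward the camera for an edge glow.
make_spot('rim_light', 0.8, (0, 5, -8), (-20, 180, 0))
```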
Now that you are familiar with what goes into three-point lighting, a great next step is to jump in and try it out on your next project. Keep in mind that three-point lighting is not the be-all and end-all lighting setup for every project. However, it is a great place to start and build from when you're first learning to light your scenes.

5 Rendering Tips for Animators When Adding the Final Touches to Your Demo Reel

Your animations are all polished up and you're ready to put together your demo reel. To get that extra pop in your reel, adding animation previews isn't always the best way to go. Simple but well-executed renders can help add that final touch to your demo reel and push it over the top. Maybe you have an animation that calls for a particular lighting setup that will really help sell the mood of the shot. Whatever the case may be, there are some very important things to consider when rendering your animations for your demo reel. Let's go over five tips that, when applied, will ensure your shots get the polish they deserve.
One of the most important things to ask before you ever render your shot is: are you ready? Before starting on any sort of render, it never hurts to double-check that you are completely happy with your animation. Only when your animation is completely polished and truly final should you decide to render. You should never rely on the rendering phase to fix problems with your animation or hide things, because at the end of the day it's all about the quality of the animation, not whether it looks pretty. Rendering should just be a nice added touch for the shots you feel are at a quality worthy of your reel.

Use Motion Blur Sparingly

Motion blur is a great feature to add to your renders, especially if your shot is heavy on body mechanics. It creates a very nice blurring effect during movements to help smooth things out, so everything isn't perfectly clear and crisp, similar to what the human eye would see. While this is a nice feature, it can be very easy to add too much motion blur, which can decrease the quality of your shot. Use motion blur for your shots, but use it sparingly. Think of it like the animation principle of squash and stretch: it should be felt rather than seen.
Every animator has done this before: you add motion blur to your render and suddenly that knee pop isn't nearly as noticeable, or that spacing issue magically got corrected by the motion blur. While this may seem good at first, you should never use motion blur to hide hitches in your animation. The first reason is that it's better to fix it rather than hide it, and the second reason is that a recruiter will notice!
Before you use motion blur you also need to make sure that your animation curves are cleaned up and there are no problem areas, like gimbal lock. Messy curves can cause problems in the motion blur calculations, which will result in glitchy motion blur when rendered.

Choose Lighting That Fits With the Animation

When you're ready to add lights to your scene, ask yourself what type of lighting would best suit your animation. Don't add lights just for the sake of illuminating your shot; think about what type of lighting would enhance your animation. For instance, maybe the character in your scene is threatening someone. You could use a darker lighting setup to help push the tone and mood of the shot. Play around with different lighting setups to see which one best fits your animation.

Use Image Based Lighting

If you have a body mechanics shot that doesn't require background elements or any set pieces, like a walk cycle, you can use a simple image-based lighting setup to achieve a good-looking render in a very short amount of time. While it may not be a photorealistic result, it can still be a lot better than a simple playblast or animation preview.
To get a good result, try using a simple ramp as your background image and choose two colors that complement each other. This will provide a nice, even light distribution for your shot. Once you are happy with that, you can add final gather to increase the render quality. Doing this produces a nice render with a soft background color, and each frame can be rendered in just a few seconds rather than the few minutes a more complex light setup might take.
As mentioned above, this image-based lighting technique is great for body mechanics shots because it won't cast any deep shadows and will provide an even light source.
Don't Complicate Your Render
Try to steer clear of flashy effects or lighting that distracts from the animation. Rendering is great for the shots you want to include on your demo reel, but you don't want to spend hours trying to set up your render, and you also don't want to spend hours or even days getting the actual animation rendered.
It can be very easy to turn on settings like final gather and global illumination to up the quality of your final render, but these settings greatly increase render time. Try to strike a balance between a good-looking render and a relatively low render time. Having your animations rendered is a nice bit of polish for your demo reel, but the focus still needs to be on the animation, not the render.
Set Up Rendering Passes
Another way to speed up the rendering process is to set up simple render passes so you can change colors and make other adjustments in a software package like After Effects. It's a whole lot easier to make minor color corrections and brightness adjustments in a compositing or video editing program than it is to re-render your entire animation sequence just because you need to raise the brightness slightly.
Render to an Image Sequence
This tip is probably well known to artists who render often, but for an animator it can be easily overlooked. Whenever you render your animation, render to an image sequence and not straight to a video format. Why? Because if there is a crash, a power outage or any unexpected hiccup during the render, the entire thing will be lost and you will have to re-render the whole animation.

This can be frustrating, especially if your animation was just a few frames from being completed, or if your animation takes several hours to finish, which it most often does.
Instead, render to an image sequence. This can be JPG, PNG or whichever format you prefer. If your computer crashes, all the previously rendered frames will be saved, and you can pick up from where you left off. Just about any compositing package can import image sequences, so all you have to do is bring them in, make sure they are playing back at the frame rate you animated at (most likely 24 frames per second) and render to a video format from there.
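If you render through Maya's render globals, for example, the settings described above map to attributes on the defaultRenderGlobals node. The sketch below is a hedged example that switches the output to a padded PNG image sequence over a frame range; the image-format code, frame range and padding are assumptions to adjust for your own renderer and project.

```python
# Hedged sketch: configure Maya's render globals for a padded PNG image sequence.
# Attribute values (image format code, frame range, padding) are assumptions to adjust.
import maya.cmds as cmds

cmds.setAttr('defaultRenderGlobals.imageFormat', 32)       # 32 = PNG in Maya's format codes
cmds.setAttr('defaultRenderGlobals.animation', 1)          # render a sequence, not one frame
cmds.setAttr('defaultRenderGlobals.startFrame', 1)
cmds.setAttr('defaultRenderGlobals.endFrame', 120)
cmds.setAttr('defaultRenderGlobals.putFrameBeforeExt', 1)  # name.0001.png style numbering
cmds.setAttr('defaultRenderGlobals.extensionPadding', 4)   # four-digit frame numbers
```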
If you're ready to render your animations for your demo reel, make sure to incorporate some of these tips into your workflow to speed up the process and produce results that will enhance your animations. Remember that your render shouldn't hide or distract from the animation but should complement it.

Mastering VFX terminology: Bringing images together with compositing

Knowing where you want to fit into the pipeline of the movie-making process can be tough when entering the visual effects (VFX) industry. Have no fear: this article is part of a series that covers terminology and workflows that are essential to every VFX artist.
For this article, we'll be looking at VFX from a compositing standpoint.
The film has been shot, characters have been animated and rendered, and now it's up to you to bring it all together in a blaze of glory! So with that in mind, let's get some basic vocabulary out of the way.
In the beginning, there was a union of two images
To composite something means to combine two or more images to make a single picture. These pictures are made up of a collection of simple images called channels. A basic image will have a channel to store the red information, one for the blue, and one for the green.
A screen displays only these channels, and this is how we get a color image. If you want to bring two images together, you remove the background of one image and place it onto a new background. The variations on this idea, and the technical challenges behind it, are the vast and magical land of compositing.
Let's take a look at four of the key ways this can happen. When one image is going to be placed on another, we take for granted that the picture on top needs some way for its background to be removed. This information is called an alpha channel.
It is the information the computer uses to remove the area of the top image that would cover the region where you want the new background to show through.
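To make the idea concrete outside of any particular compositing package, here is a small Python sketch of the classic "over" operation, using NumPy arrays as stand-ins for image channels. NumPy and the tiny example values are assumptions for illustration only; real packages apply the same arithmetic per pixel.

```python
# Minimal 'over' composite: place a foreground on a background using its alpha channel.
# Uses NumPy arrays as stand-ins for image channels; values are normalized to 0.0-1.0.
import numpy as np

def composite_over(fg_rgb, fg_alpha, bg_rgb):
    """Return fg over bg. fg_rgb/bg_rgb are HxWx3 arrays, fg_alpha is HxW in 0..1."""
    alpha = fg_alpha[..., np.newaxis]          # broadcast alpha across the RGB channels
    return fg_rgb * alpha + bg_rgb * (1.0 - alpha)

# Tiny 2x2 example: a solid red foreground with a soft alpha over a blue background.
fg = np.zeros((2, 2, 3)); fg[..., 0] = 1.0     # red foreground
bg = np.zeros((2, 2, 3)); bg[..., 2] = 1.0     # blue background
alpha = np.array([[1.0, 0.5], [0.25, 0.0]])    # opaque, half, quarter, fully transparent

print(composite_over(fg, alpha, bg))
```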
Alpha channels and chroma keys
Often, the alpha channel of a CG image is stored in the image's data. When this happens, it's easy to take it for granted, because the image simply looks like it doesn't have a background. However, you may be working with an alpha channel that isn't stored with the RGB channel data, and instead you have a simple black and white image.
It's good to know you can use this image in many ways to cut away information in an image. However, sometimes when you're not working with a CG image, for example with green screen footage, you have to create that alpha information by pulling a chroma key, also called an alpha matte.
When you pull a chroma key, you're telling the computer to look at an isolated color, usually blue or green, and remove the areas of the footage that have that color. Doing this allows you to place your subject on any background you choose. This green screen or blue screen technique is used everywhere from big blockbuster movies to your local news station.

Blending modes
Another way you can composite images together is through blending modes. There are a lot of different blending modes to choose from, but boiled down to their core functionality they are simply mathematical operations that combine the color information of two images. One of the most common blending modes is the Multiply blending mode.
Before explaining the usefulness of this blending mode, you first need to understand the color scale within the RGB channels: values run on a scale from 0 to 255, with 0 being black and 255 being white.
So let's say I have a beautiful nature scene and an image of a red leaf. I want to place the leaf on top of the nature scene and use the Multiply blending mode to unite them. You change the blending mode of the leaf to Multiply, and now all the color values in the nature scene below the leaf are multiplied by the color values of the leaf, and the result is then divided by 255.
Following the above process, you'll essentially create an effect where the whitest pixels in the top image become transparent. This creates a darkened effect on the leaf so that the pixels underneath show through. There are a lot of other blending modes with similar operations that can be chosen depending on the look you want. In fact, every union of images has a blending mode; it's just that if you don't set it, it defaults to the blending mode of Normal.
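Following the multiply math described above (top times bottom, divided by 255), here is a small NumPy sketch of that blending mode. NumPy and the sample pixel values are assumptions for illustration; compositing packages apply the same arithmetic per channel.

```python
# Multiply blending mode: result = (top * bottom) / 255, computed per channel.
# White (255) in the top layer leaves the bottom untouched; darker values darken it.
import numpy as np

def multiply_blend(top, bottom):
    """Blend two 8-bit images (HxWx3 uint8 arrays) with the Multiply mode."""
    result = top.astype(np.float32) * bottom.astype(np.float32) / 255.0
    return result.astype(np.uint8)

# Tiny example: a mid-grey 'leaf' pixel darkens the background,
# while a pure white pixel leaves the background value unchanged.
background = np.full((1, 2, 3), 200, dtype=np.uint8)
leaf = np.array([[[128, 128, 128], [255, 255, 255]]], dtype=np.uint8)
print(multiply_blend(leaf, background))   # -> [[[100 100 100] [200 200 200]]]
```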
Node vs. Layer-based compositing
Up until this point, we've been looking at the compositing process from a software-agnostic standpoint, which is best if you still haven't chosen which software to learn. When you do choose, however, you'll have two main categories to pick from: node-based software and layer-based software.
Layer-based software relies on information being placed in a stack. If you want an image to go on top of another image, you place it on top of the stack.
With node-based software, you rely on input pipes to choose which image you want on top. If you have an A and a B pipe, you hook the B pipe up for the background and the A pipe up for the foreground.

Mastering VFX terminology: Creating mattes in footage with rotoscoping

For the second volume of our mastering VFX terminology series, we will be discussing the ins and outs of rotoscoping. Rotoscoping is one of the essential tools in a VFX compositor's toolbox.
It's grunt work, but everyone has to do it when they're first starting out. There's also a right way and a very wrong way to do it. Learn some of the best time-saving practices and prevent a lot of headaches and rework.
What is rotoscoping?
First of all, let's define what a rotoscope is. A rotoscope is a manually created matte for an object in live action footage. A matte is an image that controls the transparency of another image.
When you create a rotoscope, you are essentially drawing the outline of an object in the footage to cut away the background. Rotoscoping is the actual process of drawing that outline.
Is there a right way to draw a rotoscope?
When you first start drawing rotoscopes, you'll often outline the entire shape with one big rotoscope. This is a common rookie mistake.
The reason it's wrong is that later, when it comes time to animate that rotoscope, it will be much harder to create keyframes that accurately move a shape encompassing the entire object.
Instead, consider breaking the rotoscope into several smaller overlapping parts. This may seem like more work at first, but the drawing process is much less time consuming than the animation process.
The animation process can be sped up considerably when the joints of moving parts are taken into consideration. Consider a person waving their hand: instead of creating one big mask that encompasses the whole body, create a mask for the fingers, the palm, the forearm, the upper arm, the shoulder and all the body parts that are defined by natural joints.
This might take more time to draw initially, but once things get moving it's much faster, because you can focus on moving each whole rotoscope rather than the individual points within it.

What's the best way to make a rotoscope move with an image?

You might be tempted to start moving the rotoscope, or rotoscopes, around frame by frame. This will take a very long time and has more of a chance of being inaccurate because of the tremendous number of keyframes this technique generates.
Instead, watch the footage through several times and establish the most prominent points of change in the motion of the object. Then set keyframes for your rotoscope at those frames.
Now begin scrubbing between those keys and finding the areas where the rotoscope is not matching the outline of the object. If you find yourself having to do a lot of point-level animation on the rotoscope, you probably did not break the roto down into small enough parts.
You should be able to reposition and rotate the rotoscope with minimal changes to the shape of its outline. Setting those intermediate keyframes is the longest part of the rotoscoping process.
Do your best to set as few keyframes as possible so that you save yourself the most time and have the least chance of a jittery-looking matte.
Using a track with a rotoscope
If you have a lot of complex movement to match with your rotoscope keyframing, consider tracking some of the features and using the tracking data to animate your rotoscope. Different software packages have different ways of executing this, but the concept is the same across the board.
Track the position and rotation of an object and then apply the tracking data to the position and rotation of the rotoscope. Some software won't allow you to tie tracking information directly to the rotoscope, so you may have to create a rotoscope that isn't applied directly to your footage but instead to a solid that will act as a matte. These solids can receive tracking data and can also be reused later if output as a sequence.
You can then use the rotoscope as an alpha, or, if you drew your rotoscope on solids to begin with, you can simply make sure you are using pure black and white colors in the final output so it works as a luma matte. Rotoscoping can be relaxing for some people and mind-numbing drudgery for others, but doing your rotoscope the right way the first time is a good way to ensure that you'll eventually move up in the VFX pipeline.
Are you one of the people out there who likes rotoscoping, or do you hate it with a passion? Tell us about it in the comments below!

The Evolution of VFX in Movies: The 60s Till Now

The movie industry has consistently relied on some type of visual effects, even in the early years of filmmaking, whether it was the fake blood in 1965's Battle of the Bulge or the continually impressive Kong in King Kong (1933). As time has progressed, it's really no surprise that the amount of special effects used in films has constantly increased. Special effects help to create the fantastic, the things that simply do not exist in our world, or to create a completely unique visual experience like in Sin City or 300. The early years of filmmaking relied on practical effects; now the majority of effects are created with a computer. Let's take a look at the evolution of VFX in movies, looking back on the great moments of effects work that helped push the art form into what it is today.
60s
The 60s were a decade of some truly impressive practical effects that had moviegoers in complete awe at what was transpiring on the screen. One of those ground-breaking moments was the famous skeleton battle scene in Jason and the Argonauts (1963). Created by Ray Harryhausen entirely in stop-motion animation, it brought the skeletons to life and integrated them with the real actors. This is a very famous sequence in the effects industry that many artists still look back on today. While it certainly doesn't stand up to today's visual effects, it was ground-breaking for its time and helped pave the way for what was to come. There's a reason Pixar paid homage to Ray Harryhausen in Monsters, Inc. (the prestigious restaurant is named Harryhausen's).
2001: A Space Odyssey (1968) was another film that pushed special effects in movies to a whole new level, using techniques like miniatures for many set pieces and hand-drawn rotoscopes to combine everything for the final shot. The director of 2001: A Space Odyssey assembled his own effects team to create the film and bring his vision to life.
70s
The 70s were another decade of advancements in effects. However, it was also a time when effects houses took a hit from the industry's recession in the early 1970s, with many closing shop. It wasn't until 1977, when the first Star Wars was released, that things finally took a turn for the better. Star Wars introduced advancements in special effects technology, and the sheer amount of effects in the film was staggering, from aliens to spaceships and planets. The film also spawned a new special effects house, Industrial Light and Magic, which is one of the most prominent visual effects studios today.

Not only is Star Wars a film that still impresses the VFX industry today, but 70s films like The Hindenburg, The Poseidon Adventure and the horror classic The Exorcist brought together many different techniques, like matte painting, which is still heavily used in the VFX industry. The horrifying effects in The Exorcist, like the 360-degree head rotation and many other grotesque moments, made the 70s a very impressive decade in terms of visual effects.
80s
The 80s saw a massive leap forward in visual effects with movies like Blade Runner, Raiders of the Lost Ark and E.T. the Extra-Terrestrial. Blade Runner featured a beautiful futuristic city with flying cars, floating advertisements and more. Everything you'd expect to see in a futuristic city, right? The movie took place in 2019, so we're about four years away; hopefully we'll get our flying cars soon!
Ray Harryhausen also showed off more of his considerable skill with Clash of the Titans, which features some amazing stop-motion work.
Beyond these great movies of the 80s, there were even larger advancements that led to what visual effects are today. The 80s introduced the first computer-generated images in a movie. Star Trek 2: The Wrath of Khan was the first film to feature a completely computer-generated scene. Right after Star Trek 2: The Wrath of Khan and the first CGI elements in a movie, Tron took this a step further and featured extensive sequences created entirely on the computer.
After that, more and more movies in the 80s featured various CGI elements, like The Last Starfighter with its detailed 3D models, whereas before this type of spaceship was created with miniature models, as in Star Wars. The first ever 3D animated short film was released in 1984 with the title The Adventures of André and Wally B. If you're familiar with the history of Pixar, you may recognize this as one of the original pieces created by John Lasseter and his team.
90s
After the introduction of CG in the 80s, more and more films began utilizing the technique. If the 80s were the spawn of CG in movies, the 90s were the explosion. You can probably think of a few game-changing feature films that many VFX artists refer to as the reason they got into the industry. Jurassic Park is one of them. Spielberg had a team of experts combine CG with animatronics to create several breathtaking sequences that gave a new look into what is possible with CG.
There were also many other advancements with CG in the 90s, including the first use of motion capture technology in a film, for a very short x-ray sequence in Total Recall. Terminator 2: Judgment Day featured many distinctive visual effects shots, as the liquid metal terminator could morph into any character. Shots like the one where the terminator is shattered into many pieces that then reassemble were just a few of the amazing VFX sequences in the film.
Of course, likely the biggest advancement in terms of CG was the first feature film created entirely in CG: Toy Story. This led to the success of Pixar and spawned the popularity of many fully 3D animated films. Not only that, but the technology used to create these films also helped push the quality of the CG elements integrated into live-action feature films.
The Matrix achieved numerous innovative visual effects, with these elements making up a large portion of the film. Of course, the bullet-dodging scenes are very iconic, utilizing various techniques to achieve the effect.
2000s
As we're inching closer to where visual effects are today, the quality is constantly increasing. Films like The Lord of the Rings took motion capture technology to a totally new level with the creature Gollum. As one of the first films to heavily utilize motion capture, it was able to infuse an actor's performance onto an entirely CG creature. This of course led to many other movies perfecting the technique even further, like The Polar Express.
Pirates of the Caribbean: Dead Man's Chest also pushed motion capture with the award-winning visual effects on Davy Jones, using facial motion capture technology to push the actor's performance and capture realistic movements. This technology was pushed yet again in James Cameron's Avatar, with advancements in facial and body motion capture.
2010-Present
The world of effects in films has definitely come a long way, from practical special effects to the now-dominant realm of visual effects. In the past few years, we've seen movies constantly trying to push the boundaries, aiming for visual effects so realistic and believable that they can hold up next to the real actors without the audience noticing the difference.
To get a great glimpse into how far we've come in just the past decade, take a look at Gollum in The Lord of the Rings: The Two Towers and compare him to Gollum in The Hobbit: An Unexpected Journey. As technology and the tools used to create these out-of-this-world characters advance, so does the quality of what is on screen.
The recent release of Dawn of the Planet of the Apes features extremely realistic apes and many advancements in terms of motion capture and the visual aesthetics of the apes, like the rendering of the fur. For example, Rise of the Planet of the Apes was one of the first films to use motion capture on location, and not in a specifically designed motion capture studio.

More films are being shot largely on green screen stages, leaving the rest of the film up to the VFX artists. VFX is as much a part of blockbusters like The Avengers or Pacific Rim as the actors themselves. While VFX is often seen as the icing on the cake of a film, it's becoming more of a centerpiece. If you want to share some of the films that inspire you as an artist, whether it's with practical effects or visual effects, post them in the comments below!

DEMO REELS

What It Takes to Get Noticed

14 TIPS to Help You Stand Out

Start and end with your strongest pieces


There's no guarantee that your demo reel will be watched for more than a few seconds, so be sure that the first piece is your strongest. Likewise, at the end you should leave them wanting to see more and with something to remember you by. See the ways to tailor your reel to a specific job below.

Include only your best work


While this might sound like common sense, its easy
to overlook. A demo reel is only as strong as your
weakest piece. As you create new work, you need
to constantly re-evaluate what is in your demo reel
and replace weaker pieces with stronger pieces.

Tailor Your Reel to Your Dream Job


If you want to be a character animator but your demo
reel is full of character models, then your potential
employer is not likely to hire you as an animator no matter how great those models might be. If you
dont have enough content to put on your reel for
the position you want, take the time to create some.

Know your strengths and focus on them


This works closely with tailoring your reel to the
job you want, but its important enough to point
out that you dont need to show your potential
employer that you can do everything under the
sun. Your demo reel should reflect your strengths.

Keep it short and simple


A typical demo reel should be between one and
two minutes. If you have more content than that,
start eliminating some of the weaker pieces to make
room for your absolute best. Do more with less.

Label your reel inside and out


When creating your reel, make sure to include your name
and contact information at the beginning and end of the reel
itself. If you are submitting a physical disc (such as a DVD), be
sure to label this with your name and contact information as
well. If a potential employer wants to contact you, they should
know how to do so from each item that you submit.


Make your demo reel easy to play


Your potential employer shouldnt need any special
codecs or special software installed just to play
your reel. Remember that in many studios the
person who is making the first round of reviews
for demo reels may not be a CG artist with a
powerful workstation. Your reel needs to be able
to play on any operating system. Test it on as
many computers as you can to make sure it works
without any errors.

Check the companys guidelines


Most studios are accustomed to receiving demo
reels and post exactly what format they need
them to be submitted in on their site. Take the
time to make sure that your submission falls within
their guidelines. If you cannot follow along with
directions for submitting a demo reel, should a
potential employer trust you with following the
directions for a complex work of art?

Only include work you have approval to use


If youve done work that is under an NDA or some
other not-for-public agreement then the rule is
simple: dont include it on your reel. If you have a
project that you cannot show on your reel but you
want to show what you are capable of doing, take the
time to recreate the effects in a personal project that
you can show publicly.



Bring a copy of your demo reel, shot breakdown & printed resume or necessary documents

While its likely that the people interviewing you have already
seen your demo reel, its also possible that one or more of
the people involved in the interview have not seen it. Bring an
easily-playable copy of your reel to any interview just in case.
Cover letters are very important, even though they are
often overlooked, and can be the deciding factor that
determines if your reel is even watched. The cover letter
should be specific to the position that youre applying to and
should include a quick snapshot of your skills, experiences
and which software and technical skills are your strengths.

Only include work you created, group work is OK


You certainly dont want to include work that you didnt
do in your reel, but that doesnt mean that you should
only include work that you worked on by yourself. In most
studios, youll be working as part of a team, so if youve
worked on team projects those can show your potential
employer that you can successfully integrate into a team.
In cases like this, youll need to make sure to let them know
what parts of the project you worked on.


Don't worry about the music if it doesn't add to your reel

Adding your favorite track to spice up your demo
reel might seem like a great idea, but you run
the risk of the viewers not liking it. Also note that
some viewers wont have the audio on. The only
universally acceptable audio for a CG demo reel
is audio that is directly relevant to your work. For
example, lip syncing.



Dont call/email a studio... constantly


The larger a studio is, the more demo reels they
receive and review. Be respectful of the time of those
who are looking at your work by refraining from
constant calls and/or emails. Until they reach out to
you, keep learning and working to make your reel
even better. Let your good work get you noticed, not
an annoying abundance of calls and emails.

Dont worry about fancy packaging


You arent being hired for doing the fancy packaging,
so rather than focusing on that, focus on making the
content of the demo reel stand apart from the rest.
A demo reel is judged not by its outward appearance
but by the greatness that it contains.


Learn more about making your reel job-specific


Tailoring Your Reel For A Specific Job

Aspiring Concept Artists


Pay close attention to the composition, timing and
cuts of your shots. Study cinematography as that
will help you compose your shots.
The goal of a concept is to have it modeled in 3D,
so its a good idea to go the extra mile and include
orthographic views for your character concepts.

Aspiring Modelers
Creating your own concepts is great, but your focus is
building those concepts in 3D not creating them. So feel free
to use designs that have been done by professional artists. If
the design is abstract, be sure to include the concept art.
Make sure to include wireframes to show your models
topology.
Even though you wont be animating your characters, take
the time to learn how your character would move and
behave. This will help you as you build your character.

Aspiring Technical Directors


Dont try to rig your character to do everything. Stick
to building a solid rig that is effective for how your
character needs to move.
Get feedback from an animator on how your
character should move and build the rig accordingly.


Aspiring Shading Artists


Nothing in the real world is absolutely perfect. Take the time to
add rust, dirt and scratches into your scenes.
Photographs can make great textures, but just about anyone
can apply a photograph to a model. Instead, show that you can
paint your own textures.

Aspiring Lighting Artists


Dont use Photoshop to cover up bad lighting. Lighting is best
shown by moving your camera around the scene so its not a
still image.
Your lighting shouldnt just let us see whats in the scene, it
should set the mood of a scene. Try creating different moods
of the same scene to show off your ability to change the
mood through lighting.

Aspiring Animators
Rather than quick shots of random animations, try creating
a couple vignettes to tell a story.
Take the time to get to know your character well. Learn
about your characters likes, dislikes and how your character
would react to the situation. Then animate accordingly.


Aspiring Compositors
Not all compositing has to be visible right away. In
fact, the best composites are the ones that dont
stand out.
Include before and after shots so its easy to tell
what youve changed.

Studios and companies want to help you improve your demo reel.

Remember that its always a good idea to check with the
company that you are submitting your demo reel to before
working on it, so you can tailor your reel to their specific
requirements. Studios and companies will also often have
resources for creating your demo reel. Here are some great
resources on building assets and creating your demo reel:
PIXAR
Digital-Tutors tutorial and training page
Get inspired and connect with the Digital-Tutors community


Choose the right 3D software


If youre starting out in 3D there quickly comes the moment when youre going
to ask: should I learn Maya or 3ds Max (or quite truthfully, insert any
other complete 3D animation application: Softimage, CINEMA 4D, MODO,
Houdini, etc.)? For some it may be decided by your school or what resources
you have to acquire the software.
If its up to you to decide, there are some key things you should think about
when deciding, and some reasons why its about more than just learning one
application. Youll learn about each applications pricing point, as well as what
most studios are using each application for, and why.
Picking Your First 3D Application
Step 1: Try (for Free) Before You Commit
When all is said and done, and we hope this isnt a spoiler alert or a let down,
when it comes to picking your software of choice, its as simple as preference.
If youre not limited to a software youre already being handed, theres no
reason not to set out and explore which application you prefer.
Most software companies offer free trial or learning versions to get you started
and help see if its a right fit. Its also important to know the price points for
each application, because eventually you will need to purchase one. Some
options to explore include
(Please note restrictions on each linked page about importing and exporting
files per the software provider)
Maya
Maya is arguably one of the most popular 3D applications out there today. With two different versions, Maya and Maya LT, the prices vary. Maya costs $3675, or $185 monthly, $460 quarterly and $1470 annually. If you're a student, however, you can get the full software at no cost for educational purposes for 3 years. The stripped-down version, Maya LT, costs just $30 monthly or $795 if you buy the program outright. It's also important to note that, in a recent interview, Autodesk announced it will eventually move to a subscription-only pricing model, similar to the Adobe products. A specific date for that has not been specified, however.
3ds Max
3ds Max is another extremely popular 3D application in Autodesks arsenal.
The pricing for 3ds Max is the same as Maya, you can buy the program
outright for $3675, or get a free 3-year license if youre a student. Since 3ds
Max is also an Autodesk product it will eventually move to a subscription only
pricing model.

Softimage
Softimage is another of Autodesk's 3D applications; sadly, however, 2015 will be the last release of the program. After that there will be no updates to the software as Autodesk phases the program out. It's a sad time in the 3D community seeing the beloved program go the way of the wind, but there are still plenty of 3D applications to choose from. If you still desire, you can download the 30-day trial or the three-year free student license.
Softimage (30 day) NOTE: Softimage 2015 will be the last release per an Autodesk announcement.
CINEMA 4D
There are several different versions of CINEMA 4D, each focusing on a specific area of a production pipeline. CINEMA 4D Prime costs just $995 and is aimed at graphic designers. CINEMA 4D Broadcast costs $1,695 and focuses on the motion graphics side of things. CINEMA 4D Studio combines every package into one, costing $3,695.
MODO
MODO is one of the more inexpensive 3D applications out there, costing
$1495. Even though it costs less its still a program that packs a punch. You
can also download a free 30 day trial.
Houdini
Houdini offers a few different versions of the program, each focusing on specific areas of a pipeline. Houdini costs $1,995 and focuses on animation, modeling and lighting. Houdini FX focuses specifically on the visual effects side of things, like particle simulations, fluids, dynamics, etc., and costs $4,495. You can also get Houdini Indie, which only costs $199 and focuses on animators and game creators. The program is tied to a limited commercial license, however.
Blender
When it comes to pricing, Blender takes the cake. Blender is an open-source 3D application, meaning it doesn't cost anything. While it's free, that doesn't mean it's not powerful. It's very popular among hobbyists and indie developers, and it has also been growing in popularity in recent years as updates and improvements are constantly being made to the program.
Blender (Download Page)
In terms of pricing, all the programs are fairly similar, with the exception of Blender, of course. Maya and 3ds Max offer the most appealing pricing packages with the 3-year student version. This gives you access to both complete versions of the applications at no cost to you.
In essence, all of the tools are there in each application. Some may have more powerful features in certain areas, but usually that is something you can learn in any software down the line. We like the analogy that all the tools are in the tool belt; each application just organizes them differently. Some of those advanced features, like MoGraph in CINEMA 4D or creating quick dynamic simulations in Houdini, may be software-specific, but learning the main skills and tools will help you take those on when you're ready.
Spend a week in each software youre contemplating learning first and find
which one you are most comfortable working with. You can even start
watching free beginner training with a Digital-Tutors demo account, or jump
right in with a full membership and watch entire introductory courses.

Automotive Modeling in Maya

Step 2: Understand Its All About The Techniques and Skills


If you take the approach that its more than just the software youre learning
and stay focused on the techniques and skills, your results will go much
further.
We cant tell you how many times weve heard:
a) A professional saying they have to or even get to switch between
applications
and
b) A student or developing artist refusing to watch a course because it's not in their application of choice. This not only limits the artist to staying inside that software shell, but also limits their marketability when trying to apply for jobs or move between studios.
If you can learn techniques and skills from any course no matter the main
software, it will be amazing what you can learn and do in any application.
Initially it will, of course, make sense to stay in your single application to get up
and running, but dont make it a habit. And know that even the pros take time

to get re-familiarized when switching software; it just takes time to get used to the new setup.
When learning a software youll essentially be learning new skills too. These
skills transfer from application to application, so make sure that overall picture
is the focus. Low-poly modeling for games is low-poly modeling no matter
which application you are using to get there. Learning the essential skills, and
advanced skills along the way, will help you conquer your first application and
then any other that may stand in your way.

Transforming Robot Production Pipeline (Uses a Variety of Applications)

Step 3: Investigate and Think About What You Want to Do


If youre getting into 3D you likely know why youre getting into 3D: a career or
for your own creative pursuits. If its for your own artistic journey, the options
are endless and all yours. If youre hoping to start your career, you probably
know which area of 3D you want to get into and now you can either see if
there are clearer options or go deeper to find more. You can learn more about
what each job does within the 3D pipeline by reading the Where Should I Start
with 3D? article. Maybe youre not even sure yet what exactly you want to do,
but all you know is that you want to work in the 3D industry.
There are endless possibilities of what you might be doing at a studio. For
example, you could be a texture artist, specifically working on creating the
textures for an asset, or maybe youll be a modeler, building the assets.
Depending on the studio, you may be required to do only one of these things,
and at another studio your job may include both modeling and texturing of
assets. Youll need to be flexible and this can mean knowing several different
software packages.
Which software are they using to do (insert job you want)?

Game Art
From your research you may hear this area is dominated by 3ds Max (and previously Softimage), but Maya is right up there with it. With the recent release of Maya LT, which focuses solely on game development, Autodesk is slowly trying to bridge the gap between 3ds Max and Maya as game art applications. That's why it's so important to focus on skills you'll be able to transfer to any application.
We also love working with Softimage, but its popularity hasn't taken off quite as much, even though it does have some unique, powerful tools and a user-friendly UI. While 3ds Max and Maya are known as great game art applications, MODO, CINEMA 4D and Blender are also popular for game asset creation, Blender especially among indie developers since it's a free 3D application. If you want to learn more about the pros and cons of each application in terms of game art, read the 3ds Max, Maya LT or Blender Which 3D Software Should I Choose for Asset Creation? article.
3D Modeling

If you want to get into game art you'll likely be modeling, creating assets, characters, etc. However, 3D modeling goes much deeper than that, and modeling for movies can be different than modeling for games. While the techniques are the same, the guidelines can be different. Some of the most popular applications for 3D modeling are 3ds Max and Maya. You can probably start to see the trend here: 3ds Max and Maya are really the two industry-standard applications. Both feature some really great 3D modeling tools that will help you get the job done.
MODO, CINEMA 4D and Blender have also seen a rise in popularity in terms of 3D modeling. CINEMA 4D boasts a fairly easy-to-grasp workflow that many new artists enjoy for its ease of use. MODO also has powerful 3D modeling tools. An important thing to keep in mind is that while each 3D application may have slightly different tools, for the most part they are all similar. You'll be able to create the exact same 3D model in application A as you could in application B; only the techniques and workflows you use to get there will differ depending on the application. That's why it's vital that you test the waters first and find out which 3D program's workflow you find most comfortable.
3D Animation
Its hard to put any limitations on software for this one. If you look to studios
and schools for what theyre using, and our most viewed animation training, it
goes Maya, 3ds Max, CINEMA 4D, Houdini, MODO, and Softimage. Even
prompting our animation tutors and modeling tutors with the question, they too
think it all comes down to your skills and applying them to whatever software
you need to use to make your scenes, characters and animations happen.
When it comes to animation, the 3D application actually plays a very small
part. Since creating animation mainly consists of creating keyframes, as long
as the application can do that, thats all that really matters. If you can animate
in program A you can very easily animate in program B without any real
learning curve. If youre wanting to get into animation its more important that
you focus on the principles of great animation, and not the application youre
going to use to create it.
Rigging
Maya is really known as the go-to application for rigging because of its powerful scripting language, MEL, which allows artists to create complex rigs and build custom tools (the same commands are also exposed through Python). 3ds Max also has some great rigging tools, like CAT, which can greatly speed up the process of rigging and is great for game projects. However, all the 3D applications listed here have the ability to create complex rigs. Again, it really comes down to personal preference and which program you find more comfortable to work with.
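To give a sense of what a small custom rigging tool looks like, here is a minimal sketch using Maya's Python commands (maya.cmds). It builds a NURBS circle control for every selected joint; the naming convention and radius are made-up examples, not a studio standard.

```python
# A minimal sketch of a custom rigging helper in Maya's Python API (maya.cmds).
# Control names ('<joint>_ctrl') and the radius are hypothetical examples.
import maya.cmds as cmds

def add_controls_to_selected_joints(radius=2.0):
    """Create a NURBS circle control at every selected joint and drive the joint with it."""
    controls = []
    for joint in cmds.ls(selection=True, type='joint'):
        ctrl, _ = cmds.circle(name=joint + '_ctrl', normal=(0, 1, 0), radius=radius)
        # Snap the control to the joint's world-space position.
        pos = cmds.xform(joint, query=True, worldSpace=True, translation=True)
        cmds.xform(ctrl, worldSpace=True, translation=pos)
        # Drive the joint's rotation with the control.
        cmds.orientConstraint(ctrl, joint, maintainOffset=True)
        controls.append(ctrl)
    return controls

add_controls_to_selected_joints()
```

Even a tiny helper like this saves time on every rig, which is exactly why scripting is valued in rigging roles.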
Broadcast
CINEMA 4D is a growing name in this area and has seen an even bigger
spike with its new integration with After Effects using CINEWARE. Maya and
3ds Max still also play nicely with After Effects, including state sets in 3ds
Max. CINEMA 4Ds affordability and ease of use on a Mac have also helped it
grow in popularity. Depending on the intricacy of the project, though, even something like Houdini could be used for its VFX tools. Check out this in-depth post to break down some of the differences between 3ds Max and CINEMA 4D for motion graphics. CINEMA 4D Broadcast also gives you the option of a cheaper version of the program if the only thing you want to focus on is motion graphics and design.
Visual Effects
Most applications we've been discussing, possibly with a mixture of other secondary applications, are going to be able to create stunning visual effects. You'll hear about some powerful toolsets and dynamics systems, such as Softimage's ICE, or Houdini as a whole. With The Foundry's purchase of Luxology, visual effects workflows between NUKE and MODO should continue to evolve too, though the workflow between NUKE and other applications is also still very popular. When it comes to visual effects it usually comes down to a mixture of different applications to create the finished work. For instance, Houdini is known as the go-to app for simulations, NUKE for compositing, and Maya, CINEMA 4D, MODO, etc. for rendering.
Product Visualization or Design
Most applications discussed will help you create product and architectural visualizations, anything from tiny handheld devices, to vehicles, to large building developments. For visualizations, learning rendering applications like V-Ray, Arnold Renderer and Maxwell Render will also be helpful. When it's time to create specific product designs, though, CAD software like Inventor, Rhino or SolidWorks might be more fitting. You'll also want to spend time focusing on items outside the software, like the usability of the product and the overall user experience.
All 3D applications mentioned have the tools you'll need to create product visualizations since they are complete 3D packages with modeling, animation, texturing and rendering tools.
Investigate and Ask
If you have a dream job or even think you know your first stepping stone into the creative industry, there is absolutely nothing holding you back from hunting down more specifics. Look to things like job listings for the software they mention, Google "what does X Studio use?" or even "how did X Studio create that shot?", and then there is the friendly email to the company that can work too.
Knowing what a company is using can help you get that foot in the door, but
again dont limit yourself to one application. There are many studios that use
their own proprietary software. For instance, Pixar uses Presto for animation,
and DreamWorks uses Premo for all their animations. Thats why the most
important thing is that you master the skills youll need, and not necessarily
the 3D application.
Hopefully your head is now in the right place for making the software decision, whether it's Maya, 3ds Max (or Softimage, CINEMA 4D, MODO, Houdini, etc.). Don't forget that it's not a forever commitment; should you want to change later, not all will be lost. Focusing on high-level skills and techniques that transfer to any application will set you up for a successful career; that, and hard work, patience and perseverance.

Rendering a Photorealistic Female in 3ds Max

See how you can jump ahead from the start with introductory training on
todays most popular creative software. Learn and grow skills with the same
training used in top schools, studios and companies all around the world.
For more info on comparing and contrasting 3ds Max and Maya, be sure to check out a newer article about the differences between these popular applications.

5 Tips for Character Rigging

Whether you are creating a character rig for yourself or the animators within your
pipeline, you need to take into account some of the important things that go into
creating a great character rig. Lets go over five tips that can help you create the best rig
possible.

Use the Right Rig for the Job


In most cases, character rigs arent built to make every sort of motion possible. Instead,
they are built specifically to be able to perform the actions that the character needs to
do. The first thing you should be asking yourself before starting to design the character
rig is: What is this particular rig going to be used for? If youre a hobbyist just wanting
to create your own rigs to be able to animate, then you need to know what you plan on
doing with this character.
Do you want a character rig to use in your game? Or do you want to create a rig to use
in your very own short film? Maybe this rig isnt going to be a humanoid; you might be
creating a creature type rig that walks on all fours. If you are working in a studio,
chances are you are aware of the intent of the character rig, but you still need to know
what to incorporate into the rig and what can be left out in order to save time.
For example, if the rig is going to be used in-game, then the level of editability may not need to be as high as it would for a short film or movie, where the focus isn't on gameplay but on animation. When creating a rig for a game it may only need
to be used for animating run or walk cycles and might not have any speaking roles. So
there is no reason to spend time creating a complex facial rig if there wont be any
emphasis on that area. Find out where the most focus needs to be so you arent
spending time on areas that wont ever be animated. After all, time is money.

Use Deformers for Facial Rigging


Creating a well-crafted facial rig can seem very daunting. Thats why knowing the right
techniques for creating a flexible facial rig is extremely important. Deformers are vital
tools to create detailed facial controls that can be created relatively quickly, but still
have a great level of flexibility.
For example, using something like a cluster deformer can be great for creating flexible cheek controls that give the animator a high level of control, or a wire deformer can be used to quickly create eyebrow controls. Deformers are a good alternative to creating extra
joints within the face for facial rigging because they can be done very quickly. Of
course joints will still need to be created for areas like the jaw so it can open and close
smoothly, but for detailed areas on the face like cheeks, nose, lips and eyebrows
deformers can be utilized. To learn more about deformers check out these in-depth
tutorials on facial rigging in Maya, Facial Rigging in 3ds Max and Facial Rigging in
CINEMA 4D.
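As a concrete illustration, here is a minimal sketch using Maya's Python commands (maya.cmds). It assumes a head mesh named head_geo exists in the scene; the vertex range and control names are made-up examples.

```python
# A minimal sketch of a cluster-based cheek control, assuming a mesh called
# 'head_geo' exists. The vertex range and names are hypothetical examples.
import maya.cmds as cmds

# Create a cluster deformer on the cheek vertices. cmds.cluster returns the
# deformer node and a handle you can translate to push the cheek around.
cheek_verts = 'head_geo.vtx[220:260]'
cluster_node, cluster_handle = cmds.cluster(cheek_verts, name='cheek_L_cluster', relative=True)

# Give the animator a friendlier control: a small NURBS circle that drives the handle.
ctrl = cmds.circle(name='cheek_L_ctrl', normal=(0, 0, 1), radius=0.5)[0]
pos = cmds.xform(cluster_handle, query=True, worldSpace=True, translation=True)
cmds.xform(ctrl, worldSpace=True, translation=pos)
cmds.parentConstraint(ctrl, cluster_handle, maintainOffset=True)
```

In production you would also paint the cluster's weights so the deformation falls off smoothly, but the basic wiring is this simple.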
Incorporate Both IK and FK
Inverse kinematics and forward kinematics (IK and FK) are both important in your
rig. While animators may prefer one over the other, incorporating both systems into
your character rig will make it much more user friendly. Having an IK and FK switch
in your arms will give the animators the flexibility to use whichever one is best for that
particular shot.
If you are working on a project with several animators using your rig you never know
which system they prefer, so eliminate the chance of having your rig returned to you
for further editing by adding both options in.

You can also create an IK and FK spine rig or a blend of both together to get more
control over the chest movements. An IK and FK spine is a great option for creating
very stylized cartoony animations.
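For a sense of how a switch like this is wired up, here is a minimal sketch in Maya's Python commands (maya.cmds). It assumes duplicate IK and FK joint chains and a settings control already exist; every name is a hypothetical example, and a production switch would also handle control visibility and the rest of the chain.

```python
# A minimal sketch of an IK/FK switch, assuming 'elbow_ik_jnt' and 'elbow_fk_jnt'
# chains drive a bind joint 'elbow_bind_jnt', and a control 'arm_settings_ctrl'
# exists. All names are made-up examples.
import maya.cmds as cmds

switch_ctrl = 'arm_settings_ctrl'
cmds.addAttr(switch_ctrl, longName='ikFkBlend', attributeType='double',
             minValue=0, maxValue=1, defaultValue=0, keyable=True)

# Constrain the bind joint to both chains, then drive the constraint weights.
constraint = cmds.orientConstraint('elbow_ik_jnt', 'elbow_fk_jnt', 'elbow_bind_jnt',
                                   maintainOffset=False)[0]
ik_weight, fk_weight = cmds.orientConstraint(constraint, query=True, weightAliasList=True)

# ikFkBlend = 0 -> fully IK, ikFkBlend = 1 -> fully FK (a reverse node flips the value).
reverse_node = cmds.createNode('reverse', name='arm_ikfk_reverse')
cmds.connectAttr(switch_ctrl + '.ikFkBlend', constraint + '.' + fk_weight)
cmds.connectAttr(switch_ctrl + '.ikFkBlend', reverse_node + '.inputX')
cmds.connectAttr(reverse_node + '.outputX', constraint + '.' + ik_weight)
```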

Create a Good Control Setup


In order for an animator to move the character around, they will need to have access to
all the control curves. A proper control setup is vital for creating a rig that is user
friendly. NURBS curves are the go-to option for creating control curves because they
wont show up during render time.
Make sure your control curves are easily accessible to the animator and are color
coded. For example, the left foot might have a blue control box whereas the right has a
red control box. The same goes with the arms. This helps the animators quickly see
what needs to be adjusted especially when dealing with poses where control curves
might overlap. For example, if the characters arms are crossed it can be very difficult
to determine which control to select for the right arm if both control curves are the
same color.
This should go without saying, but still needs to be mentioned. Dont forget to name
each control curve properly! Having a good naming convention is important when
setting up your control curves. You dont want an animator to be lost because the
elbow control is titled something like nurbsCircle27. That wouldnt make much sense
for the animator. Name it something that anybody can understand and is relative to
what it affects.
You should also avoid overpopulating your rig with too many controls; instead, find places where you can incorporate controls into the attribute channels of an existing curve.
For example, if you want to create a squash and stretch control for the head, instead of
creating a completely new control curve to place somewhere on the head, create a

squash and stretch attribute within the head control. This still gives the animators a
good level of flexibility but without too many control curves to keep track of.

You should also clean up the attributes on your controls. If an animator doesn't need to rotate or scale a control, or if doing so could have a negative effect on the rig, then lock and hide that attribute. That way the animator won't have to worry about it or accidentally do something they didn't intend.
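The housekeeping described above is easy to script. Below is a minimal sketch using Maya's Python commands (maya.cmds); the control names, color indices and locked attributes are example choices, not a required convention.

```python
# A minimal sketch of control-curve cleanup, assuming controls named
# 'foot_L_ctrl', 'foot_R_ctrl' and 'head_ctrl' already exist (example names).
import maya.cmds as cmds

def color_control(ctrl, color_index):
    """Color-code a control by enabling drawing overrides on its shape nodes.
    In Maya's index palette, 6 = blue and 13 = red."""
    for shape in cmds.listRelatives(ctrl, shapes=True) or []:
        cmds.setAttr(shape + '.overrideEnabled', 1)
        cmds.setAttr(shape + '.overrideColor', color_index)

color_control('foot_L_ctrl', 6)    # left side blue
color_control('foot_R_ctrl', 13)   # right side red

# Lock and hide attributes the animator should never touch on the foot controls.
for ctrl in ('foot_L_ctrl', 'foot_R_ctrl'):
    for attr in ('scaleX', 'scaleY', 'scaleZ', 'visibility'):
        cmds.setAttr(ctrl + '.' + attr, lock=True, keyable=False, channelBox=False)

# Add a squash-and-stretch attribute directly on the head control instead of
# creating a separate control curve for it.
cmds.addAttr('head_ctrl', longName='squashStretch', attributeType='double',
             minValue=-1, maxValue=1, defaultValue=0, keyable=True)
```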
Add Special Controls as Needed for Your Rig
Depending on what your rig is going to be used for try incorporating those extra
controls that can push your rig to the next level. If you are familiar with animation you
probably know what is beneficial to add to a rig. Putting in a squash and stretch control
for the head or the hips can help with those shots that need a bit more cartoony flair to them. Incorporating a shrink and stretch control in the legs can also help when creating a more stylized animation, and it's great for reducing knee pops in walk or run cycles.
Go one step further, and create a control for the chest that can quickly simulate a
character breathing by simply tying an attribute into the existing chest control. If you
are not sure what kind of extra things you can add to your rigs, ask some animators.
Theyll be sure to throw some ideas at you.
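A breathing control like the one just described can be wired up with set driven keys. Here is a minimal sketch in Maya's Python commands (maya.cmds), assuming a chest_ctrl control and a chest_jnt joint already exist; the names and scale values are made-up examples.

```python
# A minimal sketch of a breathing control driven by set driven keys.
# 'chest_ctrl' and 'chest_jnt' are hypothetical example names.
import maya.cmds as cmds

cmds.addAttr('chest_ctrl', longName='breathe', attributeType='double',
             minValue=0, maxValue=1, defaultValue=0, keyable=True)

# breathe = 0 keeps the chest at rest; breathe = 1 scales it up slightly,
# so the animator can key a single attribute to simulate breathing.
for axis, inhale_scale in (('scaleX', 1.03), ('scaleY', 1.0), ('scaleZ', 1.05)):
    cmds.setDrivenKeyframe('chest_jnt.' + axis,
                           currentDriver='chest_ctrl.breathe',
                           driverValue=0, value=1.0)
    cmds.setDrivenKeyframe('chest_jnt.' + axis,
                           currentDriver='chest_ctrl.breathe',
                           driverValue=1, value=inhale_scale)
```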

Master the 12 Principles of Animation


The 12 principles of animation are the most crucial techniques you must master as an animator. They were created by the pioneers of animation, Frank Thomas and Ollie Johnston, and first introduced in The Illusion of Life: Disney Animation. These 12 principles should be your ultimate guide for creating appealing and realistic character animations.

Explore the 12 principles and start mastering them in your own work to create captivating animations. If you ever get stumped on a principle or your animation needs some help, use this as a reference.

#1 Timing and Spacing

Timing and spacing in animation are what give objects and characters the illusion of moving within the laws of physics. Timing refers to the number of frames in-between two poses. For example, if a ball travels from screen left to screen right in 24 frames, that would be the timing: it takes 24 frames (or one second if you're working within the film rate of 24 frames per second) for the ball to reach the other side of the screen.
The spacing refers to how those individual frames are placed. Using the previous example, the spacing would be how the ball is positioned in the other 23 frames. If the spacing is close together, the object moves slower; if the spacing is further apart, the object moves faster.
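To make timing and spacing concrete, here is a small plain-Python sketch (not tied to any 3D package) that prints the ball's position on every frame of an even-spaced, 24-frame move; the distance and frame rate are arbitrary example values.

```python
# A small illustration of timing vs. spacing.
# The ball travels 0 -> 10 units over 24 frames at 24 fps: that is the timing (1 second).
frames = 24           # timing: number of frames between the two poses
distance = 10.0       # screen-space distance the ball covers
fps = 24

print("duration: %.2f seconds" % (frames / float(fps)))

# Spacing: where the ball sits on each in-between frame. Even spacing means the
# ball moves the same amount every frame, i.e. constant speed.
for frame in range(frames + 1):
    position = distance * frame / float(frames)
    print(frame, round(position, 2))
```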

#2 Squash and Stretch


Squash and Stretch is what gives flexibility to objects. There is a lot of squash and stretch
happening in real life that you may not notice; in animation this effect can be exaggerated.
For instance, there is a lot of squash and stretch that occurs in the face when someone
speaks, because the face is a very flexible area.

Squash and stretch can be implemented in many different areas of animation, like the eyes during a blink, or when someone gets surprised or scared and their face squashes down and then stretches. Squash and stretch is a great principle to utilize to exaggerate animations and add more appeal to a movement.


The easiest way to understand how squash and stretch works is to look at a bouncing ball.
As the ball starts to fall and picks up speed, the ball will stretch out just before impact, and
as the ball impacts the ground, it squashes, and as it takes off again it stretches.
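A common rule of thumb when squashing and stretching is to preserve volume. The small plain-Python sketch below shows one way to compute compensating scale values; the specific stretch factors are arbitrary examples.

```python
# A minimal sketch of volume-preserving squash and stretch for a bouncing ball.
# Scaling up on one axis while scaling down on the other two keeps the apparent
# volume (the product of the three scales) roughly constant.
import math

def squash_stretch(stretch_y):
    """Return (scale_x, scale_y, scale_z) for a given vertical stretch factor."""
    side = 1.0 / math.sqrt(stretch_y)   # compensate so x * y * z stays ~1
    return side, stretch_y, side

print(squash_stretch(1.4))   # stretched just before impact, roughly (0.85, 1.4, 0.85)
print(squash_stretch(0.6))   # squashed on impact, roughly (1.29, 0.6, 1.29)
```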

#3 Anticipation

Anticipation is used in animation to set the audience up for an action that is about to happen. Not only is anticipation needed to prepare the audience, but it's also required to sell believable movements. For example, before a baseball player pitches, they first need to move their entire body and arm back to gain enough energy, and before a parkour enthusiast leaps off a ledge, they first must bend their knees to prepare for the jump. Imagine if these actions had no anticipation; they would not be believable.

#4 Ease-In and Ease-Out

As any object or person moves or comes to a stop there needs to be a time for acceleration and deceleration. Without ease in and ease out (or slow in and slow out), movements become very unnatural and robotic. For example, as a car starts from a stop it doesn't just reach full speed in an instant; it first must accelerate and gain speed. As it comes to a stop it doesn't go from sixty to zero in the blink of an eye; if it did, it would not be realistic. Instead, it slowly decelerates until it reaches a complete stop.
The same must be accomplished in an animation, and the easiest way to accomplish ease in and ease out is to utilize the principle of spacing. As a character stands up from a sitting position the spacing will be closer together at the start, so they ease into the movement, and as they stand up, they will ease out of the movement. Without this acceleration and deceleration of actions everything would be very abrupt and jarring.
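Building on the spacing example from the timing and spacing principle, here is a plain-Python sketch of eased spacing using a smoothstep curve; it is only an illustration of the idea, not how any particular package computes its animation curves.

```python
# A minimal sketch of ease-in / ease-out spacing. Compared with even spacing,
# positions bunch up near the start and end of the move, which reads as
# acceleration and deceleration.
def smoothstep(t):
    """Ease-in/ease-out interpolation for t in [0, 1]."""
    return t * t * (3.0 - 2.0 * t)

frames = 24
distance = 10.0
for frame in range(frames + 1):
    t = frame / float(frames)
    position = distance * smoothstep(t)
    print(frame, round(position, 2))
```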

#5 Follow Through and Overlapping Action


Follow through and overlapping action can be considered two different principles, but
theyre still closely related.

Follow through is the idea that individual body parts will continue moving after the character has come to a stop. For example, as a character comes to a stop from a walk, every part of the body won't stop at the exact same time; instead, the arms may continue forward before coming to a settle. This could also be articles of clothing that continue to move as the character comes to a stop.
Overlapping action is very similar in that it means different parts of the body will move at different times. For example, if a character raises their arm up to wave, the shoulder will move first, and then the arm, and the elbow and hand may lag behind a few frames. You may have also heard this referred to as drag or lead and follow. You can even see an example of overlapping action in something like a blade of grass: the base moves first, and the rest of the grass follows behind at different rates, giving you that waving motion.
In real life everything moves at different speeds and at different moments in time, and that is why follow through and overlapping action are so important for capturing realistic and fluid movement.

#6 Arcs
Everything in real life typically moves in some type of arcing motion, and in animation you
should adhere to this principle of arcs to ensure your animation is smooth and moves in a
realistic way.
The only time something would move in a perfectly straight line is if youre trying to
animate a robot, because its unnatural for people to move in straight lines.
For example, if a character is turning their head, they will dip their head down during the
turn creating an arcing motion. You also want to ensure the more subtle things move in
arcs as well, for example the tips of the toes should move in rounded arcing motions as the
character walks.

#7 Exaggeration

Exaggeration is used to push movements further to add more appeal to an action. Exaggeration can be used to create extremely cartoony movements, or incorporated with a little more restraint into more realistic actions. Whether it's for a stylized animation or a realistic one, exaggeration should be implemented to some degree.
If you have a realistic animation, you can use exaggeration to make a more readable or fun movement while staying true to reality. For example, if a character is preparing to jump off a diving board, you can push them down just a little bit further before they leap off. You can also use exaggeration in the timing as well to enhance different movements or help sell the weight of a character.

#8 Solid Drawing

In 2D animation solid drawing is about creating accurate drawings with volume and weight, and thinking about the balance and anatomy in a pose. With 3D animation, animators are less likely to rely on their drawings, but the idea of solid drawing is just as important. With solid drawing you need to think about how you pose out your 3D character rig, ensuring there is correct balance and weight in the pose, as well as a clear silhouette. You also want to avoid what is called twinning, which basically means the pose you have created is mirrored across to the other side.

#9 Appeal

This principle can really come down to adding more appeal in many different areas of your animation, like appeal in posing. However, the most obvious example is appeal in the character design; you want to have a character that the audience can connect or relate to. A complicated or confusing character design can lack appeal. You can find areas on the character to push and exaggerate to create a more unique character design that will stick out in the audience's memory. For example, simply exaggerating the jaw of the character or pushing the youthfulness in the eyes can help create more appeal.

#10 Straight Ahead Action and Pose to Pose


Straight ahead and pose to pose refers to
the two different techniques for how you go
about animating. With straight ahead its very
spontaneous and more of a linear approach.
Youll create each pose or drawing of the
animation one after the other.
With pose to pose its much more methodical and
planned out, with just the most important poses
required to properly tell the action. For example,
you could approach a jump with just four poses,
the character standing, crouched, in the air, and
back on the ground. It allows you to work much more simply and ensure the posing and timing are correct before adding more detail.

#11 Secondary Action

Secondary action refers to creating actions that emphasize or support the main action of the animation; it can breathe more life into an animation and create a more convincing performance.
It's important to remember that the secondary action should typically be something subtle that doesn't detract from the main action, and it can be thought of as almost a subconscious action. For example, if one character is talking to another in a waiting room, the two of them talking would be the main action, and if a character began tapping their fingers nervously, that would be the secondary action.
A character walking down the street while whistling could be another example of secondary action, or a person leaning up against a wall talking to some people at school: the main action is the character leaning against the wall and talking, and then an action of them crossing their arms would be the secondary action.

#12 Staging
Staging is how you go about setting up your scene, from the placement of the characters to
the background and foreground elements and how the camera angle is set up. The purpose
of staging is to make the message of the animation unmistakably clear to the viewer. This
could be ensuring the camera is set up in a way to communicate the characters expression
clearly, or setting up two different characters so that both of them are easily viewed from
the specific angle.

You want to keep the focus on the purpose of the shot and what you want to communicate so the audience doesn't become confused.

Now that you know the meaning and purpose behind each principle, be sure to implement these 12 key principles into all your animations, and you'll be sure to create stunning work!
Explore animation tutorials on Digital-Tutors