
Aspect Ratio

There are many different aspect ratios in use in the film world, with some differences
between common formats used in different countries. The most common aspect
ratios used in video are 16:9 and 4:3.

Sometimes a 16:9 aspect ratio project is displayed in a 4:3 frame. This is called
letterboxing. Sometimes a 4:3 frame is presented within a 16:9 frame. This is
called pillarboxing.
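If you're curious about the geometry, here is a minimal sketch in Python (function and numbers are my own, not from any particular editing tool) that computes the black bars you get when fitting one frame inside another:

```python
# A minimal sketch (my own names/numbers): fit one aspect ratio
# inside another and report the black bars that result.

def fit_frame(src_w, src_h, dst_w, dst_h):
    """Scale a source frame to fit a destination frame, preserving
    aspect ratio, and return the scaled size plus the padding bars."""
    scale = min(dst_w / src_w, dst_h / src_h)
    w, h = src_w * scale, src_h * scale
    pad_x = (dst_w - w) / 2   # left/right bars -> pillarboxing
    pad_y = (dst_h - h) / 2   # top/bottom bars -> letterboxing
    return w, h, pad_x, pad_y

# A 16:9 picture shown in a 4:3 (640x480) frame gets letterboxed:
print(fit_frame(1920, 1080, 640, 480))   # 60 px bars top and bottom
# A 4:3 picture shown in a 16:9 (1280x720) frame gets pillarboxed:
print(fit_frame(640, 480, 1280, 720))    # 160 px bars left and right
```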

The decision about which aspect ratio a project will be presented in
MUST be made before shooting starts! Mixing footage shot at different aspect
ratios is possible, but a pain! This should be one of the first technical decisions
you make.

Note: these decisions have no impact on sound or music at all!


Frame Rates
[Diagram: one half second of playback - 12 frames elapse at 24 frames per second, 15 frames at 30 frames per second]

There are two standard frame rates used in film and video: 24 frames per second and 30 frames per second.
Audio does not have a frame rate (sound does not have frames), so the choice of frame rate has no impact on
sync issues. Animation is often created at 15 frames per second, but again, since audio does not have
frames, this has no impact on sync.

So, why are there sometimes sync issues (particularly noticeable with dialog / lip sync)?

1.) Audio for film is recorded at 48kHz. Audio for music is recorded at 44.1kHz. If you record at 44.1kHz and tell
your editing application it was recorded at 48kHz, you will get MAJOR sync issues.
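To get a feel for how bad this gets, here is a quick illustrative sketch (my own numbers, no real audio involved) of the drift:

```python
# Illustrative sketch: drift when a 44.1kHz recording is played back
# as if it were 48kHz. No real audio involved; numbers are my own.
RECORDED = 44_100   # true sample rate of the recording
ASSUMED  = 48_000   # rate the editing application was told

for real_seconds in (1, 10, 60):
    # Playing the same samples at a higher rate finishes early.
    played = real_seconds * RECORDED / ASSUMED
    drift_ms = (real_seconds - played) * 1000
    print(f"{real_seconds:3d}s of sound plays in {played:6.2f}s "
          f"({drift_ms:7.1f} ms early)")
# 60s of sound plays in 55.13s -- nearly 5 seconds out of sync.
```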

2.) Sometimes film is shot at higher (or lower) frame rates for special effects, time-lapse, etc. If the film is
recorded at 60fps and played back at 24fps (thereby creating slow motion), any audio recorded at the time
will (obviously) not sync up.
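The arithmetic is simple; here is a tiny sketch (illustrative values only):

```python
# Illustrative sketch: overcranked footage vs. real-time audio.
shot_fps, play_fps = 60, 24
slowdown = shot_fps / play_fps      # picture runs 2.5x slower
action_seconds = 10                 # 10s of real-world action...
print(action_seconds * slowdown)    # ...takes 25s on screen
# Audio recorded during the take still lasts 10s, so it cannot sync.
```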

3.) There is one other sync issue that is so minor most people don’t ever notice, but here it is anyway:

Televisions and other video equipment don't actually play back at 30fps or 24fps, but rather 0.1% (1/10 of 1%) slower.
You may have seen these rates: 29.97 or 23.976. These are the real rates of playback. (Long story on why
this is – just take my word for it.) All of these numbers relate to our standards of electrical power (60Hz).
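If you want to see where 29.97 and 23.976 come from, here is a quick sketch: the real rates are the nominal rates multiplied by 1000/1001 (the standard NTSC relationship):

```python
# Sketch: the "real" playback rates are the nominal rates times
# 1000/1001 (the standard NTSC relationship) -- about 0.1% slower.
for nominal in (24, 30, 60):
    actual = nominal * 1000 / 1001
    slower = (1 - actual / nominal) * 100
    print(f"{nominal} fps -> {actual:.3f} fps ({slower:.3f}% slower)")
# 24 fps -> 23.976 fps (0.100% slower)
# 30 fps -> 29.970 fps (0.100% slower)
```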

So, film shot at 24fps and played back at 23.976 on video is running at the wrong speed, and the audio will
be out of sync. Guaranteed! However, the differences are so small, and most shots with dialog are so short,
that no one will ever notice.

This small difference is why professional sound recording devices offer rates like 48.048kHz and 96.096kHz
to record in. By tricking Pro Tools into thinking the recording was actually done at 48kHz or 96kHz, the
film and audio will be in sync. On our Sound Devices recorders there is a setting for 48.048F. The “F” in this
case is for “fakey,” since the .wav file is labelled as having been recorded at 48kHz.
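Here is a sketch of the arithmetic behind the trick (my own variable names):

```python
# Sketch (my own variable names): recording at 48.048kHz but labeling
# the file 48kHz slows the audio by the same 0.1% as the picture.
record_rate  = 48_048     # rate the recorder actually samples at
labeled_rate = 48_000     # rate written into the .wav header
audio_factor = labeled_rate / record_rate   # 1000/1001
film_factor  = (24 * 1000 / 1001) / 24      # 23.976 / 24
print(audio_factor, film_factor)            # both 0.999000999...
# Identical factors, so picture and sound slow down together.
```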

This is often referred to as “pulldown” or “pullup.” Those terms refer to the process by which 24 frames
are stretched to 30 in order to show film on video and TV, but in practice it means compensating for this
0.1% speed difference.
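For the curious, here is a small sketch of the classic 3:2 pulldown cadence (a simplified model of the field pattern; cameras and decks handle the details):

```python
# Sketch: a simplified model of 3:2 pulldown. Four film frames are
# spread across ten video fields (five interlaced video frames),
# which stretches 24 fps film to 30 fps video.
def pulldown_32(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        # Alternate 2 fields, then 3 fields, per film frame.
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D'] -> 10 fields
```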
Compression
In audio we often talk about using Compressors to control the dynamic range of our audio signals.

There is another kind of compression used in video that has nothing to do with this, except for sharing the
same name. Video is almost always “compressed” in one way or another. Why is this done?

Video has to fit onto tape, play back through a device, or play off of a disc like a DVD. If the video were
not compressed, we could only play back very short bits of video, or our playback would stutter and skip.
Try this for fun: take a 15-second clip of video, export it as an “uncompressed” QuickTime or AVI file, burn it
to a DVD as a data file, and try to play it from your laptop.

What will happen? The file will lock up, stutter and skip. The physical disc is not able to spin fast enough to
send all of that information to the display device or computer.
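Here is the back-of-the-envelope arithmetic (assuming 8-bit RGB at 3 bytes per pixel, and a roughly 1.35 MB/s read speed for a 1x DVD drive):

```python
# Back-of-the-envelope sketch. Assumptions: 8-bit RGB (3 bytes per
# pixel) and a 1x DVD read speed of roughly 1.35 MB/s.
width, height, fps = 1920, 1080, 24
rate = width * height * 3 * fps          # bytes per second
dvd_read = 1.35e6                        # approx. 1x DVD, bytes/s
print(f"Uncompressed HD: {rate / 1e6:.0f} MB/s")       # ~149 MB/s
print(f"Shortfall: ~{rate / dvd_read:.0f}x too slow")  # ~111x
```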

DVDs, for instance, use heavily compressed video in order to fit long programs onto the disc. That format is
known as MPEG-2. (MPEG = Moving Picture Experts Group.) This compression is known as “lossy” because
information and quality are lost in the process of making the video smaller. YouTube videos look like crap
because they are super-compressed so that they play back on a variety of machines with different speeds of
internet connectivity.

Compression is a huge area of research, and there are new formats released regularly. For devices like DVD
or Blu-ray, however, the manufacturers agree on a common spec so that devices made by one company
will play the discs burned by another company’s device. These specs are then locked down and not
changed.

Anytime you work with video – whether you know it or not – you are making decisions about compression.
This is really important information to understand.

[Image: six exported versions of the clip, labelled a through f, with their file sizes shown]

Here is a comparison of different compression schemes and frame sizes. The original footage is a clip
exactly 1 minute long, shot at 1920 x 1080 pixels (full HD).

• Example “a” is the clip saved in the “Animation” codec, which is (almost) uncompressed. Note the file size of 9.15 GB. That means you could
only fit about 30 seconds of movie onto a standard DVD disc.
• Example “b” is the same 1-minute footage, but compressed using the DV codec and rendered at a frame size of 720 x 480.
• Example “c” is compressed using H.264 set for “Apple TV,” at a frame size of 720 x 480.
• Example “d” is compressed using MPEG-2 (for standard DVDs) at a size of 720 x 480.
• Example “e” is compressed using the ProRes codec at the full HD size of 1920 x 1080. This codec is a good choice when editing a project shot on
different formats: convert everything to ProRes, then edit.
• Example “f” is compressed with H.264 for a webcast. The frame size is 360 x 240.
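To connect file sizes to disc capacity, here is a rough sketch (only the 9.15 GB figure comes from the list above; the other per-minute sizes are made-up placeholders for illustration):

```python
# Rough sketch: minutes of each rendering that fit on a 4.7 GB DVD.
# Only the 9.15 GB "Animation" figure comes from above; the other
# per-minute sizes are made-up placeholders for illustration.
DVD_GB = 4.7
gb_per_minute = {
    "Animation (near-uncompressed HD)": 9.15,
    "DV 720x480 (placeholder)": 0.22,
    "MPEG-2 DVD 720x480 (placeholder)": 0.05,
}
for codec, gb in gb_per_minute.items():
    print(f"{codec}: ~{DVD_GB / gb:.1f} min per disc")
# Animation: ~0.5 min per disc -- i.e. the "about 30 seconds" above.
```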
Comparison of relative frame sizes, from full HD to web video: 1920 x 1080 (full HD), 1280 x 720, 960 x 720 (HVX), 720 x 480 (DV), and 360 x 240 (web).


Timecode, Capturing, & Ingesting.
Timecode is a sometimes hidden, sometimes visible indication of where (in time) we are playing or
recording. In consumer formats (like DV) the timecode is visible during playback through the camera, but
not always when you are editing. Here is where you can get into trouble:

If you have more than one section of tape labelled “3:00” (for example), the computer will get confused if you
tell it to go to that time. That can happen if you turn off the camera, turn it back on, and fast-forward
past the point where you last recorded. The camera will start the new section with the timecode 0:00:00:00,
which can lead to multiple points with the same time reference.
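Here is a toy illustration of the problem (made-up shots and timecodes):

```python
# Toy illustration (made-up shots/timecodes): after a mid-tape reset,
# one timecode address points at two different places on the tape.
tape_log = [
    ("0:00:00:00", "shot 1"), ("0:03:00:00", "shot 2"),
    # camera powered off, then fast-forwarded past the last take...
    ("0:00:00:00", "shot 3"), ("0:03:00:00", "shot 4"),
]
seen = {}
for tc, shot in tape_log:
    if tc in seen:
        print(f"Ambiguous timecode {tc}: {seen[tc]} vs. {shot}")
    seen[tc] = shot
# Ambiguous timecode 0:00:00:00: shot 1 vs. shot 3
# Ambiguous timecode 0:03:00:00: shot 2 vs. shot 4
```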

The process of getting the footage from tape into the computer is called “capturing.” If done correctly, the
editing application will be able to locate precise spots on the tape repeatedly. For instance, if you need to
recapture footage weeks (or even years) after the initial capture session, the editing application will be able
to read the timecode of the tape and capture exactly the same clip.

The correct way to capture footage is to log the section of the tape you want by establishing “in” and “out”
points, then to capture the clip. You may be able to “batch capture” several clips, but often this fails, leaving
you unsure of what material you actually have. If you have timecode breaks, dropped frames, or other
screw-ups, you may need to use the “capture now” feature. This is a last resort and should be avoided,
because it severs the relationship between the timecode and the editing application. This process just
captures blindly.

For footage that is stored on memory cards (P2, SD, etc.) the process of getting the footage into the
computer is called “ingesting.” In FCP, the process is called “Log and Transfer.” Often this process involves not only
copying the data files from the memory card, but also transcoding the footage from the camera (or
acquisition) format to the computer (or editing) format. In FCP, the codec most often used is ProRes 422.
File-type standards, archiving, etc.
Over the years, certain standards have emerged for how files are passed between professionals working in
film and video. This stuff does change as new formats gain acceptance, so stay on top of this information.
The easiest and cheapest way to stay current on this kind of information (in addition to more creative topics)
is to read the online forums that professionals use and to subscribe to the (sometimes free) trade
magazines. Here are some suggestions:

Free: Post Magazine (always the same 6 articles, but good)
      Millimeter Magazine

Paid: EQ / Mix / ReMix (all OK, pretty mainstream)
      Sound on Sound (British, but with a US edition. My personal favorite. Very gear-heavy)
      American Cinematographer (ASC, very Hollywood, but great tech info)
      The Wire (no tech, all art)
      The Soundtrack (no tech or art, all theory - from the London School of Sound [Chion, et al.])

Web:  Cinematography.com or .net (incredible resources)
      Digidesign DUC (special post section)
      Creative Cow (for AE, FCP, etc.)
      Lynda.com for training

File Format Standards (as of today!)

Audio: .wav
Video: QuickTime (for reference videos), compressed with H.264, DV, or ProRes 422
Moving Audio: OMF, or AAF (maybe!?)
Archiving Projects: XML to make backups of projects
Note: most editing programs (except consumer programs like iMovie) store the media
separately from the program file. You should back up both, but in different ways. Make
permanent copies of your original media and keep one copy off-site. Make daily backups of your program files.
Since these are small, back them up via email, iDisk, or a commercial online backup service. Make sure
your backup system works automatically -- when you are busy you won’t take the time!
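As one possible way to automate the daily project-file backup, here is a minimal sketch (paths and schedule are placeholders; this is not the only way to do it):

```python
# One possible way to automate the daily project-file backup.
# Paths are placeholders; media files are archived separately.
import datetime
import pathlib
import shutil

PROJECTS = pathlib.Path("~/Projects/EditSessions").expanduser()
BACKUPS = pathlib.Path("~/Backups").expanduser()

def backup_projects():
    """Zip the (small) project files into a dated archive, ready to
    email or push to an online backup service."""
    BACKUPS.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    archive = shutil.make_archive(
        str(BACKUPS / f"projects-{stamp}"), "zip", root_dir=PROJECTS)
    print("Wrote", archive)

backup_projects()  # schedule with cron/launchd so it runs daily
```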

One last note: buy UPS battery-backup power supplies for all your machines, including laptops. Ask me
about lightning!
