Sunday, January 30, 2011

Determining Pixel Size and Frame Rate

Before beginning to edit, it is important to know what is going on with your file. The idea is to "mess" with it as little as possible. Though we will most likely be converting our files into ProRes 4:2:2, it is still a good idea to understand the basics of your camera/file before beginning to edit. I have tried to simplify the basics here; if you detect gross errors, please let me know in the comments.


1. What pixel size does your camera shoot?
Are you shooting 4:3 or 16:9? If you are shooting DVcam or another standard def (SD) camera, you will be recording a Storage Aspect Ratio of 720x480. BUT this is particularly confusing if you are shooting anamorphic, because even though it is SD, it can display as either 4:3 (the stored frame is actually 3:2, but we won't get into those details now) OR 16:9. This is because of the very confusing distinction between Storage Aspect Ratios (SAR) and Display Aspect Ratios (DAR). The SAR for standard def is always 720x480, but the DAR can differ: with the anamorphic (16:9) setting, the pixels are stretched along the x-axis on playback, making it appear as if there are more pixels. That is why when you shoot something anamorphic and don't have your settings set correctly in Final Cut, the image can appear squished. The camera actually shoots it squished to fit on the pickup chip, but the pixels are electronically stretched to look correct in 16:9.
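
To make the SAR/DAR relationship concrete, here is a small worked sketch of my own (not from the original post). It uses the standard NTSC DV numbers: a 704-pixel active picture width (the full 720 includes a little horizontal padding) and pixel aspect ratios of 10/11 for 4:3 and 40/33 for anamorphic 16:9.

```python
from fractions import Fraction

# NTSC DV stores 720x480, but only about 704 columns are active picture.
# The display shape comes from multiplying the stored width by the
# pixel aspect ratio (PAR): DAR = (active width x PAR) / height.
active_width, height = 704, 480

pars = {
    "4:3 (standard)":    Fraction(10, 11),
    "16:9 (anamorphic)": Fraction(40, 33),
}

for label, par in pars.items():
    dar = active_width * par / height
    print(f"{label}: DAR = {dar} ≈ {float(dar):.3f}")

# 4:3 (standard):    DAR = 4/3  ≈ 1.333
# 16:9 (anamorphic): DAR = 16/9 ≈ 1.778
```

Same 720x480 file, two different display shapes; the only thing that changes is how wide each pixel is drawn.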


WHEW! That is just for SD. If you are shooting HD, there are a number of other dimensions you will need to track. All HD (for practical purposes) is 16:9. But there are different frame sizes depending on the format and pickup chipset of your camera. The common ones are:

1280x720--usually referred to as “720p”

1440x1080--this is usually HDV, which stores non-square pixels that display at 1920x1080

1920x1080--usually referred to as EITHER "1080i" or "1080p," depending on whether it is interlaced or progressive (more on that below)

Note here that these names come from the vertical resolution: the count of horizontal pixel-rows stacked along the y-axis, not the number of pixels across each row along the x-axis. Look at your camera manual and see which settings your camera uses.
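
If you would rather interrogate the file itself than dig through the manual, here is one way to do it (my addition, not from the post). It assumes you have ffprobe from the FFmpeg project installed; the file name is a placeholder. It also reports the frame rate and field order, which come up in the next section.

```python
import subprocess

# Ask ffprobe for the first video stream's dimensions, frame rate,
# and field order (interlaced vs. progressive).
result = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=width,height,r_frame_rate,field_order",
     "-of", "default=noprint_wrappers=1",
     "my_clip.mov"],                  # placeholder file name
    capture_output=True, text=True, check=True,
)
print(result.stdout)
# Example output for an HDV clip (illustrative only):
# width=1440
# height=1080
# r_frame_rate=30000/1001
# field_order=tt
```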



2. What is the frame rate of your file?
The most common frame rates for North America are (the odd-looking fractional ones are unpacked in the sketch after this list):
23.976 fps
24 fps
29.97 fps
30 fps
60 fps
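
Those decimals are rounded labels for exact NTSC fractions; the famous 1000/1001 slowdown dates back to the introduction of color TV. A quick sketch of my own showing the exact values:

```python
from fractions import Fraction

# The "NTSC" rates are exact ratios; the decimals are just rounded labels.
ntsc_rates = {
    "23.976 fps": Fraction(24000, 1001),
    "29.97 fps":  Fraction(30000, 1001),
}

for label, rate in ntsc_rates.items():
    print(f"{label} is exactly {rate} = {float(rate):.6f}")

# 23.976 fps is exactly 24000/1001 = 23.976024
# 29.97 fps is exactly 30000/1001 = 29.970030
```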

The next thing you need to know is whether it was recorded as an interlaced or a progressive signal.

Interlacing was introduced way back in the late 1930s as a way to cram extra information into the limited broadcast bandwidth. What they did was take a TV frame and split it in two along every other line of video, making two fields from every one frame. This worked in conjunction with the cathode-ray tube monitors that were in virtually every TV set in the world. It worked suitably until the advent of the VGA standard in the 1980s, when computers were fast enough, and their graphics cards strong enough, to draw each "line" of video in sequence. This is called progressive scan.
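
To make the field idea concrete, here is a small sketch of my own (not from the post) using NumPy array slicing to split a frame into its two fields and weave them back together:

```python
import numpy as np

# A stand-in 1080-line frame: height x width, grayscale for simplicity.
frame = np.arange(1080 * 1920).reshape(1080, 1920)

# Interlacing splits the frame along alternating lines into two fields:
# even-numbered rows form one field, odd-numbered rows the other.
upper_field = frame[0::2]   # rows 0, 2, 4, ... -> 540 lines
lower_field = frame[1::2]   # rows 1, 3, 5, ... -> 540 lines
print(upper_field.shape, lower_field.shape)   # (540, 1920) (540, 1920)

# Weaving the two fields back together reconstructs the full frame.
rebuilt = np.empty_like(frame)
rebuilt[0::2] = upper_field
rebuilt[1::2] = lower_field
assert (rebuilt == frame).all()
```

On a CRT the two fields were drawn one after the other, a fraction of a second apart, which is why fast motion shows comb-like artifacts when an interlaced frame is displayed progressively.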



Most likely, if it is DVCam it is recorded interlaced. In HD, the signal can be either interlaced ("i") or progressive ("p").

ARRRGH! As if it isn't confusing enough already! You need to decide, as video artists, how your video will primarily be displayed. Most broadcast programming is transmitted interlaced (1080i), though a few networks broadcast progressive 720p. Even if you have the latest and greatest 1080p HD monitor, if you are watching off-air or via cable, it is most likely interlaced. BUT if you are watching as a download or from Blu-ray, it is progressive. So if you want your work to be viewed mainly on the web, or projected from a computer (not DVD), you will want it, as best as you can, to be shot and edited in progressive mode.
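
If the footage was already shot interlaced and the target is the web, one common fix (my suggestion, not from the post; it assumes FFmpeg is installed, and both file names are placeholders) is to deinterlace with FFmpeg's yadif filter while converting to a ProRes intermediate:

```python
import subprocess

# Deinterlace with yadif ("yet another deinterlacing filter") and write
# an edit-friendly ProRes file. Both file names are placeholders.
subprocess.run(
    ["ffmpeg", "-i", "interlaced_source.mov",
     "-vf", "yadif",
     "-c:v", "prores",
     "progressive_output.mov"],
    check=True,
)
```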
