Video Upconversion: Facts and Fallacies
The process of converting video from interlaced to progressive scanning - 480i to 480p or 1080i to 1080p - is called deinterlacing. Originally introduced as an option for improving picture quality, especially from DVDs, deinterlacing is now a necessity for most HDTVs, few of which support interlaced scanning at all. That's a consequence of the transition away from CRT-based sets to "fixed-pixel" display technologies, such as plasma, LCD, DLP, and LCoS, which work by flashing a complete frame on the screen at a time; what used to be scanning lines are now pixel rows. So today deinterlacing is less a feature that can improve your TV picture than an essential process that can make the picture worse if it isn't done right.
Although it seems straightforward enough to combine two fields of a frame and display them all at once, there are a couple of gotchas. The first is that when video is shot in interlaced format - which is how most TV and video cameras operate - the fields are shot sequentially, with the second field of a frame acquired a sixtieth of a second after the first. Any motion that occurs between the two will cause "jaggies" if the two fields are just slapped together and displayed simultaneously. Consequently, a deinterlacer must incorporate sophisticated motion-compensation techniques to achieve good results with typical video-originated material, and some are distinctly better in this regard than others.
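A toy sketch can make the jaggies problem concrete. Here (in Python; the grid size and pixel values are invented purely for illustration) an object sits at one column in the first field and has moved two columns to the right by the time the second field is captured a sixtieth of a second later. Naively weaving the two fields into one frame makes the object's edge zigzag on alternating lines, which is exactly the comb artifact a motion-compensating deinterlacer must avoid:

```python
# Two fields of a moving object, captured 1/60 second apart.
# Even rows come from field 1, odd rows from field 2.
W = 8
field1 = [[1 if c == 2 else 0 for c in range(W)] for _ in range(2)]  # object at column 2
field2 = [[1 if c == 4 else 0 for c in range(W)] for _ in range(2)]  # object at column 4

# Naive "weave": interleave the two fields' lines into one frame.
frame = []
for r1, r2 in zip(field1, field2):
    frame.extend([r1, r2])

for row in frame:
    print("".join("#" if v else "." for v in row))
# ..#.....
# ....#...
# ..#.....
# ....#...
```

The object's edge alternates between columns on successive lines instead of forming a clean vertical stroke; scaled up to real video, that alternation is visible combing wherever there is motion.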
The second potential issue arises with material originally shot on film. When film is transferred to interlaced video (for broadcast or DVD mastering), it must be converted from its native frame rate of 24 fps (frames per second) to the 30 fps used for TV in North America and other NTSC countries via a method called 2:3 (or 3:2) pulldown, which pads out the sequence with a repeated field every other frame. For example, imagine four film frames, A, B, C, and D. When this 24-fps sequence is transferred to 30-fps interlaced video, each frame is split into two fields, which are organized in a sequence like this: A1, A2, B1, B2, B1, C2, C1, D2, D1, D2, and so on.
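The cadence above can be generated mechanically: frames alternately contribute two and three field periods, while field parity (first field vs. second field) must keep alternating across the whole sequence. A short sketch (the frame labels A through D are just the example above, not any real API):

```python
def pulldown_2_3(frames):
    """Return the field sequence produced by 2:3 pulldown.

    Frames alternately contribute 2 and 3 fields; the field number
    (1 = first field, 2 = second) must alternate 1, 2, 1, 2, ...
    across the entire sequence, which is why repeated frames can
    contribute fields in either order.
    """
    fields = []
    parity = 1                          # next field to emit: 1 or 2
    for i, frame in enumerate(frames):
        count = 2 if i % 2 == 0 else 3  # 2, 3, 2, 3, ...
        for _ in range(count):
            fields.append(f"{frame}{parity}")
            parity = 3 - parity         # toggle 1 <-> 2
    return fields

print(pulldown_2_3(["A", "B", "C", "D"]))
# ['A1', 'A2', 'B1', 'B2', 'B1', 'C2', 'C1', 'D2', 'D1', 'D2']
```

Four film frames become ten fields, which is exactly the 24-to-30 fps ratio: every 4 frames of film fill 5 frames (10 fields) of interlaced video.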
The beauty of film-originated video is that it can be deinterlaced perfectly to 60-fps progressive-scan format - but only if the deinterlacer accurately detects and compensates for the 2:3 pulldown. Then each field is reunited with its mate to restore the original film frame, and, following our example above, you wind up with a sequence like this: A, A, B, B, B, C, C, D, D, D, and so on. If the deinterlacer doesn't handle the pulldown correctly, however, it will create some video frames out of fields from two different film frames, causing an ugly artifact called "combing" if there is any motion between those original frames. (A deinterlacer might be able to fudge this by not switching to film mode at all, but even if it had excellent video-mode performance, resolution would suffer.) This is why 2:3 pulldown compensation is so important in progressive-scan DVD players and HDTV sets.
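The difference between cadence-aware inverse telecine and a naive deinterlacer can be sketched on the example field sequence itself (a schematic illustration, not any real deinterlacing API - real deinterlacers operate on pixel data, not labels):

```python
# The 2:3 pulldown field sequence from the example above.
fields = ["A1", "A2", "B1", "B2", "B1", "C2", "C1", "D2", "D1", "D2"]

# Cadence-aware inverse telecine: every field belongs to exactly one film
# frame, so dropping the field number restores the correct frame for each
# 60-fps output period.
correct = [f[:-1] for f in fields]
print(correct)   # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']

# Naive approach: weave each field with the one before it. Wherever the
# two fields come from different film frames, the woven frame would show
# combing if there is any motion between those frames.
naive_pairs = list(zip(fields, fields[1:]))
combed = [(a, b) for a, b in naive_pairs if a[:-1] != b[:-1]]
print(combed)    # [('A2', 'B1'), ('B1', 'C2'), ('C1', 'D2')]
```

Three of the nine woven pairs mix fields from different film frames, and each of those mixed pairs is a potential combing artifact - which is why detecting the cadence, rather than blindly weaving, matters so much.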
Deinterlacing: The Bottom Line Deinterlacing is one of the most critical video-processing steps in today's HDTVs, all of which incorporate circuits for this purpose. But virtually all DVD players available now also have deinterlacers, as do an increasing number of A/V receivers and preamplifiers. You may therefore be faced with the question of where in the chain to perform deinterlacing for certain sources. Since most HDTVs have good deinterlacers, leaving the job to your TV is usually a reasonable default choice. For DVDs, the job is better handled in the player if, and only if, it has a very good deinterlacer. Most cheapie progressive-scan models, however, are mediocre in this department. Look for an indication that your player uses a deinterlacing chip from a company known for performance in this category, such as Faroudja, Silicon Image, Silicon Optix, or Gennum. Deinterlacing in a receiver or preamp will seldom make sense unless it is also scaling the image.
Scaling: The Background An HDTV set has to handle at least four basic video formats: regular old 480i standard-definition (SD) for conventional analog broadcasts and videotapes, 480p SD (mainly from progressive-scan DVD players), and the two widescreen high-definition (HD) formats, 720p and 1080i, which provide much greater picture detail. An HDTV set should, therefore, be able to accommodate inputs in a number of scan formats and in both 4:3 and 16:9 aspect ratios for standard-definition signals (4:3 is not used for high-definition broadcasts).
It's possible to design a CRT display to handle all of those formats directly, but since it's cheaper to convert some formats to others than to make a full-bore multi-scanning monitor, most rear-projection and direct-view CRT sets take the conversion approach. And in the case of fixed-pixel displays, such as LCD, DLP, LCoS, and plasma, all incoming signals must be converted to a progressive-scan format that exactly matches the display's pixel array. Most CRT-based HDTVs work at 480p and 1080i and convert every other incoming signal to one of those native formats. That usually means 480i gets bumped up to 480p and 720p gets converted to 1080i. (Because 720p actually has the highest data bandwidth and horizontal scan rate, it is easier from the display-design standpoint to convert it "up" to 1080i than to step 1080i "down.") The process of converting between scan formats is known as scaling, and interlaced signals must be deinterlaced prior to any other processing, which is one reason deinterlacing performance is so critical in HDTVs.
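A back-of-the-envelope check of the horizontal-scan-rate point, assuming the standard SMPTE total line counts per frame (750 for 720p, 1125 for the 1080-line formats - these include the non-visible lines, which is why they exceed 720 and 1080):

```python
# Horizontal scan rate = total lines per frame x frames per second.
fmts = {
    "720p/60":  (750, 60),    # total lines per frame, frames per second
    "1080i/30": (1125, 30),   # 30 frames (60 interlaced fields) per second
}

rates = {}
for name, (lines, fps) in fmts.items():
    rates[name] = lines * fps / 1000  # in kHz
    print(f"{name}: {rates[name]:.2f} kHz horizontal scan rate")
# 720p/60: 45.00 kHz
# 1080i/30: 33.75 kHz
```

By this measure, 720p does demand the faster line rate from a CRT's scanning circuitry (45 kHz vs. 33.75 kHz), which is consistent with the design argument for converting 720p "up" to 1080i rather than the reverse.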