VS / MS: Explanation of Field Order -v- Frame Based.


Post by Administrator » Fri Nov 03, 2006 6:50 pm

There is an excellent thread here:
http://phpbb.ulead.com.tw/EN/viewtopic. ... 8525#88525
with a lot of detailed information that will hopefully explain the correct Field Order to use and why.

Excellent write-ups by various forum members.
Black Lab wrote: Here's a general description - Fields and Frames.
Ken Berry wrote: A little confusingly, what the Video Help explanation calls Field Order A is Lower Field First in Video Studio (it was still called Field Order A up to and including VS8, though, and some other editing programs still use this terminology).

To take the explanation a little further, if simplistically: depending on the encoding method used, the lower half-frame will, as the term Lower Field First (LFF) implies, come first, followed by the upper half-frame. The other encoding system displays the upper half-frame first, and this field order is called Upper Field First (UFF). But it is only after the split second involved in displaying both half-frames that you have a complete frame displayed, which your eye sees as a single frame.
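A quick sketch of the half-frame idea, in plain Python with a toy six-line "frame" standing in for a real image (the names here are illustrative, not anything VideoStudio exposes):

```python
# A frame is just a stack of scanlines, numbered from the top (line 0).
frame = [f"line {i}" for i in range(6)]

upper_field = frame[0::2]  # even-numbered lines: 0, 2, 4
lower_field = frame[1::2]  # odd-numbered lines: 1, 3, 5

# Field Order only decides which half-frame is displayed first:
lff_order = [lower_field, upper_field]  # Lower Field First
uff_order = [upper_field, lower_field]  # Upper Field First
```

Either way, the two fields together contain every scanline of the frame; only the display timing differs.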

But if you get the order the wrong way round, your eye will detect minor errors: you will get some jitter, or straight lines during a panning shot will look jagged, even though a full frame is displayed. To correct the jitter or jaggies, you simply reverse the field order: if your video has been processed using LFF, then use UFF instead, or vice versa.

Generally speaking, video captured from an *analogue* source comes into your computer via some capturing device as a digital signal using UFF. If it comes from a mini-DV digital video camera, including one used for pass-through of analogue capture, it will almost invariably use LFF. But there are exceptions. A lot (all?) of mini DVD disc cameras, for instance, use UFF, as do some (all?) hard disc digital video cameras. Some higher end capture devices which are hardware encoded to capture analogue video, will use LFF instead of the more usual UFF for analogue capture. So you have to find out about this before you get too far into your editing.

Video Studio, like most editing programs, can handle UFF, LFF and Frame Based/progressive scan video with no problems. BUT -- and it's a BIG 'but' -- any single project can only use one Field Order. Of course, you can have different projects on one DVD, and you can burn separate projects using different Field Orders onto one disc, because if otherwise processed properly they will be DVD-compliant regardless of whether they use LFF, UFF or Frame Based. But you can't have a single project which mixes video using both UFF and LFF.

As for slideshows, still images are already a single frame. They don't need the half frames to come together to display properly. They can thus use Frame Based properly. I know all the people above have said they use LFF (or sometimes even UFF) when doing slideshows or displaying still images as part of a larger video. I happen to use Frame Based for my slideshows (with anti-flicker switched on), though of course if I include photos in a video, then they take on the Field Order used by that particular video...

And at the end of the day, this does not matter. If a still image -- a single full frame, remember -- is displayed using LFF or UFF first, then each "half" frame is still the same single frame displayed twice. So the end result is the same. :lol:
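That claim about stills is easy to check in code: both fields of a still come from the same frame, so weaving them back together gives the same picture whichever field was shown first (a plain Python sketch; `weave` is an illustrative helper, not a VideoStudio function):

```python
frame = [f"line {i}" for i in range(6)]
upper = frame[0::2]  # even lines of the still
lower = frame[1::2]  # odd lines of the still

def weave(upper, lower):
    """Interleave two fields back into a full frame by line parity."""
    out = []
    for u, l in zip(upper, lower):
        out.extend([u, l])
    return out

# Line positions are fixed by parity, so display order (LFF vs UFF)
# changes timing only -- the reassembled still is identical either way.
assert weave(upper, lower) == frame
```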

As a final comment, a computer monitor is quite different from a standard (analogue) TV. In effect it is similar to a digital TV which uses progressive scan. It can happily display Field Based and Frame Based video correctly (though you will still get jaggies etc if you use the wrong Field Order). An analogue TV, however, will have difficulty with video which started life as UFF or LFF video as opposed to a Frame Based still image. A progressive scan TV, on the other hand, because of the technology involved, will display Frame Based video properly, and in effect will convert a Field Based video signal on the fly to display properly as well.

Some analogue capture devices, including, as I understand it, the popular Adstech DVD Xpress DX2, use Frame Based, or more correctly, progressive scan, as their scanning method, because more and more people have progressive scan TVs. However, again as I understand it, once captured as Frame Based, the video can be easily and successfully processed in Video Studio using a Field Order, and will display properly on both analogue and digital TVs after being burned to DVD.

Here endeth the lesson... :lol:
PeterHF wrote: It's been a while since I took this stuff in university, but here's what I remember about interlaced video.

The whole idea of interlaced video is to handle motion as well as possible, given that the video stream is limited to 'painting' the entire screen 30 times a second. At that rate, motion is quite jittery. So they came up with the idea of refreshing the even lines and then the odd lines, 60 times a second. You only refresh half the screen on each pass, but the overall effect is that motion looks better this way, because of the combination of a certain amount of persistence on the screen and what your eye and brain do with the information.

It only takes a moment to realize that one must display the fields in the same order they were recorded. Suppose, for example, an object is moving across the screen from left to right at a constant rate, therefore moving the same amount each 1/60 of a second, and the video is captured upper field first. Let's say that at time zero the object is at position A. If we are capturing upper field first, then the first upper field will show the object at position A. If the object moves X amount in 1/60 second, the first lower field will show the object at position A + X, and so on. The object's position will be as follows:

A (first upper field)
A + X (first lower field)
A + 2X (second upper field)
A + 3X (second lower field)

. . . and so on.

Now imagine if you played back this sequence lower field first instead of upper field first. You would have:

A + X (first lower field)
A (first upper field)
A + 3X (second lower field)
A + 2X (second upper field)

. . . and so on.

As you can see, you MUST play the video back in the same sequence it was shot or the motion will be very jittery.
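PeterHF's sequence can be simulated in a few lines of Python (a sketch; the field list and the `play` helper are illustrative names, not any real API):

```python
X = 1  # distance the object moves per 1/60 second
# Six fields captured upper field first: positions 0, X, 2X, ...
fields = [("upper" if i % 2 == 0 else "lower", i * X) for i in range(6)]

def play(fields, lower_first):
    """Return positions in display order, swapping each upper/lower
    pair within a frame when played lower field first."""
    out = []
    for i in range(0, len(fields), 2):
        pair = fields[i:i + 2]
        if lower_first:
            pair = pair[::-1]
        out.extend(pos for _, pos in pair)
    return out

correct = play(fields, lower_first=False)  # [0, 1, 2, 3, 4, 5] -- smooth motion
wrong = play(fields, lower_first=True)     # [1, 0, 3, 2, 5, 4] -- back-and-forth jitter
```

The reversed playback makes the object step backwards within every frame, which is exactly the jitter described above.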

To my knowledge, there is no way to change the field order of the original video without losing quality.

It is also important to realize that there is no complete image of the object at any given point in time, so de-interlacing the image will necessarily reduce its quality. Mind you, while smart de-interlacing schemes can do a pretty good job of filling in the missing lines, you cannot get a perfect conversion.
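One naive filling-in scheme is line interpolation: keep one field and average the lines above and below to recreate the missing lines. A sketch in plain Python (real editors use smarter, often motion-adaptive, schemes; `deinterlace` is an illustrative name):

```python
def deinterlace(field):
    """Rebuild a full frame from one field by averaging adjacent kept lines.
    field: list of scanlines, each a list of pixel values."""
    frame = []
    for i, line in enumerate(field):
        frame.append(line)
        nxt = field[i + 1] if i + 1 < len(field) else line  # repeat last line at the edge
        frame.append([(a + b) / 2 for a, b in zip(line, nxt)])
    return frame

field = [[0, 0], [10, 10], [20, 20]]  # three kept scanlines
full = deinterlace(field)
# The interpolated lines are guesses -- the detail that was in the
# discarded field is gone for good, which is why no conversion is perfect.
```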
maddrummer3301 wrote:
It's been a while since I took this stuff in university but here's what I remember about interlaced video.

The whole idea of interlaced video is to handle motion as well as possible, given that the video stream is limited to 'painting' the entire screen 30 times a second. At that rate, motion is quite jittery. So they came up with the idea of refreshing the even lines and then the odd lines, 60 times a second. You only refresh half the screen on each pass, but the overall effect is that motion looks better this way, because of the combination of a certain amount of persistence on the screen and what your eye and brain do with the information.
Hmmm... Can't agree, and would have failed my exams with that answer.
Movies are films/frame-based pictures. Any film in a movie house I ever watched was nice and smooth. Most DVDs aren't interlaced; they are progressive, because they came from film.

Actually, I was taught that when TV was first developed it was Black & White, Frame Based, hardwired throughout the studio at 30 frames per second.
There was a problem... the engineers asked: how can we transmit this video information across the air waves within the frequency bands and limited bandwidth that were set up?
Answer: double the vertical scanning rate and send the video information/frame in 2 parts.
Part 1 = First Field
Part 2 = Second Field
Worked.
When color needed to be added to the transmission, the chrominance sub-carrier signal needed to fit exactly into this without affecting the sound. Hence 29.97 frames per second was implemented via a formula (the same approach was used to determine the chrominance sub-carrier frequency in PAL).
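The arithmetic behind 29.97 can be sketched as follows (rounded figures from the published NTSC-M numbers; this is broadcast background, nothing specific to VideoStudio):

```python
sound_carrier = 4_500_000         # Hz, the fixed sound inter-carrier
line_rate = sound_carrier / 286   # 4.5 MHz defined as 286x the line rate: ~15734.27 Hz
frame_rate = line_rate / 525      # 525 lines per frame -> ~29.97 fps, not 30
subcarrier = line_rate * 455 / 2  # chrominance sub-carrier: ~3.579545 MHz

# frame_rate works out to exactly 30 * 1000/1001, the familiar NTSC
# drop from the original black-and-white 30 fps.
```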

It's important to keep your fielding the same as the source video.
Pictures alone, rendered to an MPEG-2 file, are best rendered frame-based. If pictures or a slideshow are mixed with video that is interlaced, then of course the pictures will have to match the fielding of the video.
