Video Lines – Introduction

There are no moving pictures in television, just individual frames played back fast enough to give the perception of motion. The post Video Frames Introduction demonstrated how video is divided into individual frames and fields, and which frequencies are used; in this post we will look at how and why fields are split into lines.

Two receptors dominate our vision: rods and cones. Each eye contains roughly six million cones and well over one hundred million rods.

Cones are clustered around the center of the retina, have good acuity and can see color. Rods, far greater in number and dispersed around the periphery, are more sensitive to low-level light but cannot distinguish color.

Cones are of greater interest to us: viewers normally look straight at the color viewing screen and are therefore using the center of the retina, which excites the cones and allows us to see greater detail and color.

Rods cannot be ignored: they can detect flicker if the scan rate of the television or monitor is not high enough, hence the use of interlace, which doubles the rate at which the screen is refreshed without increasing the frame rate or the bandwidth needed.
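As a hypothetical illustration (the figures below are assumed, not taken from this post), the relationship between the frame rate and the flicker rate the rods actually perceive can be sketched like this:

```python
# Interlacing splits each frame into two fields of alternating lines,
# so the screen is refreshed twice per frame. The flicker the rods
# detect is governed by the field rate, not the frame rate.
frame_rate_hz = 25       # e.g. the European frame rate (assumed for illustration)
fields_per_frame = 2     # odd lines first, then even lines
field_rate_hz = frame_rate_hz * fields_per_frame

print(f"Frame rate: {frame_rate_hz} Hz, flicker (field) rate: {field_rate_hz} Hz")
```

At 25 frames per second the eye is thus presented with 50 refreshes per second, enough to push flicker above the threshold most viewers notice.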

Tests have shown that, when standing 20 feet (about 6.1 meters) from a chart, the average human eye can just resolve two lines drawn 1/16 inch (about 1.6 mm) apart. This varies enormously from person to person due to the variance in our eyesight, but the value has been found to be a good average.
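The acuity figure above can be checked with a little arithmetic. A minimal sketch, using only the distances quoted in the text:

```python
import math

# Visual angle subtended by two lines 1/16 inch apart viewed from 20 feet.
line_spacing_m = (1 / 16) * 0.0254   # 1/16 inch in meters (~0.0016 m)
viewing_distance_m = 20 * 0.3048     # 20 feet in meters (~6.1 m)

angle_rad = math.atan(line_spacing_m / viewing_distance_m)
angle_arcmin = math.degrees(angle_rad) * 60

print(f"Visual angle: {angle_arcmin:.2f} arc minutes")  # about 0.90
```

The result, roughly 0.9 arc minutes, agrees well with the one arc minute usually quoted as the resolving limit of normal (20/20) vision.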

Before digital monitors and televisions, cathode ray tubes were used to display images. They were heavy, cumbersome, glass devices with magnetic coils around them used to deflect the electron beam to trace out horizontal lines. Each line was scanned below the previous one, building up the image as the scan moved down the screen.

Early broadcasts used 50, 100 or 405 lines, but the standard definition of the 1950s and 1960s provided 525 lines for the USA and 625 for the UK and Europe.

Modern LCD, plasma and LED screens use a matrix pattern of pixels, but these are still based on the horizontal line method developed for CRTs.

Cameras operate in an analogous way to televisions, but in reverse. During the analogue CRT era, cameras also used tubes, but with a light-sensitive faceplate. An electron beam scanned across the inside of the faceplate, producing a current proportional to the brightness falling on each point; over a frame this provided the video signal. Electromagnetic coils around the tube deflected the beam, and a picture of 525 or 625 lines was created.

Modern cameras using CMOS and CCD image sensors use the same matrix pattern as LCD screens, and are likewise based on the line system of the tube cameras.

Increasing the number of lines increases resolution only up to a point: beyond it, the eye can no longer resolve the extra detail. However, if we increase the physical size of the television, or move closer to it, the increased resolution becomes visible again. The same principle applies to modern LCD, plasma and LED screens.
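This trade-off between line count, screen size and viewing distance can be estimated from the acuity limit discussed earlier. A sketch under stated assumptions: the screen height and line counts below are illustrative, not from the post, and the one-arc-minute figure is the commonly quoted acuity limit.

```python
import math

# ~1 arc minute: the angular limit at which the average eye can
# separate two adjacent lines.
ACUITY_RAD = math.radians(1 / 60)

def max_resolvable_distance(screen_height_m, lines):
    """Distance beyond which adjacent lines blur into one for the viewer."""
    line_pitch_m = screen_height_m / lines   # spacing between adjacent lines
    # Small-angle approximation: angle = pitch / distance
    return line_pitch_m / ACUITY_RAD

height = 0.75  # meters, roughly the height of a 60-inch 16:9 screen (assumed)
for lines in (576, 1080, 2160):
    d = max_resolvable_distance(height, lines)
    print(f"{lines:>4} lines: detail resolvable out to about {d:.1f} m")
```

On this assumed screen, doubling the line count halves the distance at which the extra detail is still visible, which is why higher resolutions reward either bigger screens or closer seating.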

As technology moves to UHD-4K and 8K, we will need bigger screens, and to take full advantage of their resolution we will have to sit closer to them in our homes. But not too close, as the brightness at very short range can strain the eyes.