Video is the technology of capturing, recording, processing, storing, transmitting, and reconstructing a series of images that represent scenes in motion, using digital or analog electronics.
Video History and Development
Etymologically, the word comes from the Latin verb video, vides, videre, meaning "to see". The term usually applies to the video signal, which is often called simply "video" as shorthand for its full name. Video technology was originally developed for television systems but has since evolved into many formats that allow consumers to record video and watch it over the Internet.
In some countries, the term also refers to image-and-sound recordings on tape or on laserdisc and DVD; with the emergence of these newer media, "video" often came to mean VHS, and earlier Betamax, tape recordings.
Initially, the video signal consists of a number of lines grouped into frames, each frame in turn divided into two fields that carry the luminance and color information of the image. How the lines, frame count, and color information are carried depends on the particular television standard.
The signal amplitude is 1 Vpp (one volt peak to peak), with the image information carried above 0 V and the synchronization pulses below 0 V. The positive portion reaches 0.7 V at white level, black corresponds to 0 V, and the synchronization pulses drop to -0.3 V. Today there are many different standards, especially in the computing field.
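As a quick sketch of the voltage levels just described, the following snippet (with illustrative function names, not from any standard API) maps a normalized luminance value onto the 1 Vpp composite signal:

```python
# Approximate mapping of picture levels to composite-video voltage, based on
# the levels described above: 0.7 V for white, 0 V for black, -0.3 V sync tip.

def luma_to_voltage(luma):
    """Map a normalized luminance value (0.0 = black, 1.0 = white)
    to its approximate composite-signal voltage."""
    return 0.7 * luma

SYNC_TIP_V = -0.3                      # synchronization pulses reach -0.3 V
WHITE_V = luma_to_voltage(1.0)         # 0.7 V
BLACK_V = luma_to_voltage(0.0)         # 0.0 V

# Total peak-to-peak amplitude of the signal:
print(round(WHITE_V - SYNC_TIP_V, 3))  # 1.0 (Vpp)
```

The 0.7 V / -0.3 V split is why the image content and the synchronization never overlap in voltage.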
Parts of the Analog Signal
The signal consists of three parts: luminance, chrominance (color), and synchronization. Its amplitude ranges from 0.7 V at white level down to -0.3 V at the synchronization level.
The composite signal is the luminance signal with its synchronization pulses, onto which the chrominance signal, with its own synchronization reference (the color burst), is added in such a way that the chrominance rides on top of the luminance.
The bandwidth of the luminance signal is usually about 5 MHz, depending on the system used. The chrominance is a quadrature-modulated signal.
Its carrier is called the color subcarrier and sits at a frequency near the top of the band: 4.43 MHz in PAL. This frequency is related to the other fundamental frequencies of the signal, which for historical reasons derive from the mains power frequency: the field frequency is based on 50 Hz in Europe and 60 Hz in much of America.
The image consists of luminance and color; the luminance alone defines a black-and-white image, and this part of the signal is called the luminance component.
There are different standards for color coding: NTSC, PAL, and SECAM.
Regarding synchronization, three kinds are distinguished: line (horizontal), field (vertical), and color.
Line synchronization marks where each line of the image starts and ends; it is divided into the front porch, back porch, and sync pulse.
Vertical synchronization marks the beginning and end of each field. It consists of front equalizing pulses, synchronization pulses, rear equalizing pulses, and guard lines.
The frequency of the synchronization pulses depends on the television system: 525 lines per frame in America and 625 lines per frame in Europe. These figures derive from the mains frequency, to which the oscillators of the receivers were originally locked.
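The horizontal (line-scan) frequency follows directly from these figures, as a small arithmetic check shows:

```python
# Deriving the line-scan frequency from the figures above:
# lines per frame * frames per second = lines scanned per second.

def line_frequency(lines_per_frame, frames_per_second):
    return lines_per_frame * frames_per_second

# 625-line, 25 fps systems (PAL, SECAM):
print(line_frequency(625, 25))                    # 15625 Hz
# 525-line NTSC uses 30000/1001 ≈ 29.97 fps:
print(round(line_frequency(525, 30000 / 1001)))   # 15734 Hz
```

These are the familiar 15.625 kHz and 15.734 kHz horizontal rates of the two families of systems.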
Regarding color, in all standards a carrier is modulated with the color information. In NTSC and PAL this is amplitude modulation for saturation and phase modulation for hue, a combination known as quadrature modulation.
The PAL system alternates the carrier phase by 180° on each line to compensate for transmission distortions. The SECAM system modulates each color-difference component on alternate lines.
The term video generally covers a variety of formats: digital formats such as DVD, QuickTime, DVC, and MPEG-4, and analog cassettes such as VHS and Betamax.
Cameras can record analog signals such as PAL, SECAM, or NTSC, or digital formats such as MPEG-4 or DVD (MPEG-2), and the result can be stored on and transmitted across various physical media, including tape.
Video quality depends mainly on the capture and storage method used. Digital television (DTV) is a relatively recent format with higher quality than earlier television formats and has become the standard for broadcast television.
3D video, that is, three-dimensional digital video, appeared at the end of the 20th century. Six to eight cameras with real-time depth measurement are typically used to capture 3D sequences. The 3D format is specified in MPEG-4 Part 16, the Animation Framework eXtension (AFX).
In the UK, Australia, the Netherlands, and New Zealand, the term "video" is often used informally to refer both to video recorders and to the tapes they play; the intended meaning is generally clear from context.
Images per Second
Frame rate: the number of images captured per unit of time ranges from six to eight frames per second (fps) for old mechanical cameras to 120 fps or more for modern professional cameras.
The PAL and SECAM standards specify 25 fps; NTSC specifies 29.97 fps. Cinema is slower at 24 fps, which complicates transferring film to video. The minimum frame rate needed to achieve the illusion of a moving image is about fifteen frames per second.
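One common consequence of the 24 fps versus 25 fps mismatch is that film transferred to PAL is simply played slightly fast. The snippet below estimates the speed-up and the shortened running time (the two-hour example is made up for illustration):

```python
# Film shot at 24 fps is commonly sped up to 25 fps for PAL transfer.

FILM_FPS = 24
PAL_FPS = 25

speedup = PAL_FPS / FILM_FPS            # each second of film plays ~4% faster
print(round((speedup - 1) * 100, 2))    # 4.17 (% faster)

runtime_min = 120                       # a hypothetical two-hour film
print(round(runtime_min / speedup, 1))  # 115.2 (minutes on PAL)
```

NTSC transfers instead use a pulldown pattern (repeating fields) to fit 24 fps into 29.97 fps without changing the playback speed.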
Interlaced scanning evolved to prevent the flicker that appears when an image is reproduced on a picture tube, caused by the limited persistence of the light-emitting phosphors that make up the screen.
2:1 interlaced scanning, characteristic of the PAL, NTSC, and SECAM television systems, analyzes each frame of the image as two equal half-frames, called fields, whose resulting lines interleave alternately.
One field contains the even-numbered lines (the even field) and the other the odd-numbered lines (the odd field), each preceded by its vertical synchronization. There is a half-line offset between one field and the other, so that the lines of one field scan the parts of the image left unscanned by the other.
Interlacing a frame into two fields requires that the number of lines per frame be odd, so that the line at the transition from one field to the other can be split into two halves.
Abbreviated resolution specifications usually include an "i" to indicate an interlaced image. For example, the PAL format is usually specified as 576i50, where 576 indicates the number of lines of vertical resolution, "i" indicates interlacing, and 50 indicates 50 fields (half-images) per second.
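This shorthand is regular enough to parse mechanically. A small sketch (the helper name is illustrative, not a standard API):

```python
import re

# Parse shorthand such as "576i50" or "1080p60" into its parts:
# line count, scan type (i = interlaced, p = progressive), and rate.

def parse_video_mode(mode):
    m = re.fullmatch(r"(\d+)([ip])(\d+(?:\.\d+)?)", mode)
    if not m:
        raise ValueError(f"unrecognized mode: {mode}")
    lines, scan, rate = m.groups()
    return {
        "lines": int(lines),
        "scan": "interlaced" if scan == "i" else "progressive",
        "rate": float(rate),  # fields/s if interlaced, frames/s if progressive
    }

print(parse_video_mode("576i50"))
# {'lines': 576, 'scan': 'interlaced', 'rate': 50.0}
```

Note that for an interlaced mode the trailing number counts fields, so 576i50 delivers only 25 full frames per second.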
In progressive scan systems, each refresh updates all of the scan lines.
The development of display technologies other than the picture tube, such as TFT and plasma screens, enabled progressive-scan television systems.
A procedure known as deinterlacing can be used to convert interlaced streams, such as analog, DVD, or satellite video, for display on progressive-scan devices such as TFT televisions, projectors, and plasma panels.
The size of an image is measured in pixels for digital video, or in horizontal and vertical scan lines for analog video. In the digital domain (e.g. DVD), standard-definition television (SDTV) is specified as 720/704/640×480i60 for NTSC and 768/720×576i50 for PAL or SECAM resolution.
In the analog domain, however, the number of active scan lines remains constant while the effective number of pixels per line varies with signal quality: roughly 320 pixels per line for VCR quality, 400 for broadcast television, and 720 for DVD.
The aspect ratio is preserved even though the pixels are not square.
The newer high-definition televisions (HDTV) support resolutions up to 1920×1080p60: 1920 pixels per line by 1080 lines, at up to 60 frames per second.
3D video resolution is measured in voxels. For example, a resolution of 512×512×512 voxels is used for simple 3D video, which can be displayed even on some PDAs.
The aspect ratio expresses the width of the screen relative to its height. Until standardization of high-definition television began, the standard format had an aspect ratio of 4/3; the ratio adopted for HD is 16/9. Compatibility between the two aspect ratios can be achieved in different ways.
A 4/3 image to be displayed on a 16/9 screen can be presented in three different ways:
- With vertical black bars on both sides. The 4/3 ratio is maintained and no part of the image is lost, but part of the screen's active area goes unused.
- Enlarging the image until it fills the screen horizontally. The top and bottom of the image are cut off.
- Deforming the image to fit the screen format. The entire screen is used and the entire image is shown, but its geometry is distorted.
A 16/9 image to be displayed on a 4/3 screen has three analogous options:
- With horizontal black bars above and below the image. The entire image is visible, but part of the screen area goes unused.
- Enlarging the image to fill the screen vertically, losing the side portions of the image.
- Deforming the image to fit the screen's aspect ratio. The whole image fills the whole screen, but its geometry is distorted.
Color Space and Number of Bits Per Pixel
The color model name describes how video color is represented. The YIQ system was used in NTSC television; it is closely related to the YUV system used in PAL television and the YDbDr system used by SECAM television.
The number of different colors a pixel can represent depends on the number of bits per pixel (bpp). One way to reduce the bits per pixel in digital video is chroma subsampling.
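The savings from chroma subsampling can be tallied directly. Assuming 8 bits per sample, full 4:4:4 sampling costs 24 bpp, while 4:2:0 keeps only one Cb and one Cr sample per 2×2 block of luma samples:

```python
# Average bits per pixel for common chroma subsampling schemes,
# assuming 8 bits per sample.

def bits_per_pixel(scheme, bits_per_sample=8):
    # Cb + Cr samples carried per 4 luma samples:
    chroma_factors = {"4:4:4": 8, "4:2:2": 4, "4:2:0": 2}
    luma = bits_per_sample                                # one luma sample per pixel
    chroma = chroma_factors[scheme] * bits_per_sample / 4 # shared chroma samples
    return luma + chroma

print(bits_per_pixel("4:4:4"))  # 24.0
print(bits_per_pixel("4:2:2"))  # 16.0
print(bits_per_pixel("4:2:0"))  # 12.0
```

So 4:2:0, the scheme used by DVD and most consumer codecs, halves the raw data rate relative to full color sampling with little visible loss, because the eye is less sensitive to color detail than to luminance detail.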
Video quality can be measured objectively with formal metrics such as PSNR, or subjectively through expert observation.
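PSNR (peak signal-to-noise ratio) compares a degraded frame against its reference via the mean squared error. A minimal sketch, treating frames as flat lists of 8-bit pixel values for simplicity:

```python
import math

# Peak signal-to-noise ratio between a reference frame and a degraded one.

def psnr(reference, degraded, max_value=255):
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_value ** 2 / mse)

# Made-up pixel data for illustration:
ref = [52, 55, 61, 59, 79, 61, 76, 61]
bad = [53, 55, 60, 59, 80, 60, 76, 62]
print(round(psnr(ref, bad), 2))  # 50.17 (dB)
```

Higher values mean less distortion; typical lossy video compression lands in the 30-50 dB range.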
The subjective video quality of a video processing system can be evaluated as follows:
- Select the video sequences (SRC) to be used in the test.
- Select the system settings (HRC) to be evaluated.
- Choose a test method for presenting the video clips to the observers and collecting their ratings.
- Invite a sufficient number of observers, preferably no fewer than 15.
- Run the tests.
- Calculate the mean score for each HRC from the observers' ratings.
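The final averaging step above can be sketched in a few lines. The HRC names and ratings below are invented purely for illustration:

```python
from statistics import mean

# Averaging observer ratings into a mean opinion score (MOS) per
# system configuration (HRC). All data here is made up.
# ratings[hrc] -> one score per observer, e.g. on a 5-point impairment
# scale (5 = imperceptible ... 1 = very annoying).
ratings = {
    "hrc_low_bitrate":  [2, 3, 2, 3, 2, 3, 3, 2, 2, 3, 3, 2, 2, 3, 3],
    "hrc_high_bitrate": [4, 5, 4, 4, 5, 5, 4, 4, 5, 4, 4, 5, 4, 4, 5],
}

mos = {hrc: mean(scores) for hrc, scores in ratings.items()}
for hrc, score in mos.items():
    print(hrc, round(score, 2))
```

Comparing the MOS of each HRC against the others (and against the unprocessed reference) is what ultimately ranks the systems under test.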
Many subjective video quality methods are described in the ITU-R BT.500 recommendation. One of these standardized methods is DSIS (Double Stimulus Impairment Scale).
In this method, each observer first watches an unimpaired reference video and then an impaired version of the same video, and rates the impaired video on a scale ranging from "imperceptible" to "very annoying".
A variety of methods are used to compress video sequences. Video data includes temporal, spatial, and spectral redundancy.
In general, spatial redundancy is reduced by recording differences between parts of the same image (frame); this task is known as intra-frame compression and is closely related to image compression.
Similarly, temporal redundancy can be reduced by recording differences between successive images (frames); this task is known as inter-frame compression and includes motion compensation and other techniques. It is used, for example, in satellite broadcast systems and in MPEG-4 for home use.
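A toy illustration of the inter-frame idea: instead of storing the second frame in full, store only its differences from the first. Real codecs add motion compensation and entropy coding on top of this:

```python
# Two tiny one-dimensional "frames" of 8-bit pixel values (made up).
frame1 = [10, 10, 10, 200, 200, 10, 10, 10]
frame2 = [10, 10, 10, 10, 200, 200, 10, 10]  # the bright block moved right

# Encoder: store only the per-pixel differences.
delta = [b - a for a, b in zip(frame1, frame2)]
print(delta)  # [0, 0, 0, -190, 0, 190, 0, 0] -- mostly zeros, cheap to encode

# Decoder: reconstruct frame 2 from frame 1 plus the stored delta.
reconstructed = [a + d for a, d in zip(frame1, delta)]
assert reconstructed == frame2
```

Motion compensation goes one step further: rather than raw differences, it records *where* a block moved, which makes the residual even sparser.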
Bit rate is a measure of the rate of information contained in a stream. It is measured in bits per second (bit/s or bps), often in megabits per second (Mbit/s or Mbps).
A higher bit rate allows better quality. For example, VideoCD, at about 1 Mbit/s, has lower quality than DVD, at up to about 10 Mbit/s. VBR (variable bit rate) is a strategy used to maximize the visual quality of the video while minimizing the bit rate.
With variable bit rate, fast-moving scenes receive more bits than slow-moving ones, achieving consistent visual quality across a stream of the same length. For real-time, unbuffered streaming over a link of fixed bandwidth, CBR (constant bit rate) should be used instead.
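Bit rate also translates directly into storage size, which is easy to sketch (1 Mbit/s = 1,000,000 bits per second; the helper name is illustrative):

```python
# Approximate size of a stream at a given (average or constant) bit rate.

def stream_size_mb(bitrate_mbps, duration_s):
    """Size in megabytes of a stream at the given bit rate and duration."""
    bits = bitrate_mbps * 1_000_000 * duration_s
    return bits / 8 / 1_000_000  # bits -> bytes -> megabytes

# One minute of VideoCD-like video (~1 Mbit/s) vs DVD-like video (~8 Mbit/s):
print(stream_size_mb(1, 60))  # 7.5 (MB)
print(stream_size_mb(8, 60))  # 60.0 (MB)
```

For a VBR stream the same arithmetic applies with the average bit rate in place of the constant one.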
Stereoscopic video requires either two channels (a right channel for the right eye and a left channel for the left eye) or two color-coded overlaid layers.
This left-and-right layer technique is sometimes used for broadcast, and for recent "anaglyph" releases of 3D movies on DVD. Simple red/cyan plastic glasses filter the overlaid images so that each eye sees only its own layer, producing a stereoscopic view of the content.
The new HD DVD and Blu-ray discs were expected to greatly enhance the 3D effect of color-coded stereo programs. The first HD players were expected to reach the market at the April 2006 NAB Show in Las Vegas.