The graphics card, or video card, uses dedicated support processors to handle image data as quickly and efficiently as possible, along with memory chips that temporarily store the images.
What is a Video Card?
An early example is the fixed-function graphics accelerator chip family from S3: the 86C801, 86C805, 86C924, and 86C928 chips were used in most of the accelerated graphics adapters built to speed up Microsoft Windows video response.
Modern video cards can offer additional features such as TV signal output, light pen connectors, video capture, and decoding of various video formats.
A video card usually has two main characteristics: the image resolution and the number of colors it can display simultaneously. Together these determine whether a user can enjoy certain video games or run software that demands a lot of graphics capacity, such as design programs.
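The link between these two characteristics and memory can be made concrete: the number of simultaneous colors fixes the bits needed per pixel, and together with the resolution that fixes how much video memory one frame occupies. A minimal sketch (the figures are illustrative, not taken from any specific card):

```python
import math

def framebuffer_bytes(width, height, colors):
    """Video memory needed to hold one frame at the given
    resolution and simultaneous color count."""
    bits_per_pixel = math.ceil(math.log2(colors))  # e.g. 256 colors -> 8 bits
    return width * height * bits_per_pixel // 8

# 640x480 with 256 simultaneous colors occupies 300 KB per frame.
print(framebuffer_bytes(640, 480, 256) // 1024)  # 300
```

This is why early cards with a few kilobytes of memory were limited to text or low color counts.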
The history of graphics cards begins in the late 1960s, when computers switched from printers to monitors as their display device.
The first cards could only display text at 40×25 or 80×25 characters, but the arrival of the first display controller chips, such as the Motorola 6845, made it possible to offer S-100 and Eurocard bus systems with graphics capabilities. When a television modulator was added to these cards, the term "video card" was used for the first time.
The success of home computers and the first video game consoles meant that, to reduce costs, these chips were integrated directly into the motherboard. Even computers that already shipped with a graphics chip commonly accepted 80-column cards, which added 80×24 or 80×25 character text modes for running CP/M software.
The evolution of graphics cards took a major leap in 1995 with the arrival of the first 2D/3D cards, produced by Matrox, Creative, S3, and ATI. These cards complied with the SVGA standard but added 3D functions. In 1997, 3dfx launched the Voodoo graphics chip, which brought new 3D effects as well as new computing power.
From then on, a series of graphics cards such as the Voodoo2 from 3dfx and the TNT and TNT2 from NVIDIA followed one another. These cards became so powerful that the PCI port they plugged into fell short on bandwidth, so Intel developed AGP (Accelerated Graphics Port) to relieve the bottleneck between the processor and the card.
From 1999 to 2002, NVIDIA dominated the video card market with its GeForce product line. In this period, development focused on 3D algorithms and on the speed of the graphics processors. Memory also had to get faster, so DDR memory was adopted on graphics cards, and video memory capacities grew from 32 MB on the GeForce to 64 and even 128 MB on the GeForce 4.
Graphics Card Types
The Monochrome Display Adapter (MDA) was released by IBM with 4 KB of memory. Intended for TTL monitors, it had no graphics modes; its only resolution was the 80×25 text mode, drawn with a 9×14 dot character cell, and it offered no configuration options.
Essentially, the video controller on this card reads the dot matrix of each character to be displayed from ROM and sends it to the monitor as serial information. The lack of graphics rendering should not be surprising, as there were no applications on these early PCs that could really benefit from a good video system; in practice, everything was limited to text mode.
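The scan-out described above can be sketched in miniature. The 8×8 "T" glyph below is a made-up stand-in for a real character ROM (the MDA actually used a 9×14 cell); each byte encodes one row of the character's dot matrix, one bit per dot:

```python
# Hypothetical font ROM: each byte is one row of dots for a glyph.
FONT_ROM = {
    "T": [0b11111110, 0b00010000, 0b00010000, 0b00010000,
          0b00010000, 0b00010000, 0b00010000, 0b00000000],
}

def render(char):
    """Expand a glyph's ROM bytes into rows of on/off dots."""
    return ["".join("#" if byte & (0x80 >> i) else "." for i in range(8))
            for byte in FONT_ROM[char]]

for row in render("T"):
    print(row)
```

On the real card this expansion happens in hardware, one scan line at a time, as the bits are shifted out serially to the monitor.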
This type of card is easy to identify because it also carries a printer port.
The Color Graphics Adapter (CGA), sometimes also expanded as "Color Graphics Array" depending on the text, was released by IBM in 1981 and became very common. It supported 8×8 dot character cells on 80-column, 25-line screens, but used only 7×7 of those dots to draw each character. This detail made it impossible to render underlines, so the card replaced them with different shading within the character. In graphics mode it allowed resolutions up to 640×200. It carried 16 KB of memory and was compatible only with RGB and composite monitors.
Although superior to the MDA, many users preferred the latter, since the dot pitch of CGA monitors was larger and text therefore less sharp. In color, the card could reach 16 colors, formed from 8 base colors each at two intensity levels, although not at every resolution.
This card had a fairly common fault known as "snow": random bright, flickering dots appeared on the screen and distorted the image, so much so that some BIOSes of the era included a "no snow" option.
The Hercules Graphics Card, popularly known simply as Hercules, reached the market in 1982 and became a highly successful standard even though it did not support IBM's BIOS routines. Its resolution was 720×348 monochrome dots with 64 KB of memory. Since there is no color, memory only has to record whether each point on the screen is lit, using 30.58 KB for the graphics mode and leaving the rest for text mode and other functions. The screen was refreshed at 50 Hz, managed by the 6845 video controller, and characters were drawn in 9×14 dot matrices.
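The 30.58 KB figure follows directly from the resolution: one bit per monochrome pixel.

```python
# One bit per pixel at 720x348 -> the graphics page uses about 30.6 KB
# of the card's 64 KB, matching the figure quoted above.
width, height = 720, 348
graphics_bytes = width * height // 8
print(graphics_bytes, round(graphics_bytes / 1024, 2))  # 31320 30.59
```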
The GPU, which stands for graphics processing unit, is a processor dedicated to graphics processing. Its purpose is to relieve the workload of the central processor, so it is optimized for the floating-point computation that 3D functions require. Most of the information in a graphics card's technical specifications describes the GPU, as it is the most important part of the card.
Three of its most important characteristics are the core clock frequency, which ranges from about 500 MHz on low-end cards to 850 MHz on high-end ones, the number of shader processors, and the number of pipelines, through which a 3D scene made of vertices and lines is turned into a 2D image made of pixels.
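The vertex-to-pixel step can be illustrated with a toy pinhole projection. Real GPUs perform this with 4×4 transformation matrices across many pipelines in parallel, so this is a simplified sketch, not how any particular card works:

```python
def project(x, y, z, focal=1.0, width=640, height=480):
    """Map a 3D point to 2D pixel coordinates with a pinhole projection.
    The perspective divide by z makes distant points crowd the center."""
    sx = (x * focal / z + 1) * width / 2
    sy = (1 - y * focal / z) * height / 2
    return int(sx), int(sy)

# A point straight ahead of the camera lands at the screen center.
print(project(0.0, 0.0, 5.0))  # (320, 240)
```

The GPU repeats this kind of floating-point arithmetic for every vertex of every frame, which is why it is built for exactly that workload.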
Graphics RAM Memory
Depending on whether the graphics chip is integrated into the motherboard, it either uses the computer's own RAM or has dedicated memory of its own. This memory is called video memory, or VRAM. The memory in use around 2010 was based on DDR technology: GDDR2, GDDR3, GDDR4, and GDDR5, of which GDDR2, GDDR3, and GDDR5 were the most widely used. Memory clock frequencies ranged from 400 MHz to 4.5 GHz.
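Clock frequency alone does not tell the whole story: bandwidth also depends on the width of the memory bus. A sketch of the standard calculation, applied to a hypothetical GDDR5 card (the example figures are assumptions, not a specific product):

```python
def bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    """Peak memory bandwidth: effective (data-rate) clock x bus width."""
    return effective_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9

# Hypothetical GDDR5 card: 4.0 GHz effective clock on a 256-bit bus.
print(bandwidth_gb_s(4000, 256))  # 128.0 GB/s
```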
The RAMDAC is responsible for converting the computer-generated digital signal into an analog signal the monitor can interpret. Depending on the number of bits it processes at once and the speed at which it does so, the converter can support different monitor refresh rates. With the growing popularity of digital monitors, the RAMDAC is becoming obsolete, since no analog conversion is needed for them, although many cards retain a VGA connector for compatibility.
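The relation between resolution, refresh rate, and the converter's speed can be estimated: the RAMDAC must produce one analog sample per pixel, plus overhead for the blanking intervals between lines and frames. The 25% blanking overhead below is an assumed round figure, not a standard value:

```python
def required_pixel_clock_mhz(width, height, refresh_hz, blanking=1.25):
    """Pixel clock the RAMDAC must sustain for a given display mode;
    the 25% blanking overhead is a rough assumption."""
    return width * height * refresh_hz * blanking / 1e6

# 1024x768 at 85 Hz needs roughly an 84 MHz pixel clock under this assumption.
print(round(required_pixel_clock_mhz(1024, 768, 85)))  # 84
```

This is why cards with faster RAMDACs could offer higher refresh rates at a given resolution.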
Ports on the Motherboard
In chronological order, the connection systems between the video card and the motherboard are basically:
MSX Slot: 8-bit bus used in MSX equipment.
ISA: 16-bit, 8 MHz bus architecture, dominant during the 1980s; it was created in 1981 for the IBM PC.
Zorro II: used in the Commodore Amiga 2000 and Commodore Amiga 1500.
Zorro III: used in the Commodore Amiga 3000 and Commodore Amiga 4000.
NuBus: Used on Apple Macintosh.
Processor Direct Slot: Used on Apple Macintosh.
MCA: IBM's 1987 attempt to replace ISA; 32-bit and 10 MHz, but not compatible with its predecessors.
EISA: the 1988 response of IBM's competitors; 32-bit, 8.33 MHz, and backward compatible with ISA.
VESA: an extension of ISA that removed the 16-bit restriction, doubling the width of the bus, at a speed of 33 MHz.
PCI: the bus that displaced the previous ones from 1993; 32 bits wide and running at 33 MHz, it allowed dynamic configuration of connected devices without manually setting jumpers. PCI-X was a version that widened the bus to 64 bits and raised its speed to 133 MHz.
AGP: a dedicated bus, 32-bit like PCI; the first version, in 1997, raised the speed to 66 MHz.
PCIe: a serial interface that began competing with AGP in 2004 and reached twice its bandwidth in 2006. It should not be confused with PCI-X, which is a parallel variant of PCI.
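The peak theoretical bandwidth of the parallel buses above follows from clock × width. These are raw peaks; real throughput is lower because of protocol overhead:

```python
# Raw peak bandwidth: clock (MHz) x bus width (bits) / 8 = MB/s.
buses = {
    "ISA (16-bit, 8 MHz)":     (8, 16),
    "MCA (32-bit, 10 MHz)":    (10, 32),
    "VESA (32-bit, 33 MHz)":   (33, 32),
    "PCI (32-bit, 33 MHz)":    (33, 32),
    "AGP 1x (32-bit, 66 MHz)": (66, 32),
}
for name, (mhz, bits) in buses.items():
    print(f"{name}: {mhz * bits // 8} MB/s")
```

The jump from ISA's 16 MB/s to PCI's 132 MB/s, and the later move to AGP and serial PCIe, mirrors the bandwidth pressure described earlier in the article.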
The most common monitor connection systems are:
DA-15 RGB Connector: Mostly used on Apple Macintosh.
Digital TTL DE-9: used by the MDA, CGA, and EGA cards.
SVGA/D-sub 15: the analog standard since the 1990s. Designed for CRT devices, it suffers from electrical noise, from the digital-to-analog conversion, and from sampling error when the pixels to be sent to the monitor are evaluated.
DVI: designed to achieve maximum image quality on digital displays such as LCD monitors and projectors. By directly mapping each source pixel to one pixel of the monitor at its native resolution, it avoids distortion and noise.
S-Video: included to support televisions, DVD players, VCRs, and game consoles.
Composite: quite old and comparable to SCART; a very low-resolution analog connection using an RCA connector.
Component: also used for projectors; it carries three signals (Y, Cb, and Cr) with quality comparable to SVGA.
HDMI: a digital technology that carries uncompressed audio and video, protected by encryption, on the same cable.
DisplayPort: a graphics card port created by VESA that rivals HDMI and transmits high-definition video and audio. Its advantages are that it is free of patents, so no royalties are due for including it in devices, and that it has latches that keep the cable from being unplugged accidentally.
Graphics cards reach very high temperatures under heavy workloads. If this is not taken into account, the heat can make the device malfunction, lock up, or even fail permanently. To prevent this, cooling devices are included to carry excess heat away from the card.
Heatsink: a passive device made of heat-conductive material and fixed to the card. Its effectiveness depends on its structure and total surface area, so heatsinks tend to be quite bulky.
Fan: removes heat from the card by moving the nearby air. It is less efficient than a heatsink and, having moving parts, produces noise.
Until now, powering graphics cards has not been a major problem, but the trend in new cards is toward ever higher energy consumption. Although power supplies grow stronger every day, the bottleneck is the PCIe slot, which can deliver only 75 W of power.
For this reason, graphics cards that consume more than the PCIe slot can provide carry a connector that allows a direct link between the power supply and the card, without passing through the motherboard and hence the PCIe slot.
It is even anticipated that before long graphics cards may need a power supply of their own, turning the whole assembly into an external device.
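Under the PCIe electromechanical specification, the slot supplies up to 75 W, a 6-pin auxiliary connector adds 75 W, and an 8-pin connector adds 150 W. A card's nominal power budget can be sketched as:

```python
# Power limits from the PCIe electromechanical specification.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_limit(six_pin=0, eight_pin=0):
    """Nominal power a card may draw from the slot plus its
    auxiliary connectors."""
    return SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

# A card with one 6-pin and one 8-pin connector may draw up to 300 W.
print(board_power_limit(six_pin=1, eight_pin=1))  # 300
```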
Difference Between the GPU and the Graphics Card
It is important not to confuse the GPU with the graphics card, since not all GPUs end up on graphics cards.
Think of the graphics card as a complete assembly developed specifically for, and compatible with, the PC. GPUs, by contrast, are also used outside computers, embedded in game consoles or integrated into processors.
In addition, do not confuse the GPU manufacturer with the brands that assemble and market the cards. For example, the largest graphics chip manufacturers on the market right now are NVIDIA and AMD, but they are responsible only for making the graphics chips (GPUs).