A GPU (Graphics Processing Unit) is a processor dedicated to graphics processing, offloading work from the central processor in applications such as video games and interactive 3D applications.
What is the GPU?
Most graphics-related work is processed on the GPU, while the CPU remains free for other types of computation.
The GPU implements certain graphics operations, called primitives, in hardware optimized for graphics processing. One of the most common operations in 3D graphics is anti-aliasing, which smooths the edges of shapes to give them a more realistic look.
Additionally, there are primitives for drawing rectangles, triangles, circles, and arcs. Modern GPUs also offer primitives for increasingly realistic effects.
History
Modern GPUs are descendants of the monolithic graphics chips of the late 1970s and 1980s. These chips had limited BitBLT support, in the form of sprites, and generally lacked support for drawing shapes.
Some GPUs could execute several operations from a display list and use DMA to reduce the load on the host processor. An early example was Atari's ANTIC coprocessor, used in the Atari 800 and Atari 5200.
In the late 1980s and early 1990s, high-speed general-purpose microprocessors were very popular for implementing the most advanced GPUs.
Many graphics cards for PCs and workstations used digital signal processors (DSPs), such as Texas Instruments' TMS340 series, to implement fast drawing functions. Many laser printers shipped with a PostScript raster image processor running on a RISC processor, such as the AMD 29000.
As semiconductor process technology improved, it became possible to move drawing functions and BitBLTs onto the same board, and later onto the same chip, as a frame buffer controller such as the VGA.
These stripped-down 2D graphics accelerators were not as flexible as microprocessor-based ones, but they were much easier to manufacture and sell. The Commodore Amiga was the first mass-produced computer to include a blitter unit, and the IBM 8514 graphics system was one of the first PC video cards to implement 2D primitives in hardware.
What’s the Difference Between CPU and GPU?
While it is not possible to replace the CPU with a GPU in a mainstream computer, GPUs today are very powerful and can even exceed the clock frequency of an older CPU.
The power of GPUs and the dramatic speed gains of recent designs come down to two factors. The first is the high specialization of GPUs: because they are designed to perform a single task, more silicon can be devoted in the design to accomplishing that task efficiently. For example, current GPUs are optimized for floating-point computation, which dominates 3D graphics.
The second factor is that many graphics applications exhibit a high degree of natural parallelism, since the basic computational units are entirely independent.
Therefore, brute-force parallelism, completing as many computations as possible simultaneously, is an excellent strategy for GPUs. Current GPU models typically have half a dozen vertex processors and two to three times as many fragment (pixel) processors. In this way, a clock frequency of around 600-800 MHz, very low compared to what CPUs offer, translates into much greater computational power thanks to the parallel architecture.
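To make this parallelism concrete, here is a minimal sketch of a GLSL fragment shader (GLSL is covered in the Programming section below). The GPU runs one instance of this program for every pixel, and since no instance depends on another, the fragment processors can execute thousands of them in parallel. The varying names are illustrative and assumed to be written by a matching vertex shader.

```glsl
// Sketch: per-pixel data parallelism in a GLSL fragment shader.
// One instance of main() runs for each pixel covered by a primitive;
// instances are independent, so the GPU executes them in parallel.
varying vec3 normal;    // assumed: interpolated from a vertex shader
varying vec3 lightDir;  // assumed: interpolated from a vertex shader

void main()
{
    // Simple diffuse lighting, computed independently for every pixel.
    float diffuse = max(dot(normalize(normal), normalize(lightDir)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}
```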
One of the most significant differences between the two is the architecture. Unlike the CPU, which follows the Von Neumann architecture, the GPU is based on a stream-processing model. This model facilitates parallel processing, as well as the deep pipelining the GPU applies to its tasks.
What is GPU Architecture?
A GPU is heavily pipelined, which means it contains a large number of functional units. These functional units can basically be divided into two groups: those that process vertices and those that process pixels. The vertex and the pixel are therefore the primary units the GPU processes.
In addition, and very importantly, there is the memory. It stands out for its speed and plays a key role in storing the intermediate results of operations and the textures that are used.
Initially, the GPU receives information from the CPU in the form of vertices. These vertices are first processed in the vertex shader, where transformations such as rotating or moving the shapes are performed. After that, the portion of these vertices that will be visible is determined (clipping), and the vertices are converted into pixels by the rasterization process. These stages do not place a significant load on the GPU.
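As a hedged sketch of this stage, a minimal GLSL vertex shader might apply a rotation to each incoming vertex before projection; the modelRotation uniform is an illustrative name for a matrix assumed to be supplied by the application.

```glsl
// Sketch: the vertex-shader stage, transforming each incoming vertex.
// Clipping and rasterization happen in the fixed-function stages after it.
uniform mat4 modelRotation;  // assumed: rotation set by the application

void main()
{
    // Rotate the vertex, then apply the standard model-view and
    // projection transforms to produce the clip-space position.
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix
                * modelRotation * gl_Vertex;
}
```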
The next step, the pixel shader, is the main bottleneck of the graphics chip. Here, per-pixel transformations such as applying textures are performed. When all this is done, and before the pixels are stored in the cache, some effects such as anti-aliasing, blending, and fog are applied.
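A hedged sketch of this stage in GLSL: a minimal fragment (pixel) shader that applies a texture by sampling it at the coordinates interpolated by the rasterizer. The diffuseMap sampler name is illustrative; the texture is assumed to be bound by the application.

```glsl
// Sketch: the pixel-shader stage, applying a texture to each pixel.
// Effects such as fog and blending are applied after this stage.
uniform sampler2D diffuseMap;  // assumed: texture bound by the application

void main()
{
    // gl_TexCoord[0] holds texture coordinates interpolated across
    // the primitive by the rasterizer.
    gl_FragColor = texture2D(diffuseMap, gl_TexCoord[0].st);
}
```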
Other functional units, called ROPs, take the information stored in the cache and prepare the pixels for display; they can also apply some effects. After that, the output is stored in the frame buffer.
From there, there are two options: the pixels are either sent directly to a digital monitor, or an analog signal is generated from them for analog monitors. In the second case, they must pass through a DAC (Digital-to-Analog Converter) before finally being displayed on the screen.
Programming
Initially, GPU programming was done through calls to BIOS interrupt services. After that, GPUs were programmed in assembly languages specific to each model.
Later, another level was added between hardware and software: APIs (Application Programming Interfaces) that provided a more homogeneous language across the models on the market. The first widely used API was OpenGL (Open Graphics Library), and Microsoft subsequently developed DirectX.
After the development of APIs, the next step was a language closer to the programmer, that is, a high-level language for graphics. From these proposals, high-level shading languages emerged for both OpenGL and DirectX. The standard high-level language associated with the OpenGL library is the OpenGL Shading Language (GLSL), which is in principle implemented by all manufacturers.
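To give a flavor of GLSL's C-like, high-level style compared with model-specific assembly, here is a complete, minimal fragment shader; it is only a sketch, painting every pixel a solid color.

```glsl
// A complete, minimal GLSL fragment shader: it colors every pixel red.
// Unlike model-specific assembly, the same source is portable across
// vendors that implement the GLSL standard.
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  // opaque red (RGBA)
}
```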
The Californian company NVIDIA created a proprietary language called Cg, which achieved better results than GLSL in efficiency tests. Microsoft, in collaboration with NVIDIA, developed the High-Level Shading Language (HLSL), which is almost identical to Cg but has some minor incompatibilities.