What is a GPU (Graphics Processing Unit)?

A GPU (Graphics Processing Unit) is a processor dedicated specifically to graphics processing, relieving the central processor of that workload in applications such as video games and interactive 3D applications.

While most of the graphics work is rendered on the GPU, the CPU remains free for other types of calculations.

A GPU implements certain graphics operations, called primitives, that are optimized for graphics rendering. One of the most common techniques in 3D rendering is anti-aliasing, which smooths the edges of shapes to give them a more realistic look.
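
To make this concrete, here is a minimal CPU-side sketch of one common anti-aliasing technique, supersampling: each pixel is sampled at several points and the results are averaged, so edge pixels take intermediate gray levels instead of a hard stairstep. The circle shape and the 4x4 sample grid are illustrative choices, not any particular GPU's algorithm.

```cpp
#include <cstdio>

// Anti-aliasing by supersampling: instead of a hard in/out test per pixel,
// take several samples inside the pixel and average them, producing soft
// edges. The test shape here is a circle of radius 6 centred at (8, 8).
bool insideCircle(double x, double y, double cx, double cy, double r) {
    double dx = x - cx, dy = y - cy;
    return dx * dx + dy * dy <= r * r;
}

double pixelCoverage(int px, int py) {
    const int n = 4;  // 4x4 = 16 samples per pixel
    int hits = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double x = px + (i + 0.5) / n;  // sample position inside the pixel
            double y = py + (j + 0.5) / n;
            if (insideCircle(x, y, 8.0, 8.0, 6.0)) ++hits;
        }
    return static_cast<double>(hits) / (n * n);  // fraction covered, 0.0 .. 1.0
}

int main() {
    // Render the circle as ASCII art; denser characters mean more coverage.
    for (int y = 0; y < 16; ++y) {
        for (int x = 0; x < 16; ++x)
            std::printf("%c", " .:-=+*#%"[static_cast<int>(pixelCoverage(x, y) * 8)]);
        std::printf("\n");
    }
    return 0;
}
```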

In addition, there are primitives for drawing rectangles, triangles, circles, and arcs, and current GPUs keep adding primitives for ever more realistic effects.

History

Modern GPUs are descendants of the monolithic graphics chips of the late 1970s and the 1980s. These chips had limited BitBLT support in the form of sprites, if they had BitBLT support at all, and generally had no shape-drawing support.

Some GPUs could run several operations from a display list and could use DMA to reduce the load on the host processor. An early example is the ANTIC co-processor used in Atari's Atari 800 and Atari 5200.

In the late 1980s and early 1990s, high-speed general-purpose microprocessors became a popular way to implement the most advanced graphics hardware.

Many graphics cards for PCs and workstations used digital signal processors (DSPs), such as Texas Instruments' TMS340 series, to implement fast drawing functions, and many laser printers contained a PostScript raster image processor (a special case of a graphics processor) running on a RISC CPU such as the AMD 29000.

As semiconductor process technology advanced, it became possible to move the drawing functions and the BitBLT onto the same board, and later onto the same chip, as a frame buffer controller such as VGA.

These cut-down 2D graphics accelerators were not as flexible as the microprocessor-based ones, but they were much easier to make and sell. The Commodore Amiga was the first mass-production computer with a blitter unit, and the IBM 8514 graphics system was one of the first PC video cards to implement 2D primitives in hardware.

What’s the Difference Between CPU and GPU?

While a GPU cannot replace the CPU in a general-purpose computer, today's GPUs are very powerful and can even exceed the clock frequency of an older CPU.

The power of GPUs and the dramatic speed gains of recent years stem from two factors. The first is the high degree of specialization: since a GPU is designed to perform a single task, more silicon can be dedicated to carrying out that task efficiently. For example, current GPUs are optimized for floating-point computation, which dominates 3D graphics.

On the other hand, many graphics applications carry a high degree of natural parallelism, since their basic units of computation are completely independent.

Applying brute force is therefore a good strategy for GPUs, completing more computations in the same amount of time. Current models typically feature half a dozen vertex processors and two to three times as many fragment (pixel) processors. In this way, a clock frequency of about 600-800 MHz, very low compared to that of CPUs, is converted into much greater computing power thanks to the parallel architecture.
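
The following sketch shows why this brute force pays off: in a typical per-pixel operation, every iteration of the loop is independent of all the others, so on a GPU each iteration can simply become its own thread. The function name and the brightness operation are illustrative.

```cpp
#include <cstdint>
#include <vector>

// A typical per-pixel operation: raise the brightness of every pixel.
// Each iteration reads and writes only pixels[i], so all iterations are
// independent; a GPU would run one thread per pixel and drop the loop.
void brighten(std::vector<std::uint8_t>& pixels, int amount) {
    for (std::size_t i = 0; i < pixels.size(); ++i) {  // one GPU thread per i
        int v = pixels[i] + amount;
        pixels[i] = static_cast<std::uint8_t>(v > 255 ? 255 : v);
    }
}
```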

One of the biggest differences from the CPU lies in the architecture. Unlike the central processor, which follows the von Neumann architecture, the GPU is based on the stream processing model. This model lends itself to parallel processing and to the deep pipelining the GPU applies to its tasks.

Architecture

A GPU is heavily pipelined, which means it contains a large number of functional units. These units can basically be divided into two groups: those that process vertices and those that process pixels. Vertices and pixels are therefore the main units of work that a GPU handles.

Besides the functional units, the memory is especially important. It stands out for its speed, and it plays a key role in storing intermediate results of operations and the textures that are used.

Initially, the GPU receives information from the CPU in the form of vertices. The first processing these vertices undergo takes place in the vertex shader, where transformations such as rotating or moving shapes are applied. After that, clipping determines which part of these vertices will be displayed, and the vertices are converted into pixels by rasterization. These stages place no significant load on the GPU.
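
A rough sketch of these first stages in simplified 2D form (real GPUs work with 4D homogeneous coordinates and clip whole primitives rather than single vertices; the names here are illustrative):

```cpp
#include <cmath>
#include <vector>

struct Vertex { float x, y; };

// Vertex-shader stage: transform each vertex (here, a rotation about the
// origin), then keep only the vertices that fall inside the visible
// [-1, 1] normalized viewport (a toy stand-in for clipping).
std::vector<Vertex> transformAndClip(const std::vector<Vertex>& in, float angle) {
    std::vector<Vertex> out;
    float c = std::cos(angle), s = std::sin(angle);
    for (const Vertex& v : in) {
        Vertex t{ c * v.x - s * v.y, s * v.x + c * v.y };  // per-vertex transform
        if (t.x >= -1.0f && t.x <= 1.0f && t.y >= -1.0f && t.y <= 1.0f)
            out.push_back(t);  // survives clipping; would be rasterized next
    }
    return out;
}
```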

The graphics chip's main bottleneck is found in the next step: the pixel shader. Here, per-pixel transformations are carried out, such as applying textures. When all this is done, and before the pixels are stored in the cache, effects such as anti-aliasing, blending, and fog are applied.
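
The per-pixel work can be sketched as follows; the nearest-neighbour texture lookup and the linear fog blend are illustrative stand-ins for what a pixel shader does, not any particular GPU's shader model:

```cpp
#include <algorithm>
#include <cstdint>

struct Color { std::uint8_t r, g, b; };

// Texturing: fetch the texel nearest to (u, v), with u and v in [0, 1].
Color sampleTexture(const Color* texture, int w, int h, float u, float v) {
    int x = std::clamp(static_cast<int>(u * w), 0, w - 1);
    int y = std::clamp(static_cast<int>(v * h), 0, h - 1);
    return texture[y * w + x];
}

// Fog: blend the shaded colour toward the fog colour as depth grows (depth in [0, 1]).
Color applyFog(Color c, Color fog, float depth) {
    auto mix = [depth](std::uint8_t a, std::uint8_t b) {
        return static_cast<std::uint8_t>(a + (b - a) * depth);
    };
    return { mix(c.r, fog.r), mix(c.g, fog.g), mix(c.b, fog.b) };
}
```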

Other functional units, called ROPs (raster operators), take the information stored in the cache, apply some effects to it, and prepare the pixels for display. After that, the output is stored in the frame buffer.
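
As one concrete example of such a raster operation, here is the standard "source over" alpha blend, out = src x alpha + dst x (1 - alpha), in the integer form a ROP might use when writing a pixel into the frame buffer (the fixed-point arithmetic shown is illustrative):

```cpp
#include <cstdint>

// Blend one colour channel of a source pixel over the destination value
// already in the frame buffer. alpha runs from 0 (transparent) to 255 (opaque).
std::uint8_t blendChannel(std::uint8_t src, std::uint8_t dst, std::uint8_t alpha) {
    return static_cast<std::uint8_t>((src * alpha + dst * (255 - alpha)) / 255);
}
```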

From there, two options exist: a digital monitor receives the pixels directly for display, or, for analog monitors, an analog signal is generated from them. In the latter case the pixels must pass through a DAC (Digital-to-Analog Converter) before finally appearing on the screen.

Programming

Initially, GPU programming was done through calls to BIOS interrupt services. Later, GPUs began to be programmed in the assembly language specific to each model.

Then another level was added between the hardware and the software: APIs (Application Programming Interfaces), which provide a more homogeneous language across the models on the market. The first widely used API was OpenGL (Open Graphics Library); Microsoft later developed DirectX.

After the APIs came the push for a language that is more natural for the programmer, that is, a higher-level language for graphics. Both OpenGL and DirectX gained such languages. The high-level language associated with the OpenGL library is the OpenGL Shading Language (GLSL), which in principle is implemented by all manufacturers.
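
As a sketch of how these layers fit together, the following C++ fragment hands a trivial GLSL shader to the driver through the OpenGL API. It assumes an OpenGL 3.3 context and a function loader such as GLAD have already been set up, and error handling is reduced to a single check.

```cpp
#include <glad/glad.h>  // assumed loader; any OpenGL 3.3 loader works

// A trivial GLSL fragment shader that outputs a constant orange colour.
const char* fragmentSrc =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.5, 0.2, 1.0); }\n";

GLuint compileFragmentShader() {
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &fragmentSrc, nullptr);  // hand the source to the driver
    glCompileShader(shader);                           // the driver compiles the GLSL
    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    return ok ? shader : 0;                            // 0 signals a compile failure
}
```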

The California-based company NVIDIA created a proprietary language called Cg, which achieved better results than GLSL in efficiency tests. In cooperation with NVIDIA, Microsoft developed the High-Level Shading Language (HLSL), almost identical to Cg but with some minor incompatibilities.
