In computing, a video card (also called a graphics card or a graphics accelerator) is a circuit board that controls what is shown on a computer monitor and does the calculations needed to create 2D and 3D images and graphics.
A video card can handle two types of images: two-dimensional (2D) images, like the Windows desktop, and three-dimensional (3D) images, like those in computer games. Computer-Aided Design (CAD) programs are often used by architects and designers to create 3D models on their computers. If a computer has a very fast video card, the architect can create very detailed 3D models.
Many computers have basic video and graphics capabilities built into the motherboard. These "onboard" video chips are not as fast as dedicated graphics cards, but they are fast enough for basic computer use and even some simple computer games. If a computer user wants faster and more detailed graphics, a video card can be installed.
Video cards have their own processor, called a Graphics Processing Unit (GPU). The GPU is separate from the main computer processor, the Central Processing Unit (CPU). The CPU's job is to run the calculations needed to make the computer work. The GPU's job is to handle 3D graphics calculations so the CPU does not have to. These calculations take a lot of processing power, so having a video card handle them lets the CPU focus on other things, like running computer programs.
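As a rough illustration of this idea (this sketch is not from the article, and it is only one of many ways a program can use a GPU), the short CUDA example below hands a simple calculation to the video card's GPU and then waits for the result. The names addArrays, a, b, and out are made up for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// This small function (a "kernel") runs on the GPU, not the CPU.
__global__ void addArrays(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = a[i] + b[i];  // each GPU thread adds one pair of numbers
    }
}

int main() {
    const int n = 1024;
    float *a, *b, *out;
    // Allocate memory that both the CPU and the GPU can reach.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Hand the calculation to the GPU; the CPU is free to do other work here.
    addArrays<<<(n + 255) / 256, 256>>>(a, b, out, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("out[0] = %f\n", out[0]);  // prints 3.000000
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```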
Video cards also have their own memory, separate from the main computer memory, and it is usually much faster. This helps the GPU do its graphics calculations even faster. Most video cards also allow more than one monitor to be plugged in at the same time. Graphics chip makers Nvidia and ATI have technologies that let two identical cards be linked together in one computer for much faster performance; Nvidia calls its technology SLI, and ATI calls its CrossFire. Some modern graphics cards can even process physics calculations to create even more realistic-looking 3D worlds.
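To illustrate the point that a video card has its own memory, here is a small, hypothetical sketch (again, not from the article) that uses the CUDA runtime to ask the first video card in a system for its name and how much memory it carries. It assumes a CUDA-capable card is installed.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    // Ask the CUDA runtime about device 0 (the first video card it finds).
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No CUDA-capable video card found.\n");
        return 1;
    }
    // totalGlobalMem is the memory on the card itself, separate from main RAM.
    printf("Video card:   %s\n", prop.name);
    printf("Video memory: %zu MB\n", prop.totalGlobalMem / (1024 * 1024));
    return 0;
}
```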
Video cards typically connect to a motherboard using Peripheral Component Interconnect (PCI), Accelerated Graphics Port (AGP), or PCI Express (PCI-E). PCI-E is the newest and fastest connection; most (if not all) new video cards and motherboards use it. Before PCI-E, AGP was the standard connection for video cards. Before AGP, video cards were designed for PCI (sometimes called "conventional" PCI).
In the early years of computing, graphics processing was very basic and could be done by the CPU along with all of its other work. However, as computer games advanced and began using 3D graphics, the CPU had too much to do, and CPU makers could not make processors fast enough to keep up. Video cards with their own processor, the GPU, were developed to solve this problem. This lets the CPU do more work, since it does not have to spend any time on advanced graphics calculations; it can simply pass them to the GPU.
The first video cards connected to the motherboard through the ISA connection. The first popular non-IBM video cards were made by a company called Hercules Computer Technology, Inc. Over the years, the importance of video cards has grown. As they evolved, a new connection standard was developed called the Accelerated Graphics Port (AGP). This was the first motherboard connection designed exclusively for video cards, and it was much faster at moving information between the video card and the rest of the computer. Eventually, the AGP connection became outdated, and a new connection, PCI Express (PCI-E), became the standard for video cards. Most video cards made today use PCI-E to connect to the motherboard.