Nvidia’s GeForce 256: the first fully integrated GPU


This article is part of the Electronics History: The Graphics Chip Chronicles series.

The term “GPU” has been around since at least the 1980s. Nvidia popularized it in 1999 when it released the GeForce 256 add-in board (AIB) as the world’s first fully integrated graphics processing unit. It offered integrated transform, lighting, triangle setup/clipping, and rendering engines in a single-chip processor.

Very large-scale integration (VLSI) began to take over the semiconductor industry in the early 1990s. As the number of transistors engineers could cram onto a single chip grew almost exponentially, so did the number of functions that could be built into the CPU and the graphics chip.

One of the biggest strains on the CPU was performing the geometry calculations for graphics. Architects at various graphics chip companies decided that this work belonged in the graphics processor. The operation was known at the time as transformation and lighting (T&L). A T&L engine is, in effect, a fixed-function vertex shader and geometry translator.
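To make the division of labor concrete, here is a minimal sketch in Python of the per-vertex work a fixed-function T&L engine takes over from the CPU: a matrix transform of each vertex followed by a simple diffuse lighting calculation. The function names and the single directional light are illustrative assumptions, not a description of the NV10’s actual pipeline.

```python
# Illustrative sketch of fixed-function T&L work: transform a vertex by a
# 4x4 matrix, then compute simple diffuse (Lambertian) lighting.
# Names and the single directional light are assumptions for illustration.

def transform(vertex, matrix):
    """Multiply a 4-component vertex (x, y, z, w) by a 4x4 matrix."""
    return [sum(matrix[row][col] * vertex[col] for col in range(4))
            for row in range(4)]

def diffuse(normal, light_dir, base_color):
    """Scale a base color by the clamped dot product of normal and light direction."""
    intensity = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return [c * intensity for c in base_color]

# Example: an identity transform plus a light shining straight down the z-axis.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(transform([1.0, 2.0, 3.0, 1.0], identity))                  # [1.0, 2.0, 3.0, 1.0]
print(diffuse([0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [1.0, 0.5, 0.25]))  # fully lit color
```

Offloading exactly this per-vertex arithmetic to dedicated hardware is what freed the CPU for game logic and physics.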

In 1997, 3Dlabs developed its Glint Gamma processor, the first programmable transformation and lighting engine, as part of its Glint workstation graphics chips and introduced the term GPU – geometry processor unit. The geometry processor was a separate chip named Delta (also known as the DMX); the 3Dlabs GMX was a co-processor for the Glint rasterizer.

Then, in October 1999, Nvidia introduced the NV10 GPU with an integrated T&L engine for a new consumer graphics card. ATI soon followed with its Radeon graphics chip and called it a visual processing unit – VPU. But Nvidia came out on top in the terminology wars and has since been associated with the GPU and credited with inventing it.

Based on TSMC’s 220nm process, the 120MHz NV10 had 17 million transistors in a 139mm² die and supported DirectX 7.0. The GeForce 256 AIB used the NV10 with SDR memory.

Before Nvidia started offering Nvidia-branded AIBs, the company depended on partners to build and sell the cards. However, Nvidia did offer reference designs to its OEM partners.

The first AIB to use the 64MB SDR was ELSA’s ERAZOR X, a board of ELSA’s own design built in the NLX form factor.

The GPU had a wide 128-bit memory interface and could use DDR or SGRAM memory, a choice left to the OEM card partners and usually made as a price-performance trade-off. The AIB shown in the image above has four 8MB SGRAM chips. As a 1999 AIB, it used the AGP 4X interface with DMA support and supported the Direct3D 7.0 and OpenGL 1.2.1 APIs with transformation and lighting.
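For a sense of what that 128-bit interface implies, peak memory bandwidth can be estimated from the bus width and memory clock. The article does not give the memory speed, so the 166MHz SDR clock in the sketch below is only an assumed example, and the helper function is hypothetical.

```python
# Rough peak-memory-bandwidth estimate from bus width and clock.
# The 166 MHz SDR clock is an assumed example; the article does not state it.

def peak_bandwidth_gbs(bus_width_bits, clock_mhz, transfers_per_clock=1):
    """Peak GB/s = (bits / 8) * clock * transfers per clock."""
    return bus_width_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

print(f"SDR: {peak_bandwidth_gbs(128, 166):.2f} GB/s")    # ~2.66 GB/s
print(f"DDR: {peak_bandwidth_gbs(128, 166, 2):.2f} GB/s")  # ~5.31 GB/s
```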

The chip had many advanced features, including four independent pixel pipelines running at 120 MHz, which gave the GPU a fill rate of 480 Mpix/sec. The video output was VGA. The chip also came with hardware alpha blending and was HDTV compatible.
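The quoted fill rate follows directly from those numbers: four pixels per clock at 120 MHz. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the quoted fill rate:
# four pixel pipelines, each retiring one pixel per 120 MHz clock.
pipelines = 4
clock_hz = 120e6
print(f"{pipelines * clock_hz / 1e6:.0f} Mpix/sec")  # 480 Mpix/sec
```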

In addition to its advanced graphics features, the chip had powerful video processing capability. It had TV-out capability with built-in NTSC/PAL encoders, and it supported S-VHS and composite video input, as well as stereo 3D.

The integration of transformation and lighting into the GPU was an important differentiator for the GeForce 256. Before 3Dlabs’ standalone T&L processor, 3D accelerators relied on the CPU to perform those functions. Integrating T&L reduced costs for consumer AIBs while improving performance.

Prior to the GF256, only professional AIBs designed for CAD had a T&L coprocessor. The chip also expanded Nvidia’s reach by allowing the company to enter the professional graphics market, where it sold these AIBs under the Quadro name. The Quadro AIBs used the same NV10 as the GeForce AIBs, paired with drivers certified for various professional graphics applications.

This article is part of the Electronics History: The Graphics Chip Chronicles series.
