What are Vertex and Pixel shaders?

Solution 1:

DirectX 10 and OpenGL 3 introduced the Geometry Shader as a third type.

In rendering pipeline order -

Vertex Shader - Takes a single vertex and can adjust it. Can be used to work out complex vertex lighting calculations as a setup for the next stage and/or warp the points around (wobble, scale, etc.). A minimal GLSL sketch of this stage and the pixel stage follows the walkthrough below.

each resulting primitive gets passed to the

Geometry Shader - Takes each transformed primitive (triangle, etc.) and can perform calculations on it. It can add new vertices, remove them, or move them as required. This can be used to add or remove levels of detail dynamically from a single base mesh, create mathematical meshes from a single point (for complex particle systems), and other similar tasks. A pass-through sketch of this stage appears further down.

each resulting primitive gets scanline converted and each pixel the span covers gets passed through the

Pixel Shader (Fragment Shader in OpenGL) - Calculates the colour of a pixel on the screen based on what the vertex shader passes in, bound textures, and user-supplied data. It cannot read the current screen at all; it just works out what colour/transparency that pixel should be for the current primitive.

those pixels then get put on the current draw buffer (screen, backbuffer, render-to-texture, whatever)
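
To make the two stages the question asks about concrete, here is a minimal GLSL vertex/fragment pair. It is only a sketch, and every name in it (aPosition, vTexCoord, uTime, uModelViewProjection, uDiffuseMap, uTint) is made up for illustration rather than taken from any particular engine. The vertex shader wobbles each vertex and passes a texture coordinate on; the fragment shader samples a bound texture and tints it.

```glsl
#version 330 core

// Per-vertex attributes supplied by the application (illustrative names).
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec2 aTexCoord;

// Developer-supplied data, available to the whole draw call.
uniform mat4  uModelViewProjection;
uniform float uTime;

// Passed on, and later interpolated for the fragment shader.
out vec2 vTexCoord;

void main()
{
    // "Warp the points around": a small time-driven sine wobble.
    vec3 p = aPosition;
    p.y += 0.1 * sin(uTime + p.x * 4.0);

    gl_Position = uModelViewProjection * vec4(p, 1.0);
    vTexCoord   = aTexCoord;
}
```

```glsl
#version 330 core

// Values interpolated across the primitive from the vertex shader outputs.
in vec2 vTexCoord;

// A bound texture and a user-supplied tint colour.
uniform sampler2D uDiffuseMap;
uniform vec4      uTint;

out vec4 fragColour;

void main()
{
    // Work out the colour/transparency of this one pixel; note that nothing
    // here can read what is already in the framebuffer.
    fragColour = texture(uDiffuseMap, vTexCoord) * uTint;
}
```

The uniform variables are exactly the "global data" and developer-supplied values described in the next paragraph.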

All shaders can access global data such as the world-view matrix, and the developer can pass in simple variables (uniforms/shader constants) for them to use for lighting or any other purpose. Shaders were originally written in an assembler-like language, but modern DirectX and OpenGL versions have built-in compilers for high-level C-like languages called HLSL and GLSL respectively. NVIDIA also has a shading language called Cg that works with both APIs.
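
And for the Geometry Shader stage mentioned earlier, a minimal GLSL pass-through version might look like the sketch below; extending or skipping the emit loop is how an implementation would add or drop vertices.

```glsl
#version 330 core

// One whole primitive (here a triangle) comes in...
layout(triangles) in;
// ...and up to max_vertices vertices may be emitted as new primitives.
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    // Pure pass-through: re-emit the triangle unchanged.
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```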

[edited to reflect the incorrect order I had before (Geometry->Vertex->Pixel) as noted in a comment.]

DirectX 11 adds new pipeline stages for tessellation: the Hull Shader, the fixed-function Tessellator, and the Domain Shader. The new complete order is Vertex -> Hull -> Tessellator -> Domain -> Geometry -> Pixel. I haven't used these new stages yet, so I don't feel qualified to describe them accurately.
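
For what it's worth, OpenGL's counterparts to the Hull and Domain shaders are the tessellation control and tessellation evaluation shaders. A bare-bones GLSL sketch, with arbitrary fixed tessellation levels chosen purely for illustration, looks roughly like this:

```glsl
#version 400 core

// Hull-shader equivalent: runs once per output control point.
layout(vertices = 3) out;

void main()
{
    // Pass each control point through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // Tell the fixed-function tessellator how finely to subdivide
    // (constant levels here; a real shader would choose them per patch).
    if (gl_InvocationID == 0)
    {
        gl_TessLevelInner[0] = 4.0;
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
    }
}
```

```glsl
#version 400 core

// Domain-shader equivalent: runs once per vertex the tessellator generates.
layout(triangles, equal_spacing, ccw) in;

void main()
{
    // Place the generated vertex using its barycentric coordinate in the patch.
    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                + gl_TessCoord.y * gl_in[1].gl_Position
                + gl_TessCoord.z * gl_in[2].gl_Position;
}
```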

Solution 2:

A Pixel Shader is a GPU (Graphics Processing Unit) component that can be programmed to operate on a per-pixel basis and take care of things like lighting and bump mapping.

A Vertex Shader is also a GPU component, programmed using a specific assembly-like language just like pixel shaders, but it is oriented toward the scene geometry and can do things like adding cartoony silhouette edges to objects, etc.
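
As a rough sketch of the silhouette idea: one common approach (the "inverted hull" trick, which this answer does not spell out) inflates the mesh along its normals in the vertex shader and draws that shell in a second pass with front faces culled, so only a rim of it shows. The names below are made up for illustration.

```glsl
#version 330 core

layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;

uniform mat4  uModelViewProjection;
uniform float uOutlineWidth;   // how far to push vertices outwards

void main()
{
    // Inflate the mesh slightly along its normals; drawn with front-face
    // culling in a second pass, the shell shows up only as a dark rim
    // around the original object.
    vec3 inflated = aPosition + aNormal * uOutlineWidth;
    gl_Position = uModelViewProjection * vec4(inflated, 1.0);
}
```

The matching pixel shader would simply output a solid outline colour.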

Neither is better than the other; they each have their specific uses. Most modern graphics cards supporting DirectX 9 or better include these capabilities.

There are multiple resources on the web for gaining a better understanding of how to use these things. NVIDIA and ATI in particular are good sources for documentation on this topic.

Solution 3:

Vertex and Pixel shaders provide different functions within the graphics pipeline. Vertex shaders take and process vertex-related data (positions, normals, texcoords).

Pixel (or, more accurately, Fragment) shaders take values interpolated from those computed in the Vertex shader and shade the resulting pixel fragments. Most of the "cool" stuff is done in pixel shaders. This is where things like texture lookup and lighting take place.
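
For example, a fragment shader that combines a texture lookup with simple per-pixel Lambert (N·L) diffuse lighting from interpolated values might look like this in GLSL; the variable names are illustrative, not from any particular framework.

```glsl
#version 330 core

// Interpolated from the vertex shader outputs.
in vec3 vNormal;     // surface normal (assumed to be in world space here)
in vec2 vTexCoord;   // texture coordinate

uniform sampler2D uDiffuseMap;   // bound texture
uniform vec3      uLightDir;     // direction towards the light, normalised

out vec4 fragColour;

void main()
{
    // Texture lookup.
    vec3 albedo = texture(uDiffuseMap, vTexCoord).rgb;

    // Per-pixel Lambert diffuse term.
    float diffuse = max(dot(normalize(vNormal), uLightDir), 0.0);

    fragColour = vec4(albedo * diffuse, 1.0);
}
```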