How do you render primitives as wireframes in OpenGL?

Use

glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );

to switch wireframe rendering on, and

glPolygonMode( GL_FRONT_AND_BACK, GL_FILL );

to go back to normal (filled) rendering.

Note that texture mapping and lighting, if enabled, will still be applied to the wireframe lines, which can look odd, so you may want to disable them for the wireframe pass.
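One simple way to do that in the fixed-function pipeline is to save and restore state around the wireframe pass. A minimal sketch (DrawScene() stands in for your own drawing code):

// Draw a plain, unlit, untextured wireframe pass, then restore all state.
glPushAttrib(GL_ENABLE_BIT | GL_POLYGON_BIT);
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);

DrawScene();

glPopAttrib(); // brings back lighting, texturing and the polygon mode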


From http://cone3d.gamedev.net/cgi-bin/index.pl?page=tutorials/ogladv/tut5

// Turn on wireframe mode
glPolygonMode(GL_FRONT, GL_LINE);
glPolygonMode(GL_BACK, GL_LINE);

// Draw the box
DrawBox();

// Turn off wireframe mode
glPolygonMode(GL_FRONT, GL_FILL);
glPolygonMode(GL_BACK, GL_FILL);

In a forward-compatible OpenGL 3+ context you can still use glPolygonMode as described above (although in core profiles only GL_FRONT_AND_BACK is accepted as the face argument), but note that lines wider than 1px are deprecated there. So while you can draw triangles as wireframe, the lines have to be very thin. OpenGL ES has no glPolygonMode at all; there you render the edges as GL_LINES primitives instead, with the same line-width limitation.
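As a sketch of the GL_LINES route (the triIndices/lineIbo names and the unoptimized edge list are assumptions for illustration): build a second index buffer listing every triangle's three edges and draw it with GL_LINES. Shared edges get drawn twice, which is usually fine for a debug view.

// Build a GL_LINES index list from the triangle index list.
std::vector<GLushort> lineIndices;
for(size_t t = 0; t < triIndices.size(); t += 3) {
    GLushort i0 = triIndices[t + 0], i1 = triIndices[t + 1], i2 = triIndices[t + 2];
    GLushort edge[6] = { i0, i1,   i1, i2,   i2, i0 };
    lineIndices.insert(lineIndices.end(), edge, edge + 6);
}

// ... upload lineIndices into lineIbo once, then at draw time:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, lineIbo);
glDrawElements(GL_LINES, (GLsizei)lineIndices.size(), GL_UNSIGNED_SHORT, 0);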

In OpenGL it is possible to use geometry shaders to take incoming triangles, disassemble them and send them for rasterization as quads (pairs of triangles really) emulating thick lines. Pretty simple, really, except that geometry shaders are notorious for poor performance scaling.
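For reference, that idea could look roughly like the GLSL 1.50 sketch below; u_viewport (viewport size in pixels) and u_line_width (desired width in pixels) are assumed uniforms, and clipping corner cases are ignored.

#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 12) out;

uniform vec2 u_viewport;    // viewport size in pixels
uniform float u_line_width; // desired line width in pixels

// Expand one edge (endpoints in clip space) into a screen-space quad.
void emit_edge(vec4 a, vec4 b)
{
    vec2 ndc_a = a.xy / a.w;
    vec2 ndc_b = b.xy / b.w;
    vec2 dir = normalize((ndc_b - ndc_a) * u_viewport); // edge direction in pixel space
    // perpendicular offset of half the line width to each side, converted back to NDC
    vec2 offset = vec2(-dir.y, dir.x) * u_line_width / u_viewport;
    gl_Position = vec4((ndc_a + offset) * a.w, a.zw); EmitVertex();
    gl_Position = vec4((ndc_a - offset) * a.w, a.zw); EmitVertex();
    gl_Position = vec4((ndc_b + offset) * b.w, b.zw); EmitVertex();
    gl_Position = vec4((ndc_b - offset) * b.w, b.zw); EmitVertex();
    EndPrimitive();
}

void main()
{
    emit_edge(gl_in[0].gl_Position, gl_in[1].gl_Position);
    emit_edge(gl_in[1].gl_Position, gl_in[2].gl_Position);
    emit_edge(gl_in[2].gl_Position, gl_in[0].gl_Position);
}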

What you can do instead, and what will also work in OpenGL ES, is to employ a fragment shader. Think of it as applying a texture of a wireframe triangle to each triangle, except that no actual texture is needed; it can be generated procedurally. But enough talk, let's code. The fragment shader:

in vec3 v_barycentric; // barycentric coordinate inside the triangle
uniform float f_thickness; // thickness of the rendered lines

void main()
{
    float f_closest_edge = min(v_barycentric.x,
        min(v_barycentric.y, v_barycentric.z)); // see to which edge this pixel is the closest
    float f_width = fwidth(f_closest_edge); // calculate derivative (divide f_thickness by this to have the line width constant in screen-space)
    float f_alpha = smoothstep(f_thickness, f_thickness + f_width, f_closest_edge); // calculate alpha
    gl_FragColor = vec4(vec3(.0), f_alpha);
}
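Note that, as written, f_alpha comes out as 0 on the lines and 1 in the triangle interior, so how you composite the result matters. One option (an assumption about the intended setup, not something stated in the original) is a second pass over the shaded mesh with blending enabled, glBlendFunc(GL_ZERO, GL_SRC_ALPHA) and the depth test set to GL_LEQUAL, which leaves the interior untouched and darkens the edges to black. Another is to skip blending and output the mix yourself in a single pass, e.g. gl_FragColor = vec4(mix(vec3(0.0), u_fill_color, f_alpha), 1.0); with a u_fill_color uniform of your choosing.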

And the vertex shader:

in vec4 v_pos; // position of the vertices
in vec3 v_bc; // barycentric coordinate inside the triangle

out vec3 v_barycentric; // barycentric coordinate inside the triangle

uniform mat4 t_mvp; // model-view-projection matrix

void main()
{
    gl_Position = t_mvp * v_pos;
    v_barycentric = v_bc; // just pass it on
}

Here, the barycentric coordinates are simply (1, 0, 0), (0, 1, 0) and (0, 0, 1) for the three triangle vertices (the order does not really matter, which makes packing into triangle strips potentially easier).
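For non-indexed triangle soup, filling in that attribute is a short loop; a sketch (vertexCount and the actual attribute upload are left as assumptions):

// Cycle (1,0,0), (0,1,0), (0,0,1) over the vertices of each triangle.
static const GLfloat bc[3][3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
std::vector<GLfloat> barycentric;
barycentric.reserve(vertexCount * 3);
for(size_t v = 0; v < vertexCount; ++v)
    barycentric.insert(barycentric.end(), bc[v % 3], bc[v % 3] + 3);
// upload `barycentric` as the v_bc vertex attribute next to the positions

For indexed meshes this may require duplicating some shared vertices, since a vertex can need a different barycentric coordinate in each triangle that uses it.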

The obvious disadvantage of this approach is that it consumes an extra vertex attribute (effectively a set of texture coordinates) and you need to modify your vertex array. It could be solved with a very simple geometry shader, as sketched below, but I'd still suspect that will be slower than just feeding the GPU with more data.
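That "very simple geometry shader" could look roughly like this (GLSL 1.50 sketch; any other varyings your vertex shader outputs would need to be passed through as well, and the vertex shader then no longer needs the v_bc attribute or the v_barycentric output):

#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

out vec3 v_barycentric;

void main()
{
    const vec3 bc[3] = vec3[3](vec3(1.0, 0.0, 0.0), vec3(0.0, 1.0, 0.0), vec3(0.0, 0.0, 1.0));
    for(int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position; // pass the triangle through unchanged
        v_barycentric = bc[i];              // generate the barycentric coordinate here
        EmitVertex();
    }
    EndPrimitive();
}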