Paint Pixels to Screen via Linux FrameBuffer

I was recently struck by a curious idea to take input from /dev/urandom, convert relevant characters to random integers, and use those integers as the RGB and x/y values for pixels to paint onto the screen.

I've done some research (here on Stack Overflow and elsewhere), and many suggest that you can simply write to /dev/fb0 directly, as it is the file representation of the device. Unfortunately, this does not seem to produce any visually apparent results.

I found a sample C program from a Qt tutorial (no longer available) that uses mmap to write to the buffer. The program runs successfully, but again produces no output on the screen. Interestingly, when I suspended my laptop and later resumed it, I saw a momentary flash of the image (a red square) that had been written to the framebuffer much earlier. Does writing to the framebuffer still work in Linux for painting to the screen? Ideally, I'd like to write a (ba)sh script, but C or similar would work as well. Thanks!

EDIT: Here's the sample program...may look familiar to vets.

#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/mman.h>
#include <sys/ioctl.h>

int main()
{
    int fbfd = 0;
    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    long int screensize = 0;
    char *fbp = 0;
    int x = 0, y = 0;
    long int location = 0;

    // Open the file for reading and writing
    fbfd = open("/dev/fb0", O_RDWR);
    if (fbfd == -1) {
        perror("Error: cannot open framebuffer device");
        exit(1);
    }
    printf("The framebuffer device was opened successfully.\n");

    // Get fixed screen information
    if (ioctl(fbfd, FBIOGET_FSCREENINFO, &finfo) == -1) {
        perror("Error reading fixed information");
        exit(2);
    }

    // Get variable screen information
    if (ioctl(fbfd, FBIOGET_VSCREENINFO, &vinfo) == -1) {
        perror("Error reading variable information");
        exit(3);
    }

    printf("%dx%d, %dbpp\n", vinfo.xres, vinfo.yres, vinfo.bits_per_pixel);

    // Figure out the size of the mapping in bytes, using the driver's
    // line length so any padding at the end of each row is included
    screensize = finfo.line_length * vinfo.yres;

    // Map the device to memory
    fbp = (char *)mmap(0, screensize, PROT_READ | PROT_WRITE, MAP_SHARED, fbfd, 0);
    if (fbp == MAP_FAILED) {
        perror("Error: failed to map framebuffer device to memory");
        exit(4);
    }
    printf("The framebuffer device was mapped to memory successfully.\n");

    // Draw a 200x200 gradient square with its top-left corner at (100, 100),
    // computing each pixel's byte offset in the buffer from its (x, y) position
    for (y = 100; y < 300; y++)
        for (x = 100; x < 300; x++) {

            location = (x+vinfo.xoffset) * (vinfo.bits_per_pixel/8) +
                       (y+vinfo.yoffset) * finfo.line_length;

            if (vinfo.bits_per_pixel == 32) {
                *(fbp + location) = 100;        // Some blue
                *(fbp + location + 1) = 15+(x-100)/2;     // A little green
                *(fbp + location + 2) = 200-(y-100)/5;    // A lot of red
                *(fbp + location + 3) = 0;      // No transparency
            } else  { //assume 16bpp
                int b = 10;
                int g = (x-100)/6;     // A little green
                int r = 31-(y-100)/16;    // A lot of red
                unsigned short int t = r<<11 | g << 5 | b;
                *((unsigned short int*)(fbp + location)) = t;
            }

        }
    munmap(fbp, screensize);
    close(fbfd);
    return 0;
}

I've had success with the following few experiments.

First, find out whether X is using TrueColor RGB padded to 32 bits (or just assume this is the case). Then check that /dev/fb0 exists and that you have write permission to it. If both of these hold (and I expect many modern toolkits/desktops/PCs use these defaults), you should be able to do the following. If those defaults don't hold, you can probably still have some success with these tests, though the details may vary.
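As a concrete starting point, here is a minimal sketch of that check, assuming the standard Linux fbdev ioctls and that your framebuffer is /dev/fb0 (adjust the path if yours differs):

#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    // Opening read-write fails if the device is missing or you lack permission
    int fd = open("/dev/fb0", O_RDWR);
    if (fd == -1) {
        perror("open /dev/fb0");
        return 1;
    }

    struct fb_var_screeninfo vinfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) == -1) {
        perror("FBIOGET_VSCREENINFO");
        return 1;
    }

    printf("%ux%u, %u bpp\n", vinfo.xres, vinfo.yres, vinfo.bits_per_pixel);
    if (vinfo.bits_per_pixel == 32)
        printf("Looks like 32-bit TrueColor, so the tests below should apply as-is.\n");

    close(fd);
    return 0;
}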

Test 1: open a virtual terminal (in X) and type: $ echo "ddd ... ddd" > /dev/fb0, where the ... is actually a few screenfuls of d. The result will be one or more (partial) lines of gray across the top of your screen, depending on how long your echo string is and what pixel resolution you have enabled. You can pick any letters (their ASCII values are all less than 0x80, so the color produced will be a dark gray), and vary the letters if you want something besides gray. Obviously, this can be generalized to a shell loop, or you can cat a large file to see the effect more clearly, e.g. $ cat /lib/libc.so.6 > /dev/fb0, in order to see the true colors of some FSF supporters ;-P
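If you'd rather do Test 1 from C, a rough equivalent is the sketch below; it assumes /dev/fb0 is writable, and the byte value and count are arbitrary:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[4096];
    memset(buf, 'd', sizeof(buf));   // same byte value the echo "ddd..." sends

    int fd = open("/dev/fb0", O_WRONLY);
    if (fd == -1) {
        perror("open /dev/fb0");
        return 1;
    }

    // Write roughly 1 MB, which paints a gray band across the top of the screen
    for (int i = 0; i < 256; i++) {
        if (write(fd, buf, sizeof(buf)) == -1) {
            perror("write");
            break;
        }
    }

    close(fd);
    return 0;
}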

Don't worry if a large chunk of your screen gets written over. X still has control of the mouse pointer and still has its own idea of where your windows are mapped. All you have to do is grab any window and drag it around a bit to erase the noise.

Test 2: cat /dev/fb0 > xxx, then change the appearance of your desktop (e.g., open new windows and close others). Finally, do the reverse, cat xxx > /dev/fb0, to get your old desktop back!

Ha, well, not quite. The image of your old desktop is an illusion, and you will quickly dispel it the moment you open any window to full screen.
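For what it's worth, here is a small C sketch of the same dump/restore idea (the fbdump.raw file name is just an example, and error handling is kept minimal):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    // Run with no arguments to save the screen, or with "restore" to put it back
    int restoring = (argc > 1 && strcmp(argv[1], "restore") == 0);

    int fb = open("/dev/fb0", restoring ? O_WRONLY : O_RDONLY);
    int dump = open("fbdump.raw",
                    restoring ? O_RDONLY : (O_WRONLY | O_CREAT | O_TRUNC), 0644);
    if (fb == -1 || dump == -1) {
        perror("open");
        return 1;
    }

    int src = restoring ? dump : fb;
    int dst = restoring ? fb : dump;

    unsigned char buf[65536];
    ssize_t n;
    while ((n = read(src, buf, sizeof(buf))) > 0)
        write(dst, buf, n);   // short-write checking omitted for brevity

    close(fb);
    close(dump);
    return 0;
}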

Test 3: Write a little app that grabs a prior dump of /dev/fb0 and modifies the colors of the pixels, e.g., removes the red component, boosts the blue, swaps red and green, etc. Then write these pixels back into a new file that you can look at later via the simple shell approach of Test 2. Note that you will likely be dealing with 4-byte B-G-R-A quantities per pixel. That means you want to ignore every fourth byte and treat the first byte of each group as the blue component: each pixel is a 32-bit ARGB value stored in little-endian byte order, so if you walk the bytes through increasing indexes of a C array, blue comes first, then green, then red, then alpha, i.e., B-G-R-A rather than A-R-G-B.
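Here is one minimal way such an app might look, assuming a 32 bpp B-G-R-A dump and example file names; it simply zeroes the red component of every pixel:

#include <stdio.h>

int main(void)
{
    FILE *in  = fopen("fbdump.raw", "rb");        // dump taken in Test 2
    FILE *out = fopen("fbdump-nored.raw", "wb");  // modified copy to cat back later
    if (!in || !out) {
        perror("fopen");
        return 1;
    }

    unsigned char px[4];                 // bytes arrive as B, G, R, A
    while (fread(px, 1, 4, in) == 4) {
        px[2] = 0;                       // drop the red component
        fwrite(px, 1, 4, out);
    }

    fclose(in);
    fclose(out);
    return 0;
}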

Test 4: Write an app in any language that loops at video speed, sending a non-square picture (think xeyes) to part of the screen so as to create an animation without any window borders. For extra points, have the animation move all over the screen. You will have to skip a large stride after drawing each small row of the picture's pixels, to make up for the screen width likely being much larger than the picture being animated.
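As a rough illustration (not a polished implementation), the sketch below bounces a small solid square around a 32 bpp framebuffer; the square size, colors, and frame timing are arbitrary choices:

#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    struct fb_var_screeninfo v;
    struct fb_fix_screeninfo f;
    if (fd == -1 || ioctl(fd, FBIOGET_VSCREENINFO, &v) == -1 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &f) == -1)
        return 1;

    size_t size = (size_t)f.line_length * v.yres;
    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED)
        return 1;

    const int side = 40;                             // 40x40 pixel square
    int x = 0, y = 0, dx = 4, dy = 3;
    struct timespec frame = { 0, 16 * 1000 * 1000 }; // roughly 60 fps

    for (int i = 0; i < 2000; i++) {
        // Erase the square at its old position by painting it black
        for (int row = 0; row < side; row++)
            memset(fb + (size_t)(y + row) * f.line_length + (size_t)x * 4,
                   0x00, side * 4);

        // Move, bouncing off the screen edges
        x += dx; y += dy;
        if (x < 0 || x + side >= (int)v.xres) { dx = -dx; x += dx; }
        if (y < 0 || y + side >= (int)v.yres) { dy = -dy; y += dy; }

        // Draw at the new position; note the line_length stride between rows,
        // since the screen is much wider than the picture being drawn
        for (int row = 0; row < side; row++) {
            uint8_t *p = fb + (size_t)(y + row) * f.line_length + (size_t)x * 4;
            for (int col = 0; col < side; col++) {
                p[4 * col + 0] = 0xff;   // blue
                p[4 * col + 1] = 0x80;   // green
                p[4 * col + 2] = 0x00;   // red
                p[4 * col + 3] = 0x00;   // alpha / unused
            }
        }
        nanosleep(&frame, NULL);
    }

    munmap(fb, size);
    close(fd);
    return 0;
}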

Test 5: Play a trick on a friend, e.g., extend Test 4 so that a picture of an animated person appears to pop up on their desktop (maybe film yourself to get the pixel data), then walks over to one of their important desktop folders, picks up the folder and shreds it apart, then starts laughing hysterically, and then have a fireball come out and engulf their entire desktop. Though this will all be an illusion, they may freak out a bit; use that as a learning experience to show off Linux and open source, and to show how it looks much scarier to a novice than it actually is. [Such "viruses" are generally harmless illusions on Linux.]


If you're running X11, you MUST go through the X11 APIs to draw to the screen. Going around the X server is very broken (and often, as you've seen, does not work). It may also cause crashes or general display corruption.

If you want to be able to run everywhere (both on the console and under X), look at SDL or GGI. If you only care about X11, you can use GTK, Qt, or even Xlib. There are many, many options...
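For example, a bare-bones Xlib program that draws through the X server rather than around it might look like this (compile with -lX11; the window size and fill color are arbitrary):

#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     0, 0, 400, 300, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    GC gc = XCreateGC(dpy, win, 0, NULL);
    XSetForeground(dpy, gc, 0xCC0000);   // reddish, assuming a TrueColor visual

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == Expose)
            XFillRectangle(dpy, win, gc, 100, 100, 200, 100);  // redraw on expose
        else if (ev.type == KeyPress)
            break;                                             // any key quits
    }

    XCloseDisplay(dpy);
    return 0;
}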