Turning a log file into a sort of circular buffer

Folks, is there a *nix solution that would make a log file act as a circular buffer? For example, I'd like log files to store at most 1 GB of data and discard the oldest entries once the limit is reached.

Is it possible at all? I believe that to achieve this, the log file would have to be turned into some sort of special device...

P.S. I'm aware of the various log-rotation tools, but that is not what I need. Log rotation requires a lot of I/O and usually happens only once a day, whereas I need a "runtime" solution.


Solution 1:

Linux has a kernel ring buffer. You can use dmesg to display it.

Or here is a Linux kernel module that appears to do what you want.

What is emlog?

emlog is a Linux kernel module that makes it easy to access the most recent (and only the most recent) output from a process. It works just like "tail -f" on a log file, except that the storage required never grows. This can be useful in embedded systems where there isn't enough memory or disk space for keeping complete log files, but the most recent debugging messages are sometimes needed (e.g., after an error is observed).

The emlog kernel module implements a simple character device driver. The driver acts like a named pipe that has a finite, circular buffer. The size of the buffer is easily configurable. As more data is written into the buffer, the oldest data is discarded. A process that reads from an emlog device will first read the existing buffer, then see new text as it's written, similar to monitoring a log file using "tail -f". (Non-blocking reads are also supported, if a process needs to get the current contents of the log without blocking to wait for new data.)

Solution 2:

The closest thing I can think of is RRDtool, but it is probably not what you are looking for. Another option is to monitor the log file (say, every second, or on Linux with inotify), e.g. with a script like:

while :; do
  if [[ $(stat -c %s "$FILE") -gt 10000 ]]; then
    : # rotate the log here
  fi
  sleep 1
done
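One way to fill in the "rotate the log" step without classic rotation is to keep only the newest bytes with tail -c, which is about the closest shell-level approximation of a circular buffer. A minimal sketch; the file name, the limit, and the demo data are invented for the example, and stat -c %s is the GNU coreutils form:

```shell
FILE=./app.log   # example path, not from the original script
LIMIT=10000      # keep at most this many of the newest bytes

# demo only: create a log file twice the size of the limit
head -c 20000 /dev/zero | tr '\0' 'x' > "$FILE"

if [ "$(stat -c %s "$FILE")" -gt "$LIMIT" ]; then
  # keep the newest LIMIT bytes and discard everything older
  tail -c "$LIMIT" "$FILE" > "$FILE.tmp" && mv "$FILE.tmp" "$FILE"
fi
```

One caveat: a writer that keeps the file open will continue appending to the old, now-replaced inode after the mv, so this works best when the writer reopens the file for each append (e.g. shell `>>` redirection per message).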

or with inotify:

while :; do
  if inotifywait -e modify "$FILE"; then
    : # check the size and rotate the file
  fi
done

Solution 3:

You can use multilog from djb's daemontools. You pipe your log output into it. Yes, it's log rotation, but each rotation is simply:

ln current $tai64nlocaltimestamp

which, on just about any modern Linux filesystem, is a very fast operation. You can specify how many log files you want and how big each should be; note that multilog caps each file at about 16 MB, so keep 64 files of 16 MB each and you'll have your 1 GB ring buffer.

Note that, because of the automatic rotation, it's one source per multilog instance, but you can work around that by writing a simple wrapper with netcat or by hand.
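As a concrete sketch, the multilog invocation could live in a daemontools log/run script like the following (the service layout and the ./main directory are my assumptions about your setup; multilog's s parameter accepts at most 16777215 bytes per file):

```shell
#!/bin/sh
# daemontools ./log/run sketch:
#   t         prepends a tai64n timestamp to each line
#   s16000000 caps each log file at ~16 MB
#   n64       keeps 64 files, for roughly 1 GB in total
exec multilog t s16000000 n64 ./main
```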

Solution 4:

You could create a FIFO and read from it with a script that inserts each entry into a database. When the counter reaches 1,000, restart the ID number being inserted into the database. This wouldn't enforce a size limit, of course, but you used that only as an example, so I'm assuming this is a theoretical question.
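That idea can be sketched in shell; every name below is invented for the example, the echo stands in for the real database insert, and the demo writer stands in for the application:

```shell
#!/bin/sh
# Read lines from a FIFO and assign each an ID that wraps around at MAX,
# emulating a ring of records.
PIPE=./app.fifo   # example FIFO path
MAX=1000          # wrap the ID counter here
rm -f "$PIPE"
mkfifo "$PIPE"

# demo writer in the background; in real use the application writes here
printf 'first entry\nsecond entry\n' > "$PIPE" &

i=0
while IFS= read -r line; do
  echo "insert id=$i: $line"   # stand-in for the actual DB insert
  i=$(( (i + 1) % MAX ))
done < "$PIPE"
wait
```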

Solution 5:

Interesting question; you don't usually see that as a design. I do have a program that uses a faintly similar technique to record history, but it uses a binary format. The 'log file' has four parts, all laid out in a machine-neutral format:

  1. A header containing the magic number and the (maximum) number of entries in the used list and free list, the sequence number for the next history entry, the actual number of entries in the used list, the actual number of entries in the free list, and the length of the file (each of which is 4 bytes).
  2. The used list, each entry giving an offset and a length (4 bytes for each part of each entry).
  3. The free list, each entry similar to the used list entry.
  4. The main data, each history record consisting of a contiguous set of bytes terminated by a null terminator byte.

When a new record is allocated, if there is space in the free list, then it overwrites an entry there (not necessarily using it all - in which case the fragment remains on the free list). When there is no space in the free list, then new space is allocated at the end. When an old record rotates out, its space is moved to the free list, and coalesced with any adjacent free records. It is designed to handle SQL statements so the records can be spread over many lines. This code works on a specified number of records. It does not limit the size of the file per se (though it would not be hard to make it do so).

The main history code is in two files, history.c and history.h, available from the source for the program SQLCMD (my version, not Microsoft's; mine was in existence a decade or more before Microsoft's), which can be downloaded from the International Informix User Group's Software Archive. There is also a history file dump program (histdump.c) and a history tester (histtest.ec - it claims to be ESQL/C, but is itself really C code; one of the support functions it calls uses some Informix ESQL/C library functions). Contact me if you want to experiment without using Informix ESQL/C - see my profile. There are some trivial changes to make for histtest to compile outside its design milieu, plus you need a makefile.