Log through a FIFO, then redirect to a file?
I have an application that must log each transaction. Every log message is flushed because we need to have a record of what occurred leading up to a crash. My colleagues and I were curious about how to achieve the performance effects of buffering while guaranteeing that the log message had left the process.
What we came up with is:
- make a FIFO that the application can write to, and
- redirect the contents of that FIFO to a regular file via cat.
That is, what was ordinarily:
app --logfile logfile.txt
is now:
mkfifo logfifo
cat logfifo &> logfile.txt &
app --logfile logfifo
Are there any gotchas to this approach? It worked when we tested it, but we want to make absolutely sure that the messages will find their way to the redirect file even if the original application crashes.
(We don't have the source code to the application, so programming solutions are out of the question. Also, the application won't write to stdout, so piping directly to a different command is out of the question. So syslog is not a possibility.)
Update: I've added a bounty. The accepted answer will not involve logger, for the simple reason that logger is not what I've asked about. As the original question states, I am only looking for gotchas in using a FIFO.
Solution 1:
Note that a FIFO is typically used in programming where the amount written in can surpass the amount read out.
As such, a FIFO won't work entirely as smoothly as you anticipate, but it will solve your main problem while introducing another.
There are three possible caveats (a quick demonstration follows the list):
- Writing to the FIFO blocks indefinitely if nothing is reading the other end when the writer opens it.
- The FIFO has a fixed buffer size of 64 KB; once that buffer fills, further writes block until the reader has caught up.
- The pipe writer will be killed with SIGPIPE if the reader dies or exits.
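The first caveat is easy to see on a test box; this is just an illustration with throwaway names, not part of your setup:
mkfifo testfifo
echo hello > testfifo &    # blocks: the FIFO cannot be opened for writing until a reader appears
cat testfifo               # attaching a reader releases the blocked write and prints "hello"
rm testfifo
# The 64 KB figure is the default pipe buffer size on Linux; once a writer gets
# that far ahead of the reader its writes block in the same way, and a write
# after the reader has gone away raises SIGPIPE instead.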
This means that your problem (emulating buffered I/O on unbuffered writes) is solved: the effective 'limit' on your FIFO becomes the speed of whatever utility is writing the pipe's contents to disk (which presumably uses buffered I/O).
Nevertheless, the writer becomes dependent on your log reader to function. If the reader suddenly stops reading, the writer will block. If the reader suddenly exits (let's say you run out of disk space on your target), the writer will receive SIGPIPE and probably exit.
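One partial mitigation (a rough sketch using the names from your question, not something I have productionized) is to run the reader under a restart loop so a dead cat is replaced quickly; it narrows the SIGPIPE window but does not remove it:
( while true; do
    cat logfifo >> logfile.txt   # if this cat dies, the loop immediately starts another
  done ) &
app --logfile logfifo
Appending (>>) rather than truncating means a restarted reader does not clobber what was already written.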
Another point to mention: if the server panics and the kernel stops responding, you may lose up to 64 KB of data that was still sitting in that buffer.
Another way to address this would be to write logs to tmpfs (/dev/shm on Linux) and tail the output to a fixed location on disk. The limits on memory allocation are much less restrictive that way (not 64 KB, typically 2 GB!), but it might not work for you if the writer has no dynamic way to reopen log files (you would have to clean out the logs from tmpfs periodically). If the server panics with this method, you could lose a LOT more data.
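A minimal sketch of that tmpfs variant, with made-up paths:
tail -F /dev/shm/app.log >> /var/log/app.log &   # -F waits for the file to appear and keeps following it if it is recreated
app --logfile /dev/shm/app.log
# the copy in /dev/shm still has to be cleaned out periodically, which is the
# catch mentioned above if the app cannot reopen its log file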
Solution 2:
mkfifo logfifo
cat logfifo &> logfile.txt &
app --logfile logfifo
What happens when your cat logfifo process dies, someone kills it by accident, or someone accidentally points it to the wrong location?
My experience is that the app will quickly block and hang. I've tried this with Tomcat, Apache, and a few small home-built applications, and ran into the same problem. I never investigated very far, because logger or simple I/O redirection did what I wanted. I usually don't need the logging completeness that you are after. And as you say, you don't want logger.
There is some discussion on this problem at Linux non-blocking fifo (on demand logging).
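If you want to see that failure mode for yourself before committing to the approach, something like this (same illustrative names as in your question) reproduces it:
mkfifo logfifo
cat logfifo > logfile.txt &
READER=$!
app --logfile logfifo &    # the app opens the FIFO and starts logging
kill "$READER"             # simulate the reader dying
# the app's next flushed write takes SIGPIPE, or blocks indefinitely if the
# app reopens the FIFO and no new reader shows up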
Solution 3:
Your options are fairly limited by the app, but what you've tested will work.
We do something similar with Varnish and varnishncsa to get the logs somewhere useful to us. We have a FIFO and just read from it with syslog-ng, sending it where we need. We handle about 50 GB and haven't come across a problem with this approach so far.
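Roughly, the pattern looks like this; the paths, syslog-ng statement names, and varnishncsa flags below are a guess at a typical setup, not our exact configuration:
mkfifo /var/log/varnish/ncsa.fifo
# syslog-ng reads the FIFO through its pipe() source driver, e.g.:
#   source s_varnish      { pipe("/var/log/varnish/ncsa.fifo"); };
#   destination d_varnish { file("/var/log/varnish/ncsa.log"); };
#   log { source(s_varnish); destination(d_varnish); };
varnishncsa -a -w /var/log/varnish/ncsa.fifo -D   # varnishncsa writes its access log into the FIFO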
Solution 4:
The environment is CentOS and the application writes to a file...
Instead of sending to a regular file, I'd send the output to syslog and make sure that syslog messages are being sent to a central server as well as locally.
You should be able to use a shell script like this:
logger -p daemon.notice -t app < fifo
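One thing to watch (my addition, not part of the original suggestion): if the application ever closes and reopens the FIFO, logger will see end-of-file and exit, so you may want to run it in a loop:
while true; do
  logger -p daemon.notice -t app < fifo   # restart logger whenever it hits EOF on the FIFO
done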
You could also take input (to logger) from cat or from tail -f into a pipe:
tail -f fifo | logger ...
cat fifo | logger ...
The only problem is that it doesn't differentiate based on the importance of the log message (everything is logged as NOTICE), but at least it gets logged and is also sent off-host to a central server.
Configuring syslog depends on which syslog server you are using. There are rsyslog and syslog-ng (both very capable).
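For example, with rsyslog, forwarding everything logged at the daemon facility to a central host is a single rule; the host name and port here are placeholders:
echo 'daemon.* @@loghost.example.com:514' > /etc/rsyslog.d/app-forward.conf   # @@ = TCP, a single @ = UDP
service rsyslog restart   # or systemctl restart rsyslog on newer CentOS releases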
EDIT: Revised after getting more information from poster.