Easy way to limit file size (stdout) on a shell script level?
Ok, this is a very practical use case from my point of view.
Say I have some simple shell one-liner that logs its output to a file. It could be anything, for example tcpdump. Is there a generic, trivial way to make sure the output file won't exceed a given size?
The reasoning behind this is to protect against filling all available space on the mount point by mistake. If I forget about the script, or it yields GBs of data per hour, then this simple debugging task can lead to a potential system crash.
Now, I am aware of the options built into some of the tools (like the -W/-C combination in tcpdump). What I need is a very generic failsafe.
Long story short - when I run a script like:
% this -is --my=very|awesome|script >> /var/tmp/output.log
how do I make sure that output.log never grows bigger than 1 GB?
The script can crash, be killed, or whatever.
The solution I am looking for should be easy and simple, using only tools available in popular distros like Ubuntu/Debian/Fedora — in general, something widely available. A complicated, multiline program is not an option here, regardless of the language/technology.
You can use head for this:
command | head -c 1G > /var/tmp/output.log
It accepts K, M, G and similar suffixes (bytes are the default). Append 'B' to use the base-10 versions. Once head has written the limit it exits, so the producing command typically gets SIGPIPE on its next write and stops as well — which is usually what you want for this kind of failsafe.
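A quick way to convince yourself the cap works — this sketch uses `yes` as a stand-in for an endless producer and an arbitrary `/tmp/output.log` path; the size suffixes are a GNU coreutils feature, so this assumes GNU head (standard on Ubuntu/Debian/Fedora):

```shell
# `yes` writes forever, but head stops the file at exactly 1 MiB
# (1048576 bytes); yes is then killed by SIGPIPE on its next write.
yes | head -c 1M > /tmp/output.log
wc -c < /tmp/output.log   # 1048576
```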