Unix shell script to truncate a large file

I am trying to write a Unix script that will truncate/empty a file that is continuously being written to (and held open) by an application, once it reaches, say, 3 GB in size. I know that the command below would do it:

cp /dev/null [filename]

But I am going to run this automatically as a cron job in a production environment - just posting here to see if you have run into any issues while doing something similar.
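
Roughly, the cron script I have in mind looks like this (the path and the 3 GB limit are placeholders, and stat -c %s is the GNU coreutils form):

#!/bin/sh
# Truncate the log once it grows past ~3 GB.
# /var/log/myapp/app.log is a placeholder path.
LOGFILE=/var/log/myapp/app.log
LIMIT=$((3 * 1024 * 1024 * 1024))   # 3 GB in bytes

size=$(stat -c %s "$LOGFILE" 2>/dev/null || echo 0)
if [ "$size" -gt "$LIMIT" ]; then
    cp /dev/null "$LOGFILE"
fi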


Just to add another answer,

: > filename

: is a no-op builtin in bash (and in any POSIX-compliant shell), so this essentially just opens the file for writing (which of course truncates it) and then immediately closes it.

EDIT: as shellter commented, you don't actually need a command to go along with the redirection:

$ echo foo > foo.txt
$ cat foo.txt
foo
$ > foo.txt
$ cat foo.txt
$

A simple redirection all by itself will clear the file (at least in bash and plain POSIX sh; zsh handles a bare redirection differently by default).
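
For the cron use case in the question, either form drops straight into a crontab entry (the schedule and path here are just placeholders):

0 * * * * : > /var/log/myapp/app.log

Note that this truncates unconditionally on the schedule; a size check like the one in the question would have to live in a small script instead.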


I've used the following command on Debian:

truncate -s 0 filename

That seems reasonable to me. Unix, of course, would let you do this about 50 different ways. For example,

echo -n "" >filename
cat /dev/null >filename
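
And if you only want to clear the file once it passes the 3 GB mark from the question, GNU find can handle the size test for you (the path is a placeholder):

find /var/log/myapp -name app.log -size +3G -exec truncate -s 0 {} +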

trunc filename

works on the AIX flavor of UNIX.