Solution 1:

tail -9999f will do something close to what you want. Add more 9s if your file is bigger.

Problems:

  1. Binary files may not have newline characters. tail -f will wait for a newline before printing anything out.
  2. The version of tail on Solaris (you didn't mention which Solaris but it probably doesn't matter) probably doesn't support that option. It may support tail -n 9999 -f. You may have to acquire the GNU version of tail.
  3. Because the file is constantly growing, there is a race condition between finding out how big it is and starting the tail process. You could miss the start of the file if you don't ask it to get enough lines.
  4. tail won't know when you have really finished writing to the file, so your gunzip process will never finish either. I'm not sure exactly what happens when you press Ctrl-C to end the tail process, but it's likely that gunzip will clean up after itself and remove the partial file it was working on.
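On GNU tail, the obsolete `-9999f` spelling and the standard `-n 9999 -f` form should be interchangeable. A minimal sketch checking the non-following case, assuming GNU tail still accepts the old `-NUM` syntax (sample.txt is a throwaway file made for the demo):

```shell
#!/bin/sh
# Compare the obsolete "tail -NUM" spelling with "tail -n NUM" on GNU tail.
seq 1 20 > sample.txt
tail -5 sample.txt > old_style.out      # obsolete spelling
tail -n 5 sample.txt > new_style.out    # POSIX spelling
cmp old_style.out new_style.out && echo same
```

On a strictly POSIX tail (such as the Solaris one), only the `-n 5` form is guaranteed to work.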

My suggestion would be to start your original program and pipe its output to gunzip like this:

./my_program | gunzip > new_file.txt

That way, gunzip will wait if my_program is going slow, but will still finish when my_program exits and thereby signals the true end of the data.
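A runnable sketch of that pipeline, with a shell function standing in for my_program (the function and file names are placeholders; here it just emits some gzipped text on stdout):

```shell
#!/bin/sh
# Fake producer writing gzipped data to stdout, decompressed on the fly.
set -e
fake_program() { printf 'hello\nworld\n' | gzip; }
fake_program | gunzip > new_file.txt
cat new_file.txt
```

Because gunzip reads from the pipe, it blocks while the producer is slow and exits cleanly once the producer closes its end of the pipe.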

You may need to rewrite your program to write to STDOUT rather than directly to a file.

Edit:

After a look at the man page, three of the issues above can be resolved. Using the -c <bytes> option instead of -n <lines> mitigates problem 1. Using -n +0 or -c +0 mitigates problem 3. Using --pid=<PID> makes tail terminate when the original program (running as <PID>) terminates, which mitigates problem 4.
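A sketch combining those man-page fixes, assuming GNU tail (I use -c +1, the unambiguous "from the first byte" form, and -s to shorten the poll interval; growing.log and copy.log are throwaway names):

```shell
#!/bin/sh
# -c +1 reads from the first byte, -f follows growth, and --pid makes
# tail exit once the writer process dies.
set -e
: > growing.log
( for i in 1 2 3; do printf 'chunk %d\n' "$i" >> growing.log; sleep 1; done ) &
writer=$!
tail -c +1 -f -s 0.2 --pid="$writer" growing.log > copy.log
cmp growing.log copy.log && echo identical
```

When the writer exits, tail reads any remaining bytes and then terminates on its own, so no Ctrl-C is needed.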

Solution 2:

In Linux you can use tail -f -n +0 /path/filename to see it. While -n normally specifies how many lines from the end of the file to print, when passed +<n> it instead starts output at the nth line from the beginning of the file.

From tail --help:

-n, --lines=K            output the last K lines, instead of the last 10;
                         or use -n +K to output lines starting with the Kth
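A quick illustration of the +K form without -f (demo.txt is a scratch file made for the example):

```shell
#!/bin/sh
# -n +K starts printing at line K counted from the top of the file.
printf 'one\ntwo\nthree\n' > demo.txt
tail -n +2 demo.txt
```

This prints everything from line 2 onward, i.e. "two" and "three"; with +1 (or +0 on GNU tail) you get the whole file.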