How efficient is the tac command on large files?

The tac command (cat reversed) can be used to read a file backwards, just like cat reads it from the beginning. I wonder how efficient that is: does it have to read the whole file from the beginning and then reverse some internal buffer once it reaches the end?

I was planning on using it in a frequently called monitoring script which needs to inspect the last n lines of a file that may be several hundred megabytes in size. However, I don't want it to cause heavy I/O load or fill up cache space with otherwise useless data by reading through the whole file over and over again (about once per minute or so).

Can anyone shed some light on the efficiency of that command?


When used correctly, tac is comparable in efficiency to tail: it seeks to the end of the file and reads backwards, 8K blocks at a time.
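
If you want to check that behavior yourself (this assumes a Linux system with GNU tac and strace installed), you can watch the syscalls it makes:

strace -e trace=lseek,read tac yourfile > /dev/null   # the trace goes to stderr

With a regular file argument you should see tac lseek() toward the end of the file and then read() backwards in block-sized chunks, rather than plowing through it from offset zero.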

"Correct use" requires, among other things, giving it a direct, seekable handle on your file:

tac yourfile   # this works fine

...or...

tac <yourfile  # this also works fine

NOT

# DON'T DO THIS:
# with a pipe on stdin, tac cannot seek, so it first copies the whole stream
# to a temporary file and then runs its regular algorithm on that copy.
cat yourfile | tac
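
For the last-n-lines monitoring case from the question, a sketch of what the script might run (the count of 100 is just a placeholder):

tac yourfile | head -n 100   # last 100 lines, newest first

Because tac gets a seekable file argument here, it only reads blocks from the end of the file; once head has its 100 lines and exits, tac dies with SIGPIPE on its next write, so the bulk of the file is never touched. If you want those lines in their original order, tail -n 100 yourfile does the same amount of work directly.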

That said, I'd consider repeatedly running a tool of this nature a very inefficient way to scan logs, compared to using Logstash or a similar tool that can feed into an indexed store and/or generate events for real-time analysis by a CEP engine such as Esper or Apache Flink.