Printing the last column of a line in a file
I have a file that is constantly being written to/updated. I want to find the last line containing a particular word, then print the last column of that line.
The file looks something like this. More A1/B1/C1 lines will be appended to it over time.
A1 123 456
B1 234 567
C1 345 678
A1 098 766
B1 987 6545
C1 876 5434
I tried to use
tail -f file | grep A1 | awk '{print $NF}'
to print the value 766, but nothing is output.
Is there a way to do this?
You don't see anything because of buffering: when grep writes to a pipe, its output is block-buffered and is only flushed when enough lines have accumulated or end of file is reached. tail -f waits for more input, so the pipe to grep never sees end of file and the buffered matches are never flushed.
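If you do want to keep following the file, one workaround (assuming GNU grep, whose --line-buffered flag flushes each matching line as it is written) is:

```shell
# Keep following the file, but force grep to flush every matching
# line immediately (GNU grep's --line-buffered) so awk sees matches
# as they arrive instead of waiting for a full pipe buffer.
# 'file' is the sample file from the question.
tail -f file | grep --line-buffered A1 | awk '{print $NF}'
```

For tools without such a flag, coreutils' stdbuf (e.g. `stdbuf -oL grep A1`) can impose line buffering from the outside.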
If you omit -f from tail, the output is shown immediately:
tail file | grep A1 | awk '{print $NF}'
@EdMorton is right, of course: awk can search for A1 itself, which shortens the pipeline to
tail file | awk '/A1/ {print $NF}'
or, without tail, showing the last column of every line containing A1:
awk '/A1/ {print $NF}' file
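To make this concrete, here is a self-contained run against the sample data from the question (the /tmp path is just for the demo):

```shell
# Recreate the question's sample data (demo path; any location works).
cat > /tmp/sample.txt <<'EOF'
A1 123 456
B1 234 567
C1 345 678
A1 098 766
B1 987 6545
C1 876 5434
EOF

# Print the last column of every line matching A1.
awk '/A1/ {print $NF}' /tmp/sample.txt
# → 456
#   766
```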
As @MitchellTracy's comment points out, tail might miss the record containing A1, in which case you get no output at all. This can be solved by swapping tail and awk: search the whole file first, then show only the last match:
awk '/A1/ {print $NF}' file | tail -n1
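The same last-match-only result can also be had in pure awk, which avoids the second process. A sketch using the question's data (the /tmp path is just for the demo):

```shell
# Demo file with the question's data (demo path).
printf '%s\n' 'A1 123 456' 'B1 234 567' 'C1 345 678' \
              'A1 098 766' 'B1 987 6545' 'C1 876 5434' > /tmp/last.txt

# Pipe version: print all matches, keep only the last line.
awk '/A1/ {print $NF}' /tmp/last.txt | tail -n1
# → 766

# Pure-awk version: remember the last match, print it at end of input.
awk '/A1/ {v=$NF} END {print v}' /tmp/last.txt
# → 766
```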
To print the last column of a line, just use $NF:
awk '{print $NF}'
One way using awk:
tail -f file.txt | awk '/A1/ { print $NF }'
You can do this without awk, with just some pipes:
tac file | grep -m1 A1 | rev | cut -d' ' -f1 | rev
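For example (self-contained demo; tac is GNU coreutils, and `cut -d' '` assumes fields are separated by exactly one space):

```shell
# Demo data (demo path). tac prints the file last line first, so
# grep -m1 stops at the LAST A1 line; rev | cut | rev then slices
# off the final space-separated field.
printf 'A1 123 456\nB1 234 567\nA1 098 766\n' > /tmp/tac_demo.txt
tac /tmp/tac_demo.txt | grep -m1 A1 | rev | cut -d' ' -f1 | rev
# → 766
```

Note that cut with a single-space delimiter breaks if columns are separated by runs of spaces or tabs; awk's $NF handles arbitrary whitespace.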