Quick unix command to display specific lines in the middle of a file?

I'm trying to debug an issue with a server, and my only log file is a 20GB log file (with no timestamps, even! Why do people use System.out.println() for logging? In production?!)

Using grep, I've found an area of the file that I'd like to take a look at, line 347340107.

Other than doing something like

head -n $((LINENUM + 10)) filename | tail -n 20

... which would require head to read through the first 347 million lines of the log file, is there a quick and easy command that would dump lines 347340100 - 347340200 (for example) to the console?
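Plugging in the line number from above, that head/tail version would expand to something like:

head -n 347340117 filename | tail -n 20    # prints lines 347340098-347340117

It still has to read everything before that window, which is the part I'd like to avoid.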

Update: I totally forgot that grep can print the context around a match ... this works well. Thanks!


Solution 1:

I found two other solutions that work if you know the line number but nothing else (so no grep is possible):

Assuming you need lines 20 to 40,

sed -n '20,40p;41q' file_name

or

awk 'FNR>=20 && FNR<=40' file_name

With sed it is more efficient to quit after printing the last wanted line than to keep reading to the end of the file. This matters especially for large files when the lines you want are near the beginning. That is why the sed command above includes the instruction 41q: it stops processing after line 41, since the example only needs lines 20-40. Change the 41 to your last line of interest, plus one.
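Applied to the range from the question, that would look something like this (347340201 being the last wanted line plus one):

sed -n '347340100,347340200p;347340201q' filename    # print lines 347340100-347340200, then quit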

Solution 2:

# print line number 52
sed -n '52p' # method 1
sed '52!d' # method 2
sed '52q;d' # method 3,  efficient on large files 

Method 3 is the fastest way to display a specific line from a large file: sed stops reading as soon as it has printed line 52, instead of scanning all the way to the end.
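For the single line from the question, method 3 would be something like:

sed '347340107q;d' filename    # print line 347340107, then stop reading the file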

Solution 3:

With GNU grep you can just say:

grep --context=10 ...
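For example, with PATTERN standing in for whatever was grepped for originally, and -n added to show line numbers:

grep -n --context=10 'PATTERN' filename    # -n prefixes each line with its line number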

Solution 4:

No, there isn't: files are not line-addressable.

There is no constant-time way to find the start of line n in a text file. You must stream through the file and count newlines.

Use the simplest/fastest tool for the job. To me, using head makes much more sense than grep, since the latter is far more complicated. I'm not saying "grep is slow" (it really isn't), but I would be surprised if it were faster than head for this case; that would basically be a bug in head.
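A head-based version of the exact range asked about (lines 347340100-347340200, i.e. 101 lines) would be something like:

head -n 347340200 filename | tail -n 101    # lines 347340100-347340200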

Solution 5:

What about:

tail -n +347340107 filename | head -n 100

I didn't test it, but I think that would work.
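Adapted to the exact range from the question, that would be something like this (tail still has to read through the preceding lines to find its starting point):

tail -n +347340100 filename | head -n 101    # lines 347340100-347340200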