How to extract text from a large file, starting at the first occurrence of a string?

I have a large log file I want to review. All the bad stuff starts at a certain occurrence of an error string. I want to look at the X lines leading up to that point and see what might have caused the error. I can't open the file in my favourite text editor because it exhausts all the RAM on the machine.

I thought I might be able to find the line it occurs on and then use another utility to extract lines X through Y. Is this possible?


Solution 1:

You can just use grep with the -A and/or -B options. The -A switch prints X lines after each matching line, and the -B switch prints X lines before it; since you want to see what led up to the error, -B is probably the one you want. You would do something like this:

grep -A10 -B2 "string to find" /path/and/file.tofind

to print each match of "string to find" together with the 10 lines after it and the 2 lines before it.
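
Note that grep prints this context around every occurrence of the string, not just the first. If your grep is GNU grep (an assumption; the -m option is not available in every implementation), you can tell it to stop after the first match:

grep -m1 -A10 -B2 "string to find" /path/and/file.tofind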

Alternatively (though it's probably a much worse solution), you could simply use head and tail to slice out the part of the file you want, but this assumes you already know the line numbers. For example, if you have a long file and want to read lines 500-510, you might try this:

head -n 510 /etc/file/to/search | tail -n 11

This first extracts the first 510 lines of the file and then prints the last 11 of those, i.e. lines 500 through 510.
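
If you don't know the line numbers in advance, you can combine the two ideas: grep -n prints the line number of each match, and sed -n 'X,Yp' prints an arbitrary line range while streaming the file rather than loading it into RAM. A rough sketch, assuming GNU grep for -m1, with the pattern and the context sizes (2 before, 10 after) as placeholders:

line=$(grep -n -m1 "string to find" /path/and/file.tofind | cut -d: -f1)
sed -n "$((line - 2)),$((line + 10))p" /path/and/file.tofind

The cut -d: -f1 strips everything after the leading line number in grep's NUMBER:text output, and sed then prints from 2 lines before the match to 10 lines after it.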