Grep in a huge log file (>14 GB) only the last x GB?

I guess you could use tail to only output the last 4 GB or so by using the -c switch:

-c, --bytes=[+]NUM
output the last NUM bytes; or use -c +NUM to output starting with byte NUM of each file
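For example, to pull just the last 4 GiB into grep (the path and pattern below are placeholders):

tail -c 4G /var/log/huge.log | grep "ERROR"   # GNU tail: 4G = 4 × 1024³ bytes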

You could probably do something with dd too, by picking a block size and skipping ahead to the offset where you want to start, e.g.

dd if=file bs=1024k skip=12288 | grep something   # skip counts bs-sized blocks: 12288 × 1 MiB = 12 GiB
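If you'd rather say "the last N bytes" than hand-count blocks, you can compute the skip from the file size; a minimal sketch, assuming GNU stat and a placeholder path (newer GNU dd also accepts iflag=skip_bytes, which lets skip be a byte count):

file=/var/log/huge.log                            # placeholder path
size=$(stat -c %s "$file")                        # total size in bytes (GNU stat)
keep=$((2 * 1024 * 1024 * 1024))                  # search only the last 2 GiB
# skip counts 1 MiB blocks; assumes the file is larger than $keep
dd if="$file" bs=1024k skip=$(( (size - keep) / 1048576 )) 2>/dev/null | grep "ERROR"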

I'm just posting this because some of the comments asked for it.

What I ended up using (on a 15 GB file) was the command below. It worked very fast and saved me a ton of time.

tail -c 14G file | grep something
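One caveat: if you use grep -b to get byte offsets out of a pipeline like this, they are relative to tail's output, not to the original file, so you have to add the skipped prefix back yourself:

tail -c 14G file | grep -b something   # reported offset is within the last 14 GiB
# absolute offset in file = (file size - 14 GiB) + reported offset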

I also did a very rudimentary benchmark on the same file. I tested:

grep xxx file
# took forever (> 5 minutes)

dd if=file bs=1 skip=14G | grep xxx
# very fast, < 1 sec

tail -c 14g file | grep xxx
# pretty fast, < 2 sec
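If you want to reproduce a rough comparison like this, bash's time keyword times a whole pipeline; same hypothetical file and pattern as above:

time grep xxx file
time dd if=file bs=1 skip=14G 2>/dev/null | grep xxx
time tail -c 14g file | grep xxx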

The tail version is just a bit shorter to type.

NB: which size suffix (g or G) is accepted differs per command (tested on Ubuntu 15.10).
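If that inconsistency bites you, you can sidestep suffixes entirely and pass a plain byte count, which every tail accepts:

tail -c $((14 * 1024 * 1024 * 1024)) file | grep xxx   # 14 GiB, no suffix needed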