Is there a way to delete a 100GB file on Linux without thrashing I/O / load?
Solution 1:
It may be faster to zero/truncate the file than to remove it. I also mention this because that's a really large log file, so there must be a tremendous amount of process activity writing to it; unlinking a file that processes still hold open doesn't free the space until they close it, whereas truncating frees it while the writers keep their open file descriptors. If you're not in a position to stop and restart the production services, try:

: > /path/to/logfile.log
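For completeness, a couple of related commands (the path is just the example one from above):

# See which processes still hold the log open before truncating
lsof /path/to/logfile.log

# Equivalent truncation using GNU coreutils
truncate -s 0 /path/to/logfile.log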
Solution 2:
ionice -c3 rm yourfile.log
is your best shot: rm will then belong to the idle I/O class and will only use I/O when no other process needs it. ext3 is not stellar at deleting huge files, and there's not very much you can do about that. Yes, the rm command will slow down your system; how much, and for how long, is anyone's guess, since it depends heavily on the hardware, the kernel version, and the settings the ext3 filesystem was created with.
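If even an idle-class rm stalls the box, one common mitigation is to shrink the file in chunks first, so ext3 frees blocks in small batches instead of one long deallocation. A sketch, assuming GNU coreutils and that nothing is still writing to the file; the path, chunk size, and sleep are illustrative:

f=/path/to/logfile.log
# While more than 1 GiB remains, lop 1 GiB off the end
while [ "$(stat -c %s "$f")" -gt $((1024*1024*1024)) ]; do
    truncate -s -1G "$f"
    sleep 1               # let other I/O get through between steps
done
ionice -c3 rm "$f"        # the final delete is now small and cheap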
For log servers and other servers with large files I tend to use XFS, as it is much faster at handling (and deleting) huge files.
Solution 3:
An alternative is to use separate disks and cycle between them: when you're done logging to one disk, swap over to the other, and then you can use as much I/O as you like to remove old files without burdening the active disk. A rough sketch is below.
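A hypothetical rotation sketch: two log volumes mounted at /log-a and /log-b, with the services writing through a symlink that gets flipped. The mount points and the /var/log/app symlink are assumptions for illustration, not anything from the answer:

ACTIVE=/log-a
IDLE=/log-b

# Point the application's log directory at the idle disk
ln -sfn "$IDLE" /var/log/app

# ... signal or restart the logging services so they reopen their files ...

# The old disk is now inactive; clean it up with idle-priority I/O
ionice -c3 rm -rf "${ACTIVE:?}"/*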