How do I see all previous output from a completed terminal command?
I've executed a command in GNOME Terminal that printed more output than I expected. I'd like to read the entire output, but the scrollback stops before reaching the beginning.
I understand that I can change the terminal profile settings to enable unlimited scrolling, or pipe the output to a file, etc. However, all of these common solutions only apply to future output.
How do I view the complete terminal output of a command that has already been executed?
Edit: All right, it can't be done. Thanks, everybody!
Solution 1:
My experience is that the consensus in the comments is correct: once the terminal's buffer has been exceeded, that data is lost (or as good as lost; it could possibly linger in memory that hasn't been overwritten yet), and because of this you can't retroactively increase the buffer size.
This answer sits somewhere between a comment, an answer, and perhaps overkill for your situation. It's more of a suggested approach that may address your situation, particularly the problem of not knowing you need the log until it is too late (non-causal problems are hard), but it is not a direct answer to your question.
In any case, it was too long for a comment. I'm not explicitly listing all the code required to implement this approach, mostly because there are a bunch of implementation decisions that need to be made; if you need more detailed info I'd be glad to provide it.
script is far from pleasant to deal with

First off, the script utility has been suggested as a 'stopgap' to prevent the loss of data without increasing the buffer size (which has security implications when set to unlimited). If there was ever a utility that needed some TLC, script is it. Then again, it was developed by the kernel team. Read into that as you will.

I find script to frequently be more trouble than it's worth (post-processing its output to make it semi-human-readable, etc.), and have instead started using a simplified method to log stdout, stdin, and/or stderr. In some sense this recreates script, but with full control instead of being at the mercy of script's hard-coded logging settings.
This approach can be integrated relatively seamlessly into your shell sessions, and in the rare case that you do overflow the terminal's buffer, you'll have a temporary file with its contents. To keep the logging 'clean', there are some housekeeping steps you'll have to address. Additionally, the same security issue (a log of all terminal output) will exist by default; however, there is a simple method to encrypt the logs.
There are 3 basic steps:
- Configure redirection so that you split stdout (and stderr if desired) to a file and to the terminal. I kept this example simple and am not redirecting stdin or stderr to the file; however, if you understand the stdout redirection example, the rest is trivial.
- Configure .bashrc so this logging starts whenever a shell is opened.
- When a given shell is closing, use the bash built-in trap to call user code that terminates the session logging (you can delete the file, archive it, etc.).
With this approach you will effectively have an invisible safety net that allows you to see the entire history of a given shell session (based on what you redirect; again, to keep things simple I am only showing stdout). When you don't need it, you shouldn't even know it's there.
Details
1. Configure Redirection
The following code snippet creates file descriptor 3 as a duplicate of the current stdout (the terminal), then redirects stdout into tee, which splits the stream between the log file and the terminal (via descriptor 3). You can trivially add stderr to the same command / log file, pipe it to a different file, or leave it as is (unlogged).
# generate a unique log file name without creating the file (tee will create it)
logFile=$(mktemp -u)
# fd 3 := the terminal; stdout is split by tee into the log file and fd 3
exec 3>&1 1> >(tee "$logFile" >&3)
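If you also want stderr in the same log, one variant (a sketch of the "trivially add stderr" remark above, not part of the original snippet) is to duplicate stderr onto the already-redirected stdout:

exec 3>&1 1> >(tee "$logFile" >&3) 2>&1   # stderr now also flows through tee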
You'll find this log file to be far cleaner than the one generated by script; it doesn't store the backspaces, linefeeds, and other special characters that are frequently unwanted.
Note that if you want the logFile encrypted, you can do that fairly easily by adding an additional pipe stage after the tee command through openssl.
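As a rough sketch of that encryption idea (the key file path and cipher choice here are assumptions, not part of the original recipe), you could have tee feed a second process substitution that encrypts as it writes, so the plaintext log is never stored on disk:

# hypothetical: symmetric key material kept in ~/.logkey
exec 3>&1 1> >(tee >(openssl enc -aes-256-cbc -pbkdf2 -pass file:"$HOME/.logkey" -out "$logFile.enc") >&3)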
2. Automate the log generation
In .bashrc, add the same code as above. Each time a new shell is created, a log file specific to that session will be created.
# exported so child processes can see the log location
export logFile=$(mktemp -u)
exec 3>&1 1> >(tee "$logFile" >&3)
echo "Current session is being logged in $logFile"
3. Automatically close out logging when the shell is closing

If you want the log file to be deleted when the session ends, you can use the bash built-in trap to detect that the session is ending and call a function to handle the log file, for example (also in .bashrc):
closeLog () {
    # delete the session log; swap this for an archive/rename step if preferred
    rm -f "$logFile" >/dev/null 2>&1
}
trap closeLog EXIT
Session logging cleanup could be handled in a number of different ways. This approach gets called when the shell is closing by trapping bash's EXIT condition. At that point you could delete the log file, move it, rename it, or any number of things to clean it up. You could also have the log files cleaned up by a cron job rather than via the trap (if this approach is used, I'd suggest a periodic cleanup task if you don't already have one configured for the /tmp directory, since if the bash shell crashes, the EXIT trap will not be triggered).
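As a sketch of that cron idea (the log directory and retention period are assumptions; you would point mktemp at a dedicated directory with something like mktemp -u -p "$HOME/.session-logs" so stale logs are easy to match):

# hypothetical crontab entry: at 03:00, delete session logs older than 7 days
0 3 * * * find "$HOME/.session-logs" -type f -mtime +7 -delete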
Note on handling subshells

An interesting situation develops with nested shells. If a new interactive shell is opened on top of an existing one, a new log will be created, and everything should work fine. When that shell is exited (returning to the parent), logging to the parent's file will resume. If you want to handle this more cleanly, perhaps even maintaining a common log for nested shells (interactive or otherwise), you will need to detect (in .bashrc) that you are in a nested shell and redirect to the parent's log file rather than creating a new one. You will also need to check whether you are in a nested shell so that your trap call doesn't delete the parent's log file on exit. You can get the nesting level from the bash environment variable SHLVL, which stores the 'depth' of your shell stack.
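A minimal sketch of that idea, using the inherited logFile variable as the nesting test rather than SHLVL (an assumption on my part; a SHLVL comparison would work similarly):

if [ -z "$logFile" ]; then
    # top-level shell: create the log, start the tee, and own cleanup
    export logFile=$(mktemp -u)
    exec 3>&1 1> >(tee "$logFile" >&3)
    trap closeLog EXIT
fi
# nested shells inherit both logFile and the redirected stdout, so their
# output already flows into the parent's log; no new exec or trap needed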
Note on keeping your log 'clean'

If you do redirect stdin to the log file, you will end up with many of the same unwanted artifacts the script utility generates. This can be addressed by adding a filter stage (e.g. sed/grep) between the redirection and the file: simply create a regex that removes anything you don't want logged. Fully cleaning it up would require some fairly in-depth processing (perhaps buffering each new line prior to writing it to the file, cleaning it up, then writing it); otherwise it will be difficult to know whether a backspace is 'garbage' or intended.
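As a rough sketch of such a filter stage (GNU sed is assumed for the \xHH escapes and the -u flag; what counts as 'garbage' depends on what you log), you could strip backspaces and carriage returns before the stream reaches the file:

# -u keeps sed line-unbuffered so log writes aren't delayed
exec 3>&1 1> >(tee >(sed -u 's/\x08//g; s/\x0d//g' > "$logFile") >&3)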