Redirect stdout while a process is running -- What is that process sending to /dev/null

You can do it using strace.

Using strace you can spy on what is being written to file descriptor 1, which is stdout. Here is an example:

strace  -p $pid_of_process_you_want_to_see_stdout_of 2>&1 | \
    sed -re 's%^write\(1,[[:blank:]](.*),[[:blank:]]*[0-9]+\)[[:blank:]]*=[[:blank:]]*[0-9]+%\1%g' 

That gives us the output; the sed expression just tidies it up. You may want to improve the filter, but that would be another question.
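
If truncation of the written strings is an issue, strace's standard -s option raises the per-string limit and -e trace=write restricts the trace to write calls. A rough variant along those lines (the grep is just a cruder stand-in for the sed filter above):

strace -p $pid_of_process_you_want_to_see_stdout_of -e trace=write -s 1024 2>&1 | \
    grep '^write(1,'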

WARNING: This solution has some limitations; see the comments below. It will not always work, and your mileage may vary.

Test:

Put this program (below) in a file named hello, and chmod +x hello:

#!/bin/bash

while true
do
    echo -en  "hello\nworld\n"
done

Put this one in hello1 and chmod +x hello1:

#!/bin/bash
dir=$(dirname "$0")
"$dir"/hello >/dev/null

And this one in hello2 and chmod +x hello2:

#!/bin/bash
dir=$(dirname "$0")
"$dir"/hello1 >/dev/null

Then run ./hello2 >/dev/null. Next, find the PID of the hello process, set pid_of_process_you_want_to_see_stdout_of=xyz (where xyz is that PID), and run the strace line at the top.

How it works: when hello is run, bash forks, redirects fd 1 to /dev/null, then execs hello. hello sends output to fd 1 using the system call write(1, …). The kernel receives the write(1, …) call, sees that fd 1 is connected to /dev/null, and discards the data.

We then run strace (system-call trace) on hello and see that it is calling write(1, "hello\nworld\n", 12). The rest of the line at the top just selects the appropriate lines of the trace.
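
You can also confirm where fd 1 of the running process currently points by looking at /proc (Linux-specific; reuse the PID of hello):

ls -l /proc/$pid_of_process_you_want_to_see_stdout_of/fd/1
# ... /fd/1 -> /dev/null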


No. You'll have to restart the command.

Stdio handles are inherited from parent to child process. You've given the child a handle to /dev/null, and it's free to do with it whatever it likes, including dup()'ing it or passing it along to its own children. There's no easy way to reach into the OS and change what another running process's handles point to.

Arguably, you could use a debugger on the child and start zapping its state, overwriting any locations where it has stored a copy of the current handle value, or trace its calls to the kernel and monitor any I/O. That's asking a lot of most users, but it can work if it's a single child process that doesn't do anything funny with its I/O.
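
For that single well-behaved child, the debugger route with gdb looks roughly like this: close its fd 1 and reopen it on a file you can read. The PID and the output path are placeholders, and the (int) casts are only there in case libc has no debug info for the return types:

gdb -p "$pid" -batch \
    -ex 'call (int)close(1)' \
    -ex 'call (int)creat("/tmp/child-stdout.log", 0600)' \
    -ex 'detach'

creat() reuses the lowest free descriptor, which is 1 because it was just closed, so the child's stdout now lands in the log file.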

But even that fails in the general case, e.g., a script that creates pipelines and so on, duping handles and creating lots of its own children that come and go. This is why you're pretty much stuck with starting over (and perhaps redirecting to a file you can delete later, even if you don't want to watch it now).


I was looking for the answer to this question for quite a long time. There are mainly two solutions available:

  1. The strace option, as you stated here;
  2. Getting the output using gdb.

In my case neither of them was satisfactory, because the first truncates the output (and I couldn't set it longer). The second was out of the question, since my platform doesn't have gdb installed - it's an embedded device.

Collecting some partial information on the Internet (I didn't create it, just put the pieces together), I reached a solution using named pipes (FIFOs). When the process is run, its output is directed to the named pipe, and if no one wants to see it, a dumb listener (tail -f >> /dev/null) is attached to it to empty the buffer. When someone wants to get the output, the tail process is killed (otherwise the output alternates between readers) and I cat the pipe. When listening finishes, another tail is started.
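
Stripped of the housekeeping the full script below adds, the core of the idea is roughly this (names and paths are placeholders):

# create the FIFO and keep it open read-write, so writers never block waiting for a reader
mkfifo /tmp/myproc.fifo
exec 5<>/tmp/myproc.fifo

# run the process with its output going to the FIFO
./my-long-running-process >&5 2>&1 &

# "dumb listener": drain the FIFO while nobody is watching
tail -f /tmp/myproc.fifo >/dev/null &
drain_pid=$!

# later, to actually watch the output: stop the drain (one reader at a time) and read the pipe
kill "$drain_pid"
cat /tmp/myproc.fifo      # Ctrl-C when done, then restart the tail -f drain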

So my problem was to start a process, exit the ssh shell, then log in again and be able to get the output. This is doable now with the following commands:

#start the process in the first shell
./runner.sh start "<process-name-with-parameters>"&
#exit the shell
exit

#start listening in the other shell
./runner.sh listen "<process-name-params-not-required>"
#
#here comes the output
#
^C

#listening finished. If needed, the process may be terminated - the script ensures the cleanup
./runner.sh stop "<process-name-params-not-required>"

The script which accomplishes that is attached below. I'm aware that it's not a perfect solution. Please share your thoughts; maybe it will be helpful.

#!/bin/sh

## trapping functions
trap_with_arg() {
    func="$1" ; shift
    for sig ; do
        trap "$func $sig" "$sig"
    done
}

proc_pipe_name() {
    local proc=$1;
    local pName=/tmp/kfifo_$(basename ${proc%%\ *});
    echo $pName;
}

listener_cmd="tail -f";
func_start_dummy_pipe_listener() {
    echo "Starting dummy reader";
    $listener_cmd $pipeName >> /dev/null&
}

func_stop_dummy_pipe_listener() {
    tailPid=$(func_get_proc_pids "$listener_cmd $pipeName");
    for pid in $tailPid; do
        echo "Killing proc: $pid";
        kill $pid;
    done;
}

func_on_stop() {
    echo "Signal $1 trapped. Stopping command and cleaning up";
    if [ -p "$pipeName" ]; then
        echo "$pipeName existed, deleting it";
        rm $pipeName;
    fi;

    echo "Cleaning done!";
}

func_start_proc() {
    echo "Something here"
    if [ -p $pipeName ]; then
        echo "Pipe $pipeName exists, delete it..";
        rm $pipeName;
    fi;
    mkfifo $pipeName;

    echo "Trapping INT TERM & EXIT";
    #trap exit to do some cleanup
    trap_with_arg func_on_stop INT TERM EXIT

    echo "Starting listener";
    #start pipe reader cleaning the pipe
    func_start_dummy_pipe_listener;

    echo "Process about to be started. Streaming to $pipeName";
    #opening the FIFO read-write keeps it open even with no readers, so the process doesn't block on the pipe
    exec 5<>$pipeName
    $1 >&5 2>&1
    echo "Process done";
}

func_get_proc_pids() {
    # print the PIDs (first column of ps, space separated) of processes
    # whose command line matches $1
    ps -A -opid -ocomm -oargs | grep "$1" | grep -v grep | \
    while read -r pid comm args; do
        printf '%s ' "$pid";
    done;
}

func_stop_proc() {
    tailPid=$(func_get_proc_pids "$this_name start $command");
    if [ "_" == "_$tailPid" ]; then
        echo "No process stopped. The command has to be exactly the same command (parameters may be ommited) as when started.";
    else
        for pid in $tailPid; do
            echo "Killing pid $pid";
            kill $pid;
        done;
    fi;
}

func_stop_listening_to_proc() {
    echo "Stopped listening to the process due to the $1 signal";
    if [ "$1" == "EXIT" ]; then
        if [ -p "$pipeName" ]; then
            echo "*Restarting dummy listener"; 
            func_start_dummy_pipe_listener;
        else 
            echo "*No pipe $pipeName existed";
        fi;
    fi;
}

func_listen_to_proc() {
    #kill `tail -f $pipeName >> /dev/null`
    func_stop_dummy_pipe_listener;

    if [ ! -p "$pipeName" ]; then
        echo "Cannot listen to $pipeName, exiting...";
        return 1;
    fi;

    #trap the kill signal to start another tail... process
    trap_with_arg func_stop_listening_to_proc INT TERM EXIT
    cat $pipeName;
    #NOTE if there is just an end of the stream in a pipe, we have to do nothing 

}

#trap_with_arg func_trap INT TERM EXIT

print_usage() {
    echo "Usage $this_name [start|listen|stop] \"<command-line>\"";
}

######################################
############# Main entry #############
######################################

this_name=$0;
option=$1;
command="$2";
pipeName=$(proc_pipe_name "$command");


if [ $# -ne 2 ]; then
    print_usage;
    exit 1;
fi;

case $option in 
start)
    echo "Starting ${command}";
    func_start_proc "$command";
    ;;
listen)
    echo "Listening to ${2}";
    func_listen_to_proc "$command";
    ;;
stop)
    echo "Stopping ${2}";
    func_stop_proc "$command";
    ;;
*)
    print_usage;
    exit 1;
esac;

You can do it using the reredirect program:

reredirect -m <file> <PID>

You can restore initial output of your process later using something like:

reredirect -N -O <M> -E <N> <PID>

(<M> and <N> are provided by the previous invocation of reredirect).

The reredirect README also explains how to redirect to another command, or how to redirect only stdout or stderr.