How/why does ssh output to tty when both stdout and stderr are redirected?
I've just noticed that

```
ssh user@host >/tmp/out 2>/tmp/err
```

can write something like

```
Warning: the RSA host key...
...
Are you sure you want to continue connecting (yes/no)?
```

stdout and stderr were both redirected but this still shows up in the tty.
- How is `ssh` doing this?
- Why is `ssh` doing this? Doesn't it violate *nix idioms?
- When I am running `ssh` (or another program with similar behaviour) as a child process with stdin/stdout/stderr connected by pipes, I want the parent process to see the output from the child process. If the child process dodges stdout/stderr like this, how can the parent process capture it?
Solution 1:
How?
> How is `ssh` doing this?

It opens `/dev/tty`. The relevant line from `strace ssh …` is:

```
openat(AT_FDCWD, "/dev/tty", O_RDWR) = 4
```

The file descriptor `4` is then used with `write(2)` and `read(2)`.
(Testbed: OpenSSH_7.9p1 Debian-10+deb10u2).
Why?
> Why is `ssh` doing this? Doesn't it violate *nix idioms?
I'm not sure about "*nix idioms", whatever they are; but POSIX explicitly allows this:

> `/dev/tty` — In each process, a synonym for the controlling terminal associated with the process group of that process, if any. It is useful for programs or shell procedures that wish to be sure of writing messages to or reading data from the terminal **no matter how output has been redirected**. […]

(Emphasis mine.)
Tools that need to interact with the user tend to use `/dev/tty` because it makes sense. Usually when users do this:

```
<local_file_0 ssh user@server tool >local_file_1 2>local_file_2
```

they want it to be as similar as possible to this:

```
<local_file_0 tool >local_file_1 2>local_file_2
```

The only difference should be where the `tool` actually runs. Usually users want `ssh` to be transparent. They don't want it to litter `local_file_1` or `local_file_2`, and they don't want to wonder whether they need to put `yes` or `no` in `local_file_0` in case `ssh` asks. Often one cannot predict whether `ssh` will ask in any particular case.
Note that when you run `ssh user@server tool` there's a shell involved on the remote side (compare this answer of mine). The shell can source some startup scripts that can litter the output. This is a different issue (and a reason the relevant startup scripts should be silent).
Solutions
> When I am running `ssh` (or another program with similar behaviour) as a child process with stdin/stdout/stderr connected by pipes, I want the parent process to see the output from the child process. If the child process dodges stdout/stderr like this, how can the parent process capture it?
As stated above, your wish is rather unusual. This doesn't mean it's weird or totally uncommon; there are use cases where one really wants this. The solution is to provide a pseudo-terminal you can control. The right tool is `expect(1)`. Not only will it provide a tty, it will also allow you to implement some logic. You will be able to detect (and log) `Are you sure you want to continue connecting` and answer `yes` or `no`; or nothing if `ssh` doesn't ask.
If you want to capture the whole output while interacting normally, then consider `script(1)`.
Broader picture
Up to this point we were interested in allocating a tty on the client side, i.e. where `ssh` runs. In general you may want to run a tool that needs `/dev/tty` on the server side. The SSH server is able to allocate a pseudo-terminal or not; the relevant options are `-t` and `-T` (see `man 1 ssh`). E.g. if you do this:

```
ssh user@server 'sudo whatever'
```

then you will most likely see `sudo: no tty present …`. Provide a tty on the remote side and it will work:

```
ssh -t user@server 'sudo whatever'
```
But there's a quirk. Without `-t` the default stdin, stdout and stderr of the remote command are connected to the stdin, stdout and stderr of the local `ssh` process. This means you can tell the remote stdout apart from the remote stderr locally. With `-t` the default stdin, stdout and stderr (and `/dev/tty`) of the remote command all point to the same pseudo-terminal; now stdout, stderr and whatever the remote command writes to its `/dev/tty` get combined into a single stream that the local `ssh` prints to its (local) stdout. You cannot tell them apart locally. This command:

```
ssh -t user@server 'sudo whatever' >local_file_1
```

will write prompt(s) from `sudo` to the file! By using `/dev/tty`, `sudo` itself tries to be transparent when it comes to redirections, but `ssh -t` sabotages this.
In this case it would be useful if `ssh` provided an option to allocate a pseudo-terminal (`/dev/tty`) on the remote side and connect it to `/dev/tty` of the local `ssh`, while still connecting the default remote stdin, stdout and stderr to their local counterparts: four separate channels (one of them, `/dev/tty`, bidirectional).
AFAIK there is no such option (check this question and my answer there: `ssh` with separate stdin, stdout, stderr AND tty). Currently you can have either three unidirectional channels (without `/dev/tty` for the remote process), or what appears as one bidirectional channel (`/dev/tty`) for the remote process plus two unidirectional channels (stdin and stdout of the local `ssh`) for the local user.
Your original command:

```
ssh user@host >/tmp/out 2>/tmp/err
```

does not specify a remote command, so it runs an interactive shell on the remote side and does provide a pseudo-terminal for it, as if you used `-t` (unless there is no local terminal). This is the "one bidirectional channel for the remote process" case. It means that `/tmp/err` can only get stderr from `ssh` itself (e.g. if you used `ssh -v`).
An interactive shell with output not being printed to the (local) terminal cannot be easily used interactively. I hope this was only a minimal example (if not then maybe you need to rethink this).
Anyway, you can see that a situation involving `/dev/tty`, `ssh` and other tools that use `/dev/tty` can get complicated.