Sharing an SSH port tunnel with the local network
I've successfully created an SSH tunnel from a local Linux server to our cloud PostgreSQL server, with this command:
ssh -N -f -L 5431:localhost:xxxx mycloudserver.com
(where xxxx is the remote port)
With this tunnel I can access the remote PostgreSQL database through port 5431, but only from the Linux server itself.
Now I want to "share" this connection with the other PCs on the network, so that I can point an ODBC driver at linux_server:5431 and read the cloud database without installing any SSH software on the clients. Opening port 5431 with iptables didn't work:
iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 192.168.128.5 --dport 5431 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 192.168.128.5 --sport 5431 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
I've already tried
ssh -N -f -L 5431:0.0.0.0:xxxx mycloudserver.com
too. The tunnel builds successfully and works on the server, but I still can't "see" it from the clients.
Solution 1:
You need to specify a bind_address in your command, like:

ssh -N -f -L 0.0.0.0:5431:localhost:xxxx mycloudserver.com

(In -L [bind_address:]port:host:hostport, the 0.0.0.0 you tried earlier sat in the host position, i.e. it named the forwarding destination resolved on the remote side, not the address the local port listens on.) With the bind address in front, the tunnel listens on all interfaces. To verify, use:

netstat -lnp | grep 5431
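To see concretely what the verification shows, here is a self-contained sketch (assumptions: a Linux box with python3 and ss or netstat available; the python3 one-liner stands in for the tunnel's listening socket, so no remote host is needed):

```shell
# Stand-in for the tunnel's listening socket, bound to loopback only --
# this is what -L 5431:localhost:xxxx gives you by default.
python3 -c 'import socket, time
s = socket.socket()
s.bind(("127.0.0.1", 5431))
s.listen()
time.sleep(3)' &

sleep 1
# Snapshot the listening sockets while the stand-in is up.
listeners=$(ss -tln 2>/dev/null || netstat -tln)
wait

# 127.0.0.1:5431 means loopback only: other PCs cannot reach it no matter
# what iptables allows. A 0.0.0.0:5431 line would mean "all interfaces".
echo "$listeners" | grep ':5431'
```

After switching to the bind_address form above, the same check should show 0.0.0.0:5431 instead.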
Solution 2:
This is really the wrong approach. Don't use an SSH tunnel. Instead, set up SSL on the cloud server and make direct SSL connections, using IP-address restrictions if desired.
Or, if you must use a VPN, use one designed for that role rather than kludging ssh into the job.
SSH tunnels TCP over TCP, which interferes with the congestion-control, retransmit and window-scaling algorithms of the encapsulated connection. It's a great utility for ad-hoc work, but I would not suggest it for production multi-user operation or anything that uses significant bandwidth.
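If you go the direct-SSL route suggested above, the IP restriction lives in pg_hba.conf on the cloud server (a sketch only: the database name and the address range are placeholders for your own, and ssl = on must already be set in postgresql.conf):

```
# pg_hba.conf on the cloud server -- require TLS and restrict by source network.
# TYPE     DATABASE   USER   ADDRESS           METHOD
hostssl    mydb       all    203.0.113.0/24    scram-sha-256
# Reject non-SSL connection attempts from anywhere else:
hostnossl  all        all    0.0.0.0/0         reject
```

scram-sha-256 needs PostgreSQL 10 or later; on older servers use md5.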
If you really must, then you can use a bind address of * in your -L option, e.g.

-L *:5431:localhost:xxxx

or set GatewayPorts yes in your .ssh/config for that host, or pass it via the -o command-line option, like:

-o GatewayPorts=yes
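Putting those options together, either full command below should publish the forwarded port on all interfaces (a sketch: substitute your real remote port for xxxx, and note the quotes that keep the shell from glob-expanding the *):

```shell
# Bind the forwarded port on all interfaces explicitly:
ssh -N -f -L '*:5431:localhost:xxxx' mycloudserver.com

# Or let GatewayPorts widen the default loopback bind for this invocation:
ssh -N -f -o GatewayPorts=yes -L 5431:localhost:xxxx mycloudserver.com
```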