Unable to mount gluster fs on glusterfs client: Transport endpoint is not connected

Solution 1:

I have the same problem.

Have you seen https://bugzilla.redhat.com/show_bug.cgi?id=1659824 ?

Using raw IP addresses seems to be a bad idea in GlusterFS, because the client relies on the remote-host addresses contained in the volume information it fetches from the server. If the client cannot reach those addresses, it cannot use the other Gluster nodes. See https://unix.stackexchange.com/questions/213705/glusterfs-how-to-failover-smartly-if-a-mounted-server-is-failed

So the problem is: the mount reaches node1 and reads the volume information (see /var/log/glusterfs/<volume>.log), which lists the other nodes in remote-host options. The client then tries to connect to those nodes on their private IPs - and that fails (in my case). I assume your public client cannot reach the private IPs; that is the cause of "Transport endpoint is not connected".
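For illustration, the client volfile fetched from the server typically contains one protocol/client translator per brick, and the remote-host option is where the unreachable address shows up (volume name, brick path and addresses below are invented):

```
volume myvol-client-0
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.0.11        # private IP - unreachable from a public client
    option remote-subvolume /bricks/brick1
end-volume
```

You can usually spot these entries near the top of the client log right after mounting.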

Solution A - using hostnames instead of IPs inside the Gluster cluster - would work, because you could create aliases in /etc/hosts on all machines: resolving to the private 192-IPs on the Gluster nodes and to the public IPs on your client. But that means the Gluster volume must be rebuilt to use DNS names. I didn't try switching a running cluster from IP-based to name-based (especially not in production).
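As a sketch of Solution A (all host names and addresses here are invented), the same names would resolve differently depending on where the lookup happens:

```
# /etc/hosts on every Gluster node - names resolve to the private network
192.168.0.11   gluster1
192.168.0.12   gluster2

# /etc/hosts on the client - the same names resolve to the public addresses
203.0.113.11   gluster1
203.0.113.12   gluster2
```

The bricks would then have to be defined as gluster1:/bricks/brick1 etc., so that the remote-host entries in the volfile carry names instead of IPs.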

Solution B in the RH Bugzilla is unclear to me. I don't understand what exactly should go into glusterfs -f $local-volfile $mountpoint - in particular, which option makes the mount ignore the remote-host entries, and what they mean by vol-file. The second answer on the Stack Exchange question above seems to address this; I think it is the answer, but I haven't tested it yet.
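My reading of Solution B, as an untested sketch: fetch the client volfile once, rewrite the unreachable addresses, and mount from the local copy so the client never asks the server for volume information. The paths, volume name and addresses are assumptions, not from the bug report:

```shell
# Copy the client volfile from a Gluster server
# (typical glusterd location; verify on your installation)
scp gluster1:/var/lib/glusterd/vols/myvol/myvol.tcp-fuse.vol /etc/glusterfs/myvol.vol

# Rewrite the private remote-host addresses to ones the client can reach
sed -i 's/192\.168\.0\./203.0.113./g' /etc/glusterfs/myvol.vol

# Mount using the local volfile instead of fetching it from the server
glusterfs -f /etc/glusterfs/myvol.vol /mnt/gluster
```

Note that a locally edited volfile will not pick up later volume changes (add-brick, option changes), so it would need to be refreshed by hand.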

So I think this isn't a bug but a documentation gap: the brick host names/IPs used when building the volume are later used by clients to connect to the other nodes - not the address specified in the mount options.