NFS traffic going to interface with different IP address but same subnet
My NFS server has 3 interfaces: 0: 1Gb, 1: 10Gb, 2: 10Gb.
Iface 0 is just used for admin purposes and 1/2 are for two different mounts.
All of the interfaces are on the same subnet (/24).
| server | ----> iface 0/1/2 ----> |private switch| ----> |all clients|
My clients are configured to connect to NFS via interfaces 1 and 2.
$ mount
...
iface1:/home on /home type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.1.1.3,local_lock=none,addr=172.1.1.1)
iface2:/scratch on /scratchlair type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.1.1.3,local_lock=none,addr=172.1.1.2)
...
Where iface 1 and 2 are 172.1.1.{1,2}, respectively. Iface 0 is 172.1.1.5.
My problem is that on the server, nload shows all the traffic going to iface 0, while ifaces 1 and 2 show no traffic at all.
The same is true for all 10 clients connected to the NFS server.
What is causing traffic to go to iface 0, and how can I force the NFS clients' traffic across the configured interfaces?
Solution 1:
To make this work you have to configure separate routing tables and rules for all three interfaces and enable arp_filter. By default, Linux answers ARP requests for any local IP address on any interface ("ARP flux"), so the switch learns iface 0's MAC for all three addresses and delivers everything there.
You might first want to test this in a VM environment, as the following steps can easily interrupt existing connections and there will definitely be some hiccups.
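Before changing anything, you can verify that the server is indeed answering ARP for all three addresses on iface 0 (a diagnostic sketch; the interface names are assumptions and may differ on your system):

```shell
# On the server: watch which interface answers ARP for the NFS addresses.
# If iface 0 (here eth0) replies for 172.1.1.1 and 172.1.1.2 as well,
# that explains why all traffic lands on it.
tcpdump -n -e -i eth0 arp

# From a client: probe a mount address and note the MAC that responds
arping -I eth0 172.1.1.1
```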
First, enable arp_filter:
sysctl net.ipv4.conf.all.arp_filter=1
Note that the all key applies the setting to every existing interface; net.ipv4.conf.default.arp_filter would only affect interfaces created afterwards. To make it permanent, add it to your /etc/sysctl.conf. Depending on your distribution you can also place it in a file under /etc/sysctl.d/.
echo net.ipv4.conf.all.arp_filter = 1 >> /etc/sysctl.conf
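You can read the settings back to confirm they are active (a quick check; per-interface values inherit from these keys, and 1 means the kernel only answers ARP requests on the interface that actually owns the target IP):

```shell
# Both keys should report "= 1" after the change
sysctl net.ipv4.conf.all.arp_filter
sysctl net.ipv4.conf.default.arp_filter
```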
Now let's add the routing tables.
cat << TABLES >> /etc/iproute2/rt_tables
101 rt1
102 rt2
103 rt3
TABLES
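Afterwards the names rt1, rt2 and rt3 can be used in place of numeric table IDs. A quick way to confirm the entries were appended:

```shell
# The three new table entries should appear at the end of the file
tail -3 /etc/iproute2/rt_tables
```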
This assumes you have a /24 netmask and your default gateway is 172.1.1.254. Further, interfaces 0, 1 and 2 are eth0, eth1 and eth2 in the following example, which might not match your setup, so adapt the names accordingly.
ip route add 172.1.1.0/24 dev eth0 src 172.1.1.5 table rt1
ip route add table rt1 default via 172.1.1.254 dev eth0
ip rule add table rt1 from 172.1.1.5
ip route add 172.1.1.0/24 dev eth1 src 172.1.1.1 table rt2
ip route add table rt2 default via 172.1.1.254 dev eth1
ip rule add table rt2 from 172.1.1.1
ip route add 172.1.1.0/24 dev eth2 src 172.1.1.2 table rt3
ip route add table rt3 default via 172.1.1.254 dev eth2
ip rule add table rt3 from 172.1.1.2
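With the rules in place, you can check that the policy database looks right and that replies will actually leave via the intended NIC (a diagnostic sketch; addresses match the example above):

```shell
# One rule per source address should be listed
ip rule show

# Inspect the per-interface tables
ip route show table rt2
ip route show table rt3

# Ask the kernel which interface a packet sourced from 172.1.1.1
# toward a client (e.g. 172.1.1.3) would use; expect "dev eth1"
ip route get 172.1.1.3 from 172.1.1.1
```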
To make these routing tables and rules permanent, you have to add the above steps to your NIC configuration. On a RHEL-based system using the legacy network-scripts, that would look as follows.
Routes and rules for eth0:
cat << ROUTE > /etc/sysconfig/network-scripts/route-eth0
172.1.1.0/24 dev eth0 src 172.1.1.5 table rt1
table rt1 default via 172.1.1.254 dev eth0
ROUTE
cat << RULE > /etc/sysconfig/network-scripts/rule-eth0
table rt1 from 172.1.1.5
RULE
Routes and rules for eth1:
cat << ROUTE > /etc/sysconfig/network-scripts/route-eth1
172.1.1.0/24 dev eth1 src 172.1.1.1 table rt2
table rt2 default via 172.1.1.254 dev eth1
ROUTE
cat << RULE > /etc/sysconfig/network-scripts/rule-eth1
table rt2 from 172.1.1.1
RULE
Routes and rules for eth2:
cat << ROUTE > /etc/sysconfig/network-scripts/route-eth2
172.1.1.0/24 dev eth2 src 172.1.1.2 table rt3
table rt3 default via 172.1.1.254 dev eth2
ROUTE
cat << RULE > /etc/sysconfig/network-scripts/rule-eth2
table rt3 from 172.1.1.2
RULE
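The route- and rule- files are only read when an interface is (re)initialized, so bring the interfaces down and up again for the routes and rules to load (this assumes the legacy network service as used on CentOS 7; NetworkManager-based setups behave differently):

```shell
ifdown eth1 && ifup eth1
ifdown eth2 && ifup eth2
# or simply restart networking as a whole
systemctl restart network
```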
As already outlined in the comments, it might be easier to just use different subnets for the interfaces and assign IP aliases on the clients for the different subnets.
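As a sketch of that alternative (the addressing below is illustrative, not taken from the question): give each NFS interface its own subnet and add a matching alias on each client, so that ordinary destination-based routing picks the right NIC without any policy rules or arp_filter:

```shell
# Server: one subnet per NFS interface
ip addr add 172.1.1.1/24 dev eth1
ip addr add 172.1.2.1/24 dev eth2

# Each client: one alias per subnet on its NFS-facing NIC
ip addr add 172.1.1.3/24 dev eth0
ip addr add 172.1.2.3/24 dev eth0
```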
I have tested this in a VM environment with a CentOS7 NFS server.