Linux Kernel not passing through multicast UDP packets
Recently I set up a new Ubuntu Server 10.04 box and noticed my UDP server is no longer able to see any multicast data sent to the interface, even after joining the multicast group. I have the exact same setup on two other Ubuntu 8.04.4 LTS machines and there is no problem receiving data after joining the same multicast group.
The ethernet card is a Broadcom netXtreme II BCM5709 and the driver used is:
b $ ethtool -i eth1
driver: bnx2
version: 2.0.2
firmware-version: 5.0.11 NCSI 2.0.5
bus-info: 0000:01:00.1
I'm using smcroute to manage my multicast registrations.
b$ smcroute -d
b$ smcroute -j eth1 233.37.54.71
After joining the group ip maddr shows the newly added registration.
b$ ip maddr
1: lo
inet 224.0.0.1
inet6 ff02::1
2: eth0
link 33:33:ff:40:c6:ad
link 01:00:5e:00:00:01
link 33:33:00:00:00:01
inet 224.0.0.1
inet6 ff02::1:ff40:c6ad
inet6 ff02::1
3: eth1
link 01:00:5e:25:36:47
link 01:00:5e:25:36:3e
link 01:00:5e:25:36:3d
link 33:33:ff:40:c6:af
link 01:00:5e:00:00:01
link 33:33:00:00:00:01
inet 233.37.54.71 <------- McastGroup.
inet 224.0.0.1
inet6 ff02::1:ff40:c6af
inet6 ff02::1
So far so good, I can see that I'm receiving data for this multicast group.
b$ sudo tcpdump -i eth1 -s 65534 host 233.37.54.71
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65534 bytes
09:30:09.924337 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
09:30:09.947547 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
09:30:10.108378 IP 192.164.1.120.58866 > 233.37.54.71.15574: UDP, length 268
09:30:10.196841 IP 192.164.1.120.58848 > 233.37.54.71.15572: UDP, length 212
...
I can also confirm that the interface is receiving mcast packets.
b $ ethtool -S eth1 | grep mcast_pack
rx_mcast_packets: 103998
tx_mcast_packets: 33
Now here's the problem. When I try to capture the traffic using a simple Ruby UDP server I receive zero data! Here's a simple server that reads data sent on port 15572 and prints the first two characters. This works on the two 8.04.4 Ubuntu servers, but not the 10.04 server.
require 'socket'
s = UDPSocket.new
s.bind("", 15572)
5.times do
text, sender = s.recvfrom(2)
puts text
end
If I send a UDP packet crafted in ruby to localhost, the server receives it and prints out the first two characters. So I know that the server above is working correctly.
irb(main):001:0> require 'socket'
=> true
irb(main):002:0> s = UDPSocket.new
=> #<UDPSocket:0x7f3ccd6615f0>
irb(main):003:0> s.send("I2 XXX", 0, 'localhost', 15572)
When I check the protocol statistics I see that InMcastPkts is not increasing, while the other 8.04 servers, on the same network, received a few thousand packets in the same 10 seconds.
b $ netstat -sgu ; sleep 10 ; netstat -sgu
IcmpMsg:
InType3: 11
OutType3: 11
Udp:
446 packets received
4 packets to unknown port received.
0 packet receive errors
461 packets sent
UdpLite:
IpExt:
InMcastPkts: 4654 <--------- Same as below
OutMcastPkts: 3426
InBcastPkts: 9854
InOctets: -1691733021
OutOctets: 51187936
InMcastOctets: 145207
OutMcastOctets: 109680
InBcastOctets: 1246341
IcmpMsg:
InType3: 11
OutType3: 11
Udp:
446 packets received
4 packets to unknown port received.
0 packet receive errors
461 packets sent
UdpLite:
IpExt:
InMcastPkts: 4656 <-------------- Same as above
OutMcastPkts: 3427
InBcastPkts: 9854
InOctets: -1690886265
OutOctets: 51188788
InMcastOctets: 145267
OutMcastOctets: 109712
InBcastOctets: 1246341
If I try forcing the interface into promiscuous mode, nothing changes.
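For reference, promiscuous mode can be toggled along these lines (eth1 being the interface above):

sudo ip link set eth1 promisc on    # enable promiscuous mode
sudo ip link set eth1 promisc off   # and turn it back off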
At this point I'm stuck. I've confirmed the kernel config has multicast enabled. Perhaps there are other config options I should be checking?
b $ grep CONFIG_IP_MULTICAST /boot/config-2.6.32-23-server
CONFIG_IP_MULTICAST=y
Any thoughts on where to go from here?
Solution 1:
In our instance, our problem was solved by a sysctl parameter, a different one from Maciej's.
Please note that I do not speak for the OP (buecking); I came across this post because the problem is related in its basic detail (no multicast traffic reaching userland).
We have an application that reads data sent to four multicast addresses, and a unique port per multicast address, from an appliance that is (usually) connected directly to an interface on the receiving server.
We were attempting to deploy this software at a customer site when it mysteriously failed for no apparent reason. Attempts at debugging led us to inspect every system call, and ultimately they all told us the same thing:
Our software asks for data, and the OS never provides any.
The multicast packet counter incremented, tcpdump showed the traffic reaching the box/specific interface, yet we couldn't do anything with it. SELinux was disabled, iptables was running but had no rules in any of the tables.
Stumped, we were.
In randomly poking around, we started thinking about the kernel parameters that sysctl handles, but none of the documented features seemed particularly relevant, and the ones that did relate to multicast traffic were already enabled. Oh, and ifconfig did list "MULTICAST" in the feature line (up, broadcast, running, multicast). Out of curiosity we looked at /etc/sysctl.conf. Lo and behold, this customer's base image had a couple of extra lines added to it at the bottom.
In our case, the customer had set net.ipv4.conf.all.rp_filter = 1. rp_filter is the reverse path filter, which (as I understand it) rejects all traffic from source addresses that could not plausibly have reached this box on the receiving interface, e.g. traffic hopping in from another subnet, the thought being that the source IP is being spoofed.
Well, this server was on a 192.168.1/24 subnet and the appliance's source IP address for the multicast traffic was somewhere in the 10.* network. Thus, the filter was preventing the server from doing anything meaningful with the traffic.
A couple of tweaks approved by the customer, net.ipv4.conf.eth0.rp_filter = 1 and net.ipv4.conf.eth1.rp_filter = 0, and we were running happily.
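For anyone checking the same thing, a rough sketch of inspecting and relaxing the filter per interface (the interface names here are only examples, substitute your own):

sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth1.rp_filter   # show current values
sudo sysctl -w net.ipv4.conf.eth1.rp_filter=0                     # relax it on the receiving interface

On recent kernels the effective setting for an interface is the higher of the "all" and per-interface values, so net.ipv4.conf.all.rp_filter may need to be 0 as well. To persist either change, put the same keys in /etc/sysctl.conf and reload with sysctl -p.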
Solution 2:
TL/DR: also make sure your multicast doesn't come from a VLAN. tcpdump -e will help determine whether it does.
In all fairness, somebody ought to build a page with a checklist of things that can prevent multicast from reaching the userland. I've been struggling with that for a couple of days, and naturally nothing I could find on the web helped.
Not only could I see the packets in tcpdump, I could actually receive other multicast packets, from other producers, just on a different interface. The command I ended up using to test whether I could receive multicast was:
$ GRP=224.x.x.x # set me to the group
$ PORT=yyyy # set me to the receiving port
$ IFACE=mmmm # set me to the name or IP address of the interface
$ strace -f socat - UDP4-DATAGRAM:$GRP:$PORT,ip-add-membership=$GRP:$IFACE,bind=0.0.0.0:$PORT,multicast-loop=0
The reason for strace here is that I actually couldn't make socat print the packets to stdout, but in the strace output you can clearly see whether socat is receiving actual data from the bound socket (it'll be mute otherwise after a couple of initial select calls).
- rp_filter sysctl - doesn't apply, the systems are on the same IP network (I set them to 0 all the same; it seems 1 is a default setting now, at least for Ubuntu).
- firewalls/etc - the receiving system is firewall free (I don't think packets would show up in tcpdump if they were firewalled, but I guess it's possible if the firewall is funny)
- IP/multicast routing and multiple interfaces - I explicitly joined the group on the correct interface
- Wacky network hardware - this was my last resort, but changing some laptop to an Intel NUC didn't help. This is about where I started chewing my elbows and perpetrating posting this to SE.
- The problem in my case was the use of VLANs by the specialized hardware that was producing those multicast packets. To see if this is your issue, make sure to include the -e flag to tcpdump and check for VLAN tags (see the capture sketch right after this list). An interface will have to be configured into the correct VLAN before userland can get those packets. The giveaway for me actually was that the multicast producers not only wouldn't ping, they wouldn't even get into the ARP cache, though I could clearly see ARP replies.
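For example, a capture along these lines (substitute your receiving interface and group) prints the link-level header of each packet; tagged traffic will carry something like "vlan 100, p 0, ethertype IPv4" in that header, while untagged traffic shows no vlan field at all:

$ sudo tcpdump -e -n -i eth0 host 224.x.x.x   # -e prints the ethernet header, including any 802.1Q tag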
To get it running with a VLAN, this link might be helpful for configuring multicast routing. (Sadly I'm new to this, so reputation does not allow me to add an answer; hence this edit.)
Here is what I did (use sudo if needed):
ip link add link eth0 name eth0_100 type vlan id 100
ip addr add 192.168.100.2/24 brd 192.168.100.255 dev eth0_100
ip link set dev eth0_100 up
ip maddr add 01:00:5e:01:01:01 dev eth0_100
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0_100
This way an additional interface is created for the VLAN traffic with VLAN id 100. The VLAN IP might be unnecessary. Then a multicast address is configured for the new interface (01:00:5e:01:01:01 is the link-layer address for 239.1.1.1) and all incoming multicast traffic is bound to eth0_100. I also did all the possible steps in the answers above (checking iptables, rp_filter, etc.).
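A rough way to verify the new interface actually picked up the VLAN id and the group membership (same names as above):

ip -d link show eth0_100     # the "vlan ... id 100" detail line confirms the tag
ip maddr show dev eth0_100   # should list 01:00:5e:01:01:01 among the link-layer groups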
Solution 3:
You might want to try and look at these settings:
proc
echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
sysctl.conf
sed -i -e 's|^net.ipv4.icmp_echo_ignore_broadcasts =.*|net.ipv4.icmp_echo_ignore_broadcasts = 0|g' /etc/sysctl.conf
These have been used to enable multicasting in RHEL.
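The echo into /proc takes effect immediately; the sysctl.conf edit only matters at the next boot unless it is reloaded. Equivalent ways to apply it right away (a sketch):

sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0   # same effect as the echo above
sysctl -p /etc/sysctl.conf                         # reload the persisted settings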
You might want to make sure that your firewall is allowing the multicast traffic; again, with RHEL I've enabled the following:
# allow anything in on multicast addresses
-A INPUT -s 224.0.0.0/4 -j ACCEPT
-A INPUT -p igmp -d 224.0.0.0/4 -j ACCEPT
# needed for multicast ping responses
-A INPUT -p icmp --icmp-type 0 -j ACCEPT
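If in doubt whether those rules actually match the incoming traffic, the per-rule packet counters are one way to check (a sketch; chain names as above):

iptables -L INPUT -v -n   # -v adds per-rule packet/byte counters, -n skips DNS lookups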