ARP broadcast flooding the network and high CPU usage
Hoping someone here might have some insight into the issue we are facing. We currently have Cisco TAC looking at the case, but they are struggling to find the root cause.
Although the title mentions ARP broadcasts and high CPU usage, we are unsure at this stage whether the two are related.
The original issue was posted on the INE Online Community.
We have stripped the network down to a single-link, no-redundancy setup; think of it as a star topology.
Facts:
- We use 3750X switches, four in one stack, running 15.0(1)SE3. Cisco TAC confirms there are no known high-CPU or ARP bugs in this particular version.
- No hubs or unmanaged switches are connected.
- We have reloaded the core stack.
- We don't have a default route ("ip route 0.0.0.0 0.0.0.0 f1/0"); we use OSPF for routing.
- We see a large number of broadcast packets from VLAN 1, which is used for desktop devices. We use 192.168.0.0/20.
- Cisco TAC said they don't see anything wrong with using a /20, other than that we'd have a large broadcast domain; it should still function.
- WiFi, management, printers etc. are all on different VLANs.
- Spanning tree has been verified by Cisco TAC and CCNP/CCIE-qualified individuals. We shut down all redundant links.
- The configuration on the core has been verified by Cisco TAC.
- We have the default ARP timeout on the majority of the switches.
- We do not implement Q-in-Q tunneling.
- No new switches have been added (at least none we know of).
- We cannot use dynamic ARP inspection on the edge switches because they are 2950s.
- We used "show interfaces | inc line|broadcast" to figure out where the large number of broadcasts was coming from. Both Cisco TAC and two other engineers (CCNP and CCIE) confirmed this is normal behaviour given what is happening on the network (i.e. the large number of MAC flaps causing the increased broadcasts). We verified that STP was functioning correctly on the edge switches.
Symptoms on the network and switches:
- A large number of MAC flaps.
- High CPU usage from the ARP Input process.
- A huge, rapidly increasing number of ARP packets.
- Wireshark shows that hundreds of computers are flooding the network with ARP broadcasts.
- As a test, we moved approximately 80 desktop machines to a different VLAN; however, it made no visible difference to the high CPU or ARP Input usage.
- We have run various AV/malware/spyware scans, but no infections are visible on the network.
- "sh mac address-table count" shows approximately 750 different MAC addresses on VLAN 1, as expected.
#sh processes cpu sorted | exc 0.00%
CPU utilization for five seconds: 99%/12%; one minute: 99%; five minutes: 99%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
12 111438973 18587995 5995 44.47% 43.88% 43.96% 0 ARP Input
174 59541847 5198737 11453 22.39% 23.47% 23.62% 0 Hulc LED Process
221 7253246 6147816 1179 4.95% 4.25% 4.10% 0 IP Input
86 5459437 1100349 4961 1.59% 1.47% 1.54% 0 RedEarth Tx Mana
85 3448684 1453278 2373 1.27% 1.04% 1.07% 0 RedEarth I2C dri
- We ran "show mac address-table" on different switches and on the core itself (my desktop, for example, is plugged directly into the core), and we can see several different MAC addresses registered to a single interface, even though that interface has only one computer attached to it:
Vlan Mac Address Type Ports
---- ----------- -------- -----
1 001c.c06c.d620 DYNAMIC Gi1/1/3
1 001c.c06c.d694 DYNAMIC Gi1/1/3
1 001c.c06c.d6ac DYNAMIC Gi1/1/3
1 001c.c06c.d6e3 DYNAMIC Gi1/1/3
1 001c.c06c.d78c DYNAMIC Gi1/1/3
1 001c.c06c.d7fc DYNAMIC Gi1/1/3
- show platform tcam utilization
CAM Utilization for ASIC# 0 Max Used
Masks/Values Masks/values
Unicast mac addresses: 6364/6364 1165/1165
IPv4 IGMP groups + multicast routes: 1120/1120 1/1
IPv4 unicast directly-connected routes: 6144/6144 524/524
IPv4 unicast indirectly-connected routes: 2048/2048 77/77
IPv4 policy based routing aces: 452/452 12/12
IPv4 qos aces: 512/512 21/21
IPv4 security aces: 964/964 45/45
We are now at the stage where we will need a huge amount of downtime to isolate each area in turn, unless anyone has other ideas to identify the source or root cause of this bizarre issue.
Update
Thank you @MikePennington and @RickyBeam for the detailed response. I will try and answer what I can.
- As mentioned, 192.168.0.0/20 is an inherited mess. We do intend to split this up in the future, but unfortunately this issue occurred before we could do so. I personally agree with the majority that the broadcast domain is far too big.
- Using arpwatch is definitely something we can try, but I suspect that because several access ports are registering MAC addresses that don't belong to them, arpwatch's conclusions may not be useful.
- I completely agree that we can't be 100% sure we have found all redundant links and unknown switches on the network, but to the best of our findings this is the case, until we find further evidence.
- Port security has been looked into, but unfortunately management has decided not to use it for various reasons. The main one is that we constantly move computers around (college environment).
- We use spanning-tree portfast in conjunction with spanning-tree bpduguard by default on all access ports (desktop machines).
- We do not currently use switchport nonegotiate on access ports, but we are not seeing any VLAN-hopping attacks bouncing across multiple VLANs.
- Will give mac-address-table notification a go, and see if we can find any patterns.
"Since you're getting a large number of MAC flaps between switchports, it's hard to find where the offenders are (suppose you find two or three mac addresses that send lots of arps, but the source mac addresses keep flapping between ports)."
- We started on this: we picked one MAC flap and traced it from the core switch through distribution to the access switch, but once again we found the access port interface registering multiple MAC addresses, hence the MAC flaps; so back to square one.
- Storm-control is something we did consider, but we fear some of the legitimate packets would be dropped, causing further issues.
- We will triple-check the VM host configuration.
- @ytti: the unexplained MAC addresses appear behind many access ports rather than an individual one. We haven't found any loops on these interfaces. The MAC addresses also exist on other interfaces, which would explain the large number of MAC flaps.
- @RickyBeam: I agree, the question of why hosts are sending so many ARP requests is one of the puzzling issues. A rogue wireless bridge is an interesting idea I hadn't given thought to; as far as we are aware, wireless is on a different VLAN, but a rogue bridge could obviously end up on VLAN 1.
- @RickyBeam: I don't really wish to unplug everything, as this would cause a massive amount of downtime; however, that may be where this is heading. We do have Linux servers, but no more than three.
- @RickyBeam: can you explain DHCP server "in-use" probing?
We (Cisco TAC, CCIEs, CCNPs) all agree that this is not a switch configuration issue; a host/device is causing the problem.
Solved.
The issue was with SCCM 2012 SP1 and a service called ConfigMgr Wake-Up Proxy. The 'feature' does not exist in SCCM 2012 RTM.
Within four hours of turning this off in the policy, we saw a steady drop in CPU usage. By the time the four hours were up, ARP Input usage was merely 1-2%!
In summary, this service performs MAC address spoofing! I cannot believe how much havoc it caused.
For anyone who is interested, below is the full text from Microsoft TechNet, as I feel it's important to understand how this relates to the issue posted.
Configuration Manager supports two wake on local area network (LAN) technologies to wake up computers in sleep mode when you want to install required software, such as software updates and applications: traditional wake-up packets and AMT power-on commands.
Beginning with Configuration Manager SP1, you can supplement the traditional wake-up packet method by using the wake-up proxy client settings. Wake-up proxy uses a peer-to-peer protocol and elected computers to check whether other computers on the subnet are awake, and to wake them if necessary. When the site is configured for Wake On LAN and clients are configured for wake-up proxy, the process works as follows:
Computers that have the Configuration Manager SP1 client installed and that are not asleep on the subnet check whether other computers on the subnet are awake. They do this by sending each other a TCP/IP ping command every 5 seconds.
If there is no response from other computers, they are assumed to be asleep. The computers that are awake become manager computers for the subnet.
Because it is possible that a computer might not respond because of a reason other than it is asleep (for example, it is turned off, removed from the network, or the proxy wake-up client setting is no longer applied), the computers are sent a wake-up packet every day at 2 P.M. local time. Computers that do not respond will no longer be assumed to be asleep and will not be woken up by wake-up proxy.
To support wake-up proxy, at least three computers must be awake for each subnet. To achieve this, three computers are non-deterministically chosen to be guardian computers for the subnet. This means that they stay awake, despite any configured power policy to sleep or hibernate after a period of inactivity. Guardian computers honor shutdown or restart commands, for example, as a result of maintenance tasks. If this happens, the remaining guardian computers wake up another computer on the subnet so that the subnet continues to have three guardian computers.
Manager computers ask the network switch to redirect network traffic for the sleeping computers to themselves.
The redirection is achieved by the manager computer broadcasting an Ethernet frame that uses the sleeping computer’s MAC address as the source address. This makes the network switch behave as if the sleeping computer has moved to the same port that the manager computer is on. The manager computer also sends ARP packets for the sleeping computers to keep the entry fresh in the ARP cache. The manager computer will also respond to ARP requests on behalf of the sleeping computer and reply with the MAC address of the sleeping computer.
During this process, the IP-to-MAC mapping for the sleeping computer remains the same. Wake-up proxy works by informing the network switch that a different network adapter is using the port that was registered by another network adapter. However, this behavior is known as a MAC flap and is unusual for standard network operation. Some network monitoring tools look for this behavior and can assume that something is wrong. Consequently, these monitoring tools can generate alerts or shut down ports when you use wake-up proxy. Do not use wake-up proxy if your network monitoring tools and services do not allow MAC flaps.
When a manager computer sees a new TCP connection request for a sleeping computer and the request is to a port that the sleeping computer was listening on before it went to sleep, the manager computer sends a wake-up packet to the sleeping computer, and then stops redirecting traffic for this computer.
The sleeping computer receives the wake-up packet and wakes up. The sending computer automatically retries the connection and this time, the computer is awake and can respond.
Ref: http://technet.microsoft.com/en-us/library/dd8eb74e-3490-446e-b328-e67f3e85c779#BKMK_PlanToWakeClients
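The MAC-flap mechanism described in the quoted text can be sketched with a toy CAM-table model (hypothetical, for illustration only; real switches learn source MACs in hardware): when the manager computer sends a frame spoofing the sleeping PC's source MAC, the switch relearns that MAC on the manager's port, which is exactly what the 3750 logs as a flap.

```python
# Toy model of a switch CAM (MAC address) table, illustrating how
# SCCM wake-up proxy's spoofed source frames cause MAC flaps.
# Class and port names here are hypothetical, for illustration only.

class Switch:
    def __init__(self):
        self.cam = {}      # mac -> port currently learned
        self.flaps = []    # (mac, old_port, new_port) move events

    def learn(self, src_mac, port):
        """Learn a frame's source MAC on the ingress port."""
        old = self.cam.get(src_mac)
        if old is not None and old != port:
            # The MAC moved to a different port: a "MAC flap".
            self.flaps.append((src_mac, old, port))
        self.cam[src_mac] = port

sw = Switch()
sw.learn("001c.c06c.d620", "Gi1/0/10")  # sleeping PC on its real port
# Manager PC on Gi1/1/3 spoofs the sleeping PC's MAC as the source
# address, so the switch redirects the sleeping PC's traffic to it.
sw.learn("001c.c06c.d620", "Gi1/1/3")
print(sw.flaps)  # [('001c.c06c.d620', 'Gi1/0/10', 'Gi1/1/3')]
```

With many manager computers proxying for many sleeping PCs, this relearning repeats constantly, which matches the multiple MACs per access port we observed.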
Thank you for everyone who has posted here and assisted with the troubleshooting process, very much appreciated.
ARP / Broadcast storm
- We see large broadcast packets from VLAN 1, VLAN 1 used for desktop devices. We use 192.168.0.0/20 ...
- Wiresharks shows that 100s of computers are flooding the network with ARP Broadcast ...
Your ARP Input process is high, which means the switch is spending a lot of time processing ARPs. One very common cause of ARP flooding is a loop between your switches. If you have a loop, then you also can get the mac flaps you mentioned above. Other possible causes of ARP floods are:
- IP address misconfigurations
- A layer2 attack, such as arp spoofing
First eliminate the possibility of the misconfigurations or layer2 attacks mentioned above. The easiest way to do this is with arpwatch on a linux machine (even if you have to use a livecd on a laptop). If you have a misconfiguration or a layer2 attack, arpwatch gives you messages like this in syslog, which list the mac addresses that are fighting over the same IP address...
Oct 20 10:31:13 tsunami arpwatch: flip flop 192.0.2.53 00:de:ad:85:85:ca (00:de:ad:3:d8:8e)
When you see "flip flops", you have to track down the source of the mac addresses and figure out why they're fighting over the same IP.
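The core of what arpwatch reports can be sketched in a few lines (illustrative only; real arpwatch sniffs live ARP traffic via libpcap, and the function and variable names below are my own):

```python
# Minimal sketch of arpwatch-style "flip flop" detection: remember the
# last MAC seen claiming each IP, and report when a different MAC shows up.

def detect_flip_flops(arp_events):
    """arp_events: iterable of (ip, mac) pairs from observed ARP replies."""
    last_mac = {}
    alerts = []
    for ip, mac in arp_events:
        prev = last_mac.get(ip)
        if prev is not None and prev != mac:
            # Same IP, different MAC than last time: a "flip flop".
            alerts.append(f"flip flop {ip} {mac} ({prev})")
        last_mac[ip] = mac
    return alerts

events = [
    ("192.0.2.53", "00:de:ad:03:d8:8e"),
    ("192.0.2.53", "00:de:ad:85:85:ca"),  # second MAC claims the same IP
]
print(detect_flip_flops(events))
```

Note that this only tells you *which* MACs are fighting over an IP, not *where* they are; with rampant MAC flaps the port lookup is the hard part, which is why the port-security step below matters.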
- Large number of MAC flaps
- Spanning tree has been verified by Cisco TAC & CCNP/CCIE qualified individuals. We shutdown all redundant links.
Speaking as someone who has been through this more times than I would like to recall, don't assume you found all redundant links... just make your switchports behave at all times.
Since you're getting a large number of mac flaps between switchports, it's hard to find where the offenders are (suppose you find two or three mac addresses that send lots of arps, but the source mac addresses keep flapping between ports). If you aren't enforcing a hard limit on mac-addresses per edge port, it is very difficult to track these problems down without manually unplugging cables (which is what you want to avoid). Switch loops cause an unexpected path in the network, and you could wind up with hundreds of macs learned intermittently from what should normally be a desktop switchport.
The easiest way to slow down the mac-moves is with port-security. On every access switchport in Vlan 1 that is connected to a single PC (without a downstream switch), configure the following interface-level commands on your cisco switches...
switchport mode access
switchport access vlan 1
!! switchport nonegotiate disables some Vlan-hopping attacks via Vlan1 -> another Vlan
switchport nonegotiate
!! If no IP Phones are connected to your switches, then you could lower this
!! Beware of people with VMWare / hubs under their desk, because
!! "maximum 3" could shutdown their ports if they have more than 3 macs
switchport port-security maximum 3
switchport port-security violation shutdown
switchport port-security aging time 5
switchport port-security aging type inactivity
switchport port-security
spanning-tree portfast
!! Ensure you don't have hidden STP loops because someone secretly cross-connected a
!! couple of desktop ports
spanning-tree bpduguard enable
In most mac/ARP flooding cases, applying this configuration to all your edge switch ports (especially any with portfast) will get you back to a sane state, because the config will shutdown any port that exceeds three mac-addresses, and disable a secretly looped portfast port. Three macs per port is a number that works well in my desktop environment, but you could raise it to 10 and probably be fine. After you have done this, any layer 2 loops are broken, rapid mac flaps will cease, and it makes diagnosis much easier.
Another couple of global commands that are useful for tracking down ports associated with a broadcast storm (mac-move) and flooding (threshold)...
mac address-table notification mac-move
mac address-table notification threshold limit 90 interval 900
After you finish, optionally do a clear mac address-table to accelerate healing from a potentially full CAM table.
- Ran show mac address-table on different switches and core itself (on the core, for example, plugged by desktop directly, my desktop ), and we can see the several different MAC hardware address being registered to the interface, even though that interface has only one computer attached to this...
This whole answer assumes your 3750 doesn't have a bug causing the problem (but you did say that wireshark indicated PCs that are flooding). What you're showing us is obviously wrong when there is only one computer attached to Gi1/1/3, unless that PC has something like VMWare on it.
Misc thoughts
Based on a chat conversation we had, I probably don't have to mention the obvious, but I will for sake of future visitors...
- Putting any users in Vlan1 is normally a bad idea (I understand you may have inherited a mess)
- Regardless of what TAC tells you, 192.168.0.0/20 is too large to manage in a single switched domain without risks of layer2 attacks. The larger your subnet is, the greater exposure you have to layer2 attacks like this, because ARP is an unauthenticated protocol and a router must at least read a valid ARP from that subnet.
- Storm-control on layer2 ports is usually a good idea as well; however, enabling storm-control in a situation like this will take out good traffic along with the bad. After the network has healed, apply storm-control policies on your edge ports and uplinks.
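For example, once the network is stable, an edge-port policy might look something like the following sketch (the interface range and thresholds are illustrative only; derive real thresholds from your own traffic baseline):

```
interface range GigabitEthernet1/0/1 - 48
 !! Suppress broadcast/multicast above a small fraction of line rate;
 !! these percentages are example values, not recommendations
 storm-control broadcast level 1.00
 storm-control multicast level 2.00
 !! Send an SNMP trap rather than err-disabling the port
 storm-control action trap
```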
The real question is why are hosts sending so many ARPs in the first place. Until this is answered, the switch(es) will continue to have a hard time dealing with the arp storm. Netmask mismatch? Low host arp timers? One (or more) hosts having an "interface" route? A rogue wireless bridge somewhere? "gratuitous arp" gone insane? DHCP server "in-use" probing? It doesn't sound like an issue with the switches, or layer 2; you have hosts doing bad things.
My debugging process would be to unplug everything and watch closely as things are reattached, one port at a time. (I know it's miles from ideal, but at some point you have to cut your losses and attempt to physically isolate any possible source(s).) Then I'd work towards understanding why select ports are generating so many ARPs.
(Would a lot of those hosts happen to be linux systems? Linux has had a very d***med stupid ARP cache management system. The fact that it will "re-verify" an entry in mere minutes, is broken in my book. It tends to be less of an issue in small networks, but a /20 is not a small network.)