What are the obstacles to providing reliable Internet access and Wi-Fi at large tech conferences? [closed]
Every tech conference I've ever been to, and I've been to a lot, has had absolutely abysmal Wi-Fi and Internet access.
Sometimes it's the DHCP server running out of addresses. Sometimes the backhaul is clearly inadequate. Sometimes there's one router for a ballroom with 3000 people. But it's always SOMETHING. It never works.
What are some of the best practices for conference organizers? What questions should they ask the conference venue or ISP to know, in advance, if the Wi-Fi is going to work? What are the most common causes of crappy Wi-Fi at conferences? Are they avoidable, or is Wi-Fi simply not an adequate technology for large conferences?
Solution 1:
(For those who are interested, I have finally written up my 2009 report on the wireless at PyCon.)
I have done the wireless for the PyCon conference most of the years since we moved from George Washington University into hotels, so I have some ideas about this, which have been proven in battle -- though only with around a thousand users.
One thing I hear a lot of people talking about in this discussion is "open air coverage in a ballroom". One theory I operate under is that the ballroom is NOT an open air environment. Human bodies soak up 802.11b/g and 802.11a quite nicely.
Here are some of my thoughts, but more details are available in my conference reports if you search Google for "pycon wireless" -- the tummy.com links are what you want.
I use just the non-overlapping channels and spread the APs out. For 802.11b/g, I run the radios at the lowest power setting. For 802.11a, I run them at the highest power setting because we have so many channels.
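To make that concrete, here is a minimal sketch of that sort of channel plan. The AP names, the AP count, and the particular 5 GHz channel list are made-up assumptions for illustration, not our actual layout.

```python
# Hypothetical illustration of the channel plan described above: spread APs
# across the non-overlapping channels, low power on 2.4 GHz, high power on
# 5 GHz where many more channels are available.
from itertools import cycle

NON_OVERLAPPING_24GHZ = [1, 6, 11]                     # only 3 usable in 2.4 GHz (US)
CHANNELS_5GHZ = [36, 40, 44, 48, 149, 153, 157, 161]   # a subset; exact list varies by regulatory domain

def plan(ap_names, channels, tx_power):
    """Round-robin APs onto non-overlapping channels at a fixed power setting."""
    chan = cycle(channels)
    return [(ap, next(chan), tx_power) for ap in ap_names]

aps = [f"ap-{i:02d}" for i in range(1, 13)]            # 12 APs; names are invented
for ap, ch, power in plan(aps, NON_OVERLAPPING_24GHZ, "low") + plan(aps, CHANNELS_5GHZ, "high"):
    print(f"{ap}: channel {ch}, tx power {power}")
```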
I try to keep the APs fairly low, so the bodies can help reduce interference between APs on the same channel.
I set all the APs to the same ESSID so that people can "roam" to different APs as loads (number of associated clients) go up or coverage goes down (more people coming in, etc).
Lots and lots of APs. The first year we had the hotel do the networking, they started with only a couple of APs and eventually brought in 6, despite our having told them that we would be heavily using their wireless. But we also had other problems, like the DHCP server giving out leases with a gateway in a different network than the address. (Calls to support resulted in "I'll just reboot everything.")
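As an aside, that gateway problem is easy to catch programmatically. Here is a tiny sanity check, with invented addresses, of the kind of broken lease we were being handed:

```python
# Sanity check for the DHCP misconfiguration described above: the offered
# gateway should sit inside the leased subnet. Addresses here are invented.
import ipaddress

def lease_is_sane(address: str, netmask: str, gateway: str) -> bool:
    network = ipaddress.ip_network(f"{address}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

print(lease_is_sane("10.1.2.57", "255.255.255.0", "10.1.2.1"))   # True  -> fine
print(lease_is_sane("10.1.2.57", "255.255.255.0", "10.9.0.1"))   # False -> the problem we saw
```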
We are running relatively inexpensive D-Link dual-radio APs, costing around $100 or $200 each. We just haven't really had the budget to buy 20 to 40 of the $600+ high end APs. These D-Link APs have worked surprisingly well.
In 2009 we had a hell of a problem with netbooks. Something about the radios in them just stinks for use at this sort of conference. I've heard reports of people putting Intel wireless cards in their netbooks and getting much better performance. At PyCon 2009, my netbook couldn't get a reliable connection after the conference started, but my ThinkPad had no problems. I heard similar reports from people with Macs and other "real" laptops; the cheapest hardware just was not working.
I have NOT done anything with directional antennas. I was expecting to need them, but so far it has worked out well enough.
Most hotels cannot provide enough bandwidth. Don't worry though, there are lots of terrestrial wireless providers that can deliver 100 Mbps. I'm not talking about the places that run 802.11g from some tower, but providers with real, serious radios and the backhaul to cope with it.
Over the last several years we haven't really had much in the way of wired ports, mostly because of the budget and volunteer effort required to cable all those locations. In 2010 we expect to have quite a few wired ports. I like the idea of wiring every seat, but I doubt we'll be able to cover even 10%, simply due to the effort required to install and maintain such a network. Getting people off the wireless is great.
Getting people off the 802.11b frequencies is good as well. Most people talking about this since Joel brought it up have been saying things like "3 non-overlapping channels", which is true for the 2.4 GHz spectrum. However, we have seen a HUGE move towards the 5.2 GHz spectrum. The first year I ran the network (2006?), around 25% of usage was on 5.2 GHz; in 2008 it was over 60%.
So, yes, running wireless with thousands of people requires some thought. But, giving it some thought seems to have resulted in a fairly high level of satisfaction.
Sean
Solution 2:
I think the major issue is that Wi-Fi is probably the wrong technology for the job, if you're really talking about 3,000 clients in a small area like a ballroom. For fewer clients spread over a large space, I think it's feasible.
Covering a ballroom with potentially thousands of clients is going to be a stretch for Wi-Fi, assuming that the clients are actually using the network. You've only got 3 non-overlapping channels (in the US), and I've never seen an access point (AP) effectively support more than about 50 clients. You're going to end up with a lot of access points sitting on the same channel and a lot of contention for the air. That's a lot of client devices to have in a small area.
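To put rough numbers on that (the 50-clients-per-AP figure is my rule of thumb, not a measurement), a back-of-the-envelope estimate looks like this:

```python
# Back-of-the-envelope numbers behind the paragraph above: how many APs a
# ballroom full of clients implies, and how many of those APs end up
# sharing each of the 3 non-overlapping 2.4 GHz channels.
def ap_estimate(clients: int, clients_per_ap: int = 50, channels: int = 3):
    aps_needed = -(-clients // clients_per_ap)      # ceiling division
    aps_per_channel = -(-aps_needed // channels)
    return aps_needed, aps_per_channel

for clients in (500, 1500, 3000):
    aps, per_channel = ap_estimate(clients)
    print(f"{clients} clients -> ~{aps} APs, ~{per_channel} APs contending per channel")
```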
If you could rig some kind of highly directional antennas and clamp radio power down so that each AP targets a small number of clients, you might make this better. For a temporary event like a conference, though, the obsessive care that such a site survey would require would, I'd imagine, be unreasonably expensive.
Assuming you're covering a lower client density than 3,000 clients in a single open-air space, you'd want to size each AP's coverage zone (by tweaking radio power / antennas) so that it contains no more than a reasonable fraction of the clients that AP can support, and you'll want to keep adjacent APs on non-overlapping channels. The more APs the better, and don't overload any single AP with too many clients. (Shrinking coverage zones by turning radio power down seems counter-intuitive to anybody who hasn't tried to scale Wi-Fi to handle a large number of clients in a small physical area.)
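For a feel of how transmit power relates to coverage, here's a toy calculation using the free-space path-loss model. The transmit powers and the -70 dBm target signal level are assumptions, and a room full of RF-absorbing bodies will shrink these radii a lot further, but the trend is the point:

```python
# Rough illustration of why clamping TX power shrinks an AP's coverage zone.
# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44.
import math

def fspl_radius_m(tx_dbm: float, rx_dbm: float, freq_mhz: float) -> float:
    budget_db = tx_dbm - rx_dbm                      # allowable path loss
    d_km = 10 ** ((budget_db - 20 * math.log10(freq_mhz) - 32.44) / 20)
    return d_km * 1000

for tx in (20, 10, 1):                               # dBm: full, reduced, minimum power (assumed)
    radius = fspl_radius_m(tx, -70, 2437)            # channel 6, -70 dBm target signal (assumed)
    print(f"TX {tx:>2} dBm -> free-space radius ~{radius:.0f} m")
```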
From a layer 2 broadcast perspective, it would make sense to broadcast multiple SSIDs and back-end them into different VLANs / IP subnets. That would depend on the number of client devices and the character of the traffic. Personally, I wouldn't put more than about 500 devices in a single layer 2 broadcast domain on a corporate LAN. I can only imagine that a conference Wi-Fi network would be worse.
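As a sketch of what that split might look like (the SSID names, VLAN IDs, and 10.20.0.0/16 addressing are all invented), something like this keeps each broadcast domain around the 500-device mark:

```python
# Carve a supernet into /23s so that no layer 2 broadcast domain holds more
# than ~500 clients. SSIDs, VLAN IDs, and addressing are invented.
import ipaddress

def vlan_plan(total_clients: int, max_per_domain: int = 500):
    domains = -(-total_clients // max_per_domain)              # ceiling division
    supernet = ipaddress.ip_network("10.20.0.0/16")
    subnets = list(supernet.subnets(new_prefix=23))[:domains]  # a /23 has ~510 usable hosts
    return [(f"conf-wifi-{i}", 100 + i, str(net)) for i, net in enumerate(subnets, 1)]

for ssid, vlan, subnet in vlan_plan(3000):
    print(f"SSID {ssid}: VLAN {vlan}, subnet {subnet}")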
DHCP should be a no-brainer, though redundancy is a concern. I'd probably use the ISC dhcpd and work out a failover arrangement to a second server. I think I'd be on the lookout for rogue DHCP servers, too. On wired Ethernet you could easily disable the ports that rogue DHCP servers show up on. For wireless Ethernet, it's a little more problematic. Anybody know if there are APs that support blocking mobile units based on MAC address? (That doesn't help if the rogue DHCP server spoofs its MAC once detected, but it's a start...)
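For the failover piece, a sketch of what an ISC dhcpd pairing could look like is below; every address, range, and timer value is a placeholder assumption, so check it against the dhcpd.conf documentation rather than treating it as a tested config.

```python
# Emit a sketch of an ISC dhcpd failover peer declaration plus a pool that
# uses it. All addresses, ranges, and timers are placeholders, not tested values.
def dhcpd_failover_conf(role: str, my_addr: str, peer_addr: str,
                        subnet: str, netmask: str, pool_start: str, pool_end: str) -> str:
    return f"""\
failover peer "conf-dhcp" {{
    {role};                      # "primary" on one server, "secondary" on the other
    address {my_addr};
    port 647;
    peer address {peer_addr};
    peer port 647;
    max-response-delay 30;
    max-unacked-updates 10;
    mclt 1800;
    {'split 128;' if role == 'primary' else ''}
    load balance max seconds 3;
}}

subnet {subnet} netmask {netmask} {{
    pool {{
        failover peer "conf-dhcp";
        range {pool_start} {pool_end};
    }}
}}
"""

print(dhcpd_failover_conf("primary", "10.20.0.2", "10.20.0.3",
                          "10.20.0.0", "255.255.254.0", "10.20.0.50", "10.20.1.250"))
```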
Obviously, the firewall / edge router should be able to handle the number of NAT table entries that such a number of clients might generate. A consumer toy NAT router isn't going to handle it. A redundant router protocol (HSRP, VRRP, etc) and multiple edge router devices are going to be a necessity to prevent a single point of failure from ruining the whole show.
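A quick sizing exercise shows why; the per-client connection counts here are guesses, not measurements, but even the low end is beyond the state tables of consumer gear, which are often capped at a few thousand entries:

```python
# Rough sizing for the NAT / connection state table mentioned above.
def nat_entries(clients: int, conns_per_client: int) -> int:
    return clients * conns_per_client

for clients in (1000, 3000):
    for conns in (20, 100):          # idle-ish vs. browser-heavy clients (assumed)
        print(f"{clients} clients x {conns} conns -> {nat_entries(clients, conns):,} NAT entries")
```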
As for bandwidth contention on the backhaul, you could throttle client bandwidth to the Internet. That should also limit the overall contention on the air, to some extent.
I'd throw something like Squid Cache in place as a transparent proxy for HTTP traffic. That's going to help with utilization of the backhaul. Your HTTP proxy cache shouldn't be a point of failure, so you'll need infrastructure to monitor the cache's health and, if it fails, route around it.
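Monitoring the cache's health can be very simple. A minimal sketch, assuming a hypothetical Squid instance at 10.20.0.10:3128; how you actually "route around it" depends on your firewall and is left out:

```python
# Minimal health check for a hypothetical Squid cache: fetch a known URL
# through the proxy and flag it as down if that fails.
import urllib.request

PROXY = "http://10.20.0.10:3128"     # hypothetical Squid instance
TEST_URL = "http://example.com/"

def proxy_healthy(timeout: float = 5.0) -> bool:
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": PROXY}))
    try:
        with opener.open(TEST_URL, timeout=timeout) as resp:
            return resp.getcode() == 200
    except OSError:
        return False

if not proxy_healthy():
    print("proxy down: disable the transparent redirect and pass traffic straight through")
```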
I don't have the energy to fire up a spreadsheet and look at the economics of a bunch of small Ethernet switches and patch cables strewn about, but the more I read, the more it sounds like wired Ethernet would be a great way to pull off decent connectivity. There would, no doubt, be major effort needed to run the Ethernet cables and power the switches, but it provides a much more manageable network infrastructure, more reliable bandwidth, and requires a lot less obsessive tweaking than wireless. You could get away with using low-end gear for the edge switches, too, since 100 Mbps service would be plenty for the purposes of accessing the Internet.
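If someone does want to run the numbers, a back-of-the-envelope version of that spreadsheet might look like the sketch below; every price and ratio in it is an assumption pulled out of the air, not a quote:

```python
# Toy cost model for wiring attendee seats with small edge switches.
# All prices and the 7-usable-ports-per-8-port-switch assumption are invented.
def wired_cost(seats: int, ports_per_switch: int = 7,
               switch_cost: float = 100.0, cable_cost: float = 5.0,
               uplink_cost_per_switch: float = 15.0) -> float:
    switches = -(-seats // ports_per_switch)          # ceiling division; one port kept for uplink
    return switches * (switch_cost + uplink_cost_per_switch) + seats * cable_cost

for seats in (100, 300, 1000):
    print(f"{seats} wired seats: ~${wired_cost(seats):,.0f} in switches and cabling")
```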
Cisco has a little 8-port switch that draws its power from PoE -- the Catalyst 2960PD-8TT-L. That'd be sweet for this application -- putting something like that on each table, drawing its power from a larger PoE-capable switch. I'm guessing that those are pretty expensive for this application, but I'm guessing there's a "downmarket" option that's not as pricey available from somebody. (Searching for switches powered by PoE seems to be fairly difficult with Google...)
Intel has a 2006-era paper re: providing Wi-Fi access at conferences. Looking at their numbers, they had 50 clients on a single AP at one point, and a peak total client load of under 100. Those seem like pretty small numbers compared to what you're talking about, and in 2006 not everybody was carrying around iPhones, etc.
Solution 3:
Michael Arrington, of TechCrunch, hired Mariette Systems for TechCrunch 50 and had stellar results. From the comments, it appears they had hundreds of Cisco switches providing RJ45 connections at every seat (picture), which probably got enough bandwidth off the air to make it work.
Giving 2,000 hard core Internet users simultaneous access from a single location is very, very hard. I’ve seen grown men cry when they tried and failed.
This year, though, WOW. There was more Internet at TechCrunch50 than you could shake a stick at. And for that, Mariette Systems gets that big wet kiss I promised.
The team: Ernie Mariette, Cliff Skolnick and Tim Pozar. They came in, brought bandwidth (100 Mbps line-of-sight microwave link from WiLine and 30 Mbps from Telekenex), hooked it into a BSD router and distributed it throughout the building via more than 100 Cisco switches and 28 wifi access points. There were hundreds of ethernet connections (and power strips) at attendee tables. Plus dedicated bandwidth to Ustream, the DemoPit area and the main stage. And, overall, lots of very happy attendees.
There were more than 1,200 simultaneous connections at peak points, and bursts of up to 88 Mbps inbound bandwidth usage. But no one was ever cut back. And I noticed multiple people in the audience watching the live Ustream feed on their laptops. Others were watching the US Open livestream. In other words, the audience was totally wasting bandwidth. And it was wonderful.
In fact, I was a little disappointed that the audience failed to make our Internet fail. They tried their best, and were found wanting.