How should an IT department choose a standard Linux distribution?
There is a lot of community feeling about which Linux distributions are appropriate for production server environments and which aren't. However, much of that feeling seems almost religious, and is seldom presented with supporting evidence.
Assuming that we were trying to select a Linux distribution to standardize on (because we have an interest in keeping our environments as homogeneous as possible), what criteria are important, and how do you make determinations about how well different distributions meet those criteria?
Solution 1:
I currently work in an environment that has used Linux for more than a decade. Everybody in the office uses different distros on their desktops as well as on the servers. As such, the choice of distribution tends to revolve around a number of things, in no particular order:
- History - Obviously systems like Red Hat and Debian have been around for a long time. As such, the adage "if it ain't broke, don't fix it" applies to them. Upgrading also becomes easier if the software is well supported on a distro.
- Familiarity - Similar to history, but we all have our favourites. I cut my teeth on Debian and migrated to Ubuntu (a hard decision at the time, because I tend to commit to a community). On the other hand, it's a pain to have to remember how to do things on a dozen different distros (not to mention the scratch-built ones).
- Support - I migrated to Ubuntu mainly because I appreciated what they were doing in offering paid support. That was a selling point whenever a client had concerns about running a system long-term. It's similar to Red Hat's approach (though RPM dependency hell was still a real problem at the time). We have a number of Red Hat servers for this reason as well.
- Dependencies - Some software is easier to use on some distros simply because the dependent packages are more easily obtainable or buildable. An example of this would be oVirt on Red Hat. Some software has no packages at all on certain distros; you could compile it, but why would you when the package is sitting right there on another distro?
- Granularity - Distros like Gentoo offer finer control over versioning and build-time switches. Other distros have "pinning" in various forms, but that's still not as controllable or reliable.
- Binding - While it's possible to compile from source on most distros, some distros are better at it than others. This matters if, say, your project patches existing libraries for extended functionality.
- Prettiness - Some distros are just better-looking. Every geek knows it's just fluff (and you could probably get away with doing it as a web app these days) but some clients are wowed by this stuff, and we all know it.
- Stability - Some distros stream "stable" versions of software as opposed to "testing", "experimental", etc. This can mean a lot if you know that the version you're building on will eventually be declared stable. You may develop against "experimental" knowing that by the time your project is finished, it will have reached "stable" and be safe to rely on.
- Package management - If you're developing something on a daily basis and it has to go out to thousands of machines in one hit, then you probably want something that makes it easy to build, maintain, and track packages across those systems (a minimal sketch of what that tracking can look like follows this list).
- Consistency - This is more an argument for sticking to a single distro. Fewer mistakes get made (and fewer security errors creep in) when people can focus on one distro rather than several.
- Predictable release schedule - If you want to be sure that your software stays supported, planned upgrades offer a certain type of stability.
- Security - Some distros have active security teams whose job it is to respond immediately to genuine security risks in any approved package.
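To make the package-management and consistency points a bit more concrete, here is a minimal sketch of a fleet audit. The host names, package list, and the assumption of passwordless SSH to RPM-based hosts are all mine, not part of the answer; a real shop would more likely lean on Satellite, Landscape, or a configuration-management tool.

```python
#!/usr/bin/env python3
"""Fleet package-version audit (minimal sketch).

Assumptions (mine, not the answer's): passwordless SSH to each host,
an RPM-based distro everywhere, and a short watch list of packages.
"""
import subprocess
from collections import defaultdict

HOSTS = ["web01", "web02", "db01"]        # hypothetical host names
PACKAGES = ["openssl", "glibc", "httpd"]  # hypothetical watch list

# One remote command per host; the query format is quoted for the remote shell.
QUERY = r"rpm -q --qf '%{NAME} %{VERSION}-%{RELEASE}\n' " + " ".join(PACKAGES)

def installed_versions(host):
    """Return {package: version} for the tracked packages on one host."""
    result = subprocess.run(["ssh", host, QUERY],
                            capture_output=True, text=True, check=False)
    versions = {}
    for line in result.stdout.splitlines():
        parts = line.split()
        if len(parts) == 2:               # skips "package X is not installed" lines
            versions[parts[0]] = parts[1]
    return versions

def main():
    fleet = defaultdict(set)              # package -> versions seen across hosts
    for host in HOSTS:
        for pkg, ver in installed_versions(host).items():
            fleet[pkg].add(ver)
    for pkg, versions in sorted(fleet.items()):
        status = "OK" if len(versions) == 1 else "DRIFT"
        print(f"{status:5} {pkg}: {', '.join(sorted(versions))}")

if __name__ == "__main__":
    main()
```

The report is deliberately per package rather than per host, because that is usually the question you actually want answered: is the fleet on one version of glibc, or three?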
Those are just a few things off the top of my head about why each system was chosen. I don't see any single guiding light that settles the preference for one distro over another. Diversity and choice can be great and give you some really good options for getting a project started quickly, but they're also the noose that can hang you. Make sure you think ahead about what you're going to need. Plan what the system's needs are, as well as when the system is going to be upgraded or retired. Don't assume you'll always be the one maintaining it.
Solution 2:
I'll share my experiences working as a technologist in a few different fields...
(Caution: this is a story about Red Hat and how I grew up professionally with it)
I started working with Linux professionally in 2000-2002. This was during the wide adoption of Red Hat and the Red Hat Professional editions (6.x, 7.x, 8.0). These were available for free download as well as in boxed sets that could easily be found in computer retail stores.
For me, this had the benefit of engaging hobbyist and home users with the same product that was beginning to emerge in the enterprise. My work at this time was to move customer server systems from commercial Unices (HP-UX, AIX and SCO) to the Red Hat platform.
The cost savings were substantial! Replacing $100k+ HP9000 PA-RISC servers with $40k Compaq ProLiant Intel servers was an absolute win on cost and performance.
So, why Red Hat?
Red Hat was the first to this market, gaining critical business, vendor and hardware support. Seeing large application vendors use Red Hat as a target platform sealed the deal. Hobbyist users like me were able to transfer the skills honed at home to our work environments with ease. The community was growing. Slashdot, Freshmeat and LAMP stacks ruled! It was a good time for Linux.
By this point, I was responsible for the development and evaluation of Linux distributions as a platform for a proprietary ERP software solution. I stuck with Red Hat. Every so often, I'd try another distro (Mandrake, SuSE, Debian, Gentoo), but would find issues with packaging, hardware support (servers or peripherals), the (size of the) community or some other deal-breaker.
An example: I was using Compaq/HP ProLiant hardware outfitted with Digi Serial expansion PCI-X cards and Esker VSIfax production fax software. The latter two only had driver support for Red Hat operating systems. In some cases, software was only delivered in binary or RPM form, precluding easy use on other Linux variants.
Momentum matters in the Information Technology World
Nobody wants to be the one who recommends the losing solution or the project that eventually gets orphaned, so you stick with safe choices. I was managing a technology stack that needed to work reliably and have several layers of support. Choosing a different distribution at that point would have just. been. irresponsible.
The Red Hat honeymoon ended for me in 2003 with the discontinuation of the professional editions of the software. Red Hat Enterprise Linux was the replacement and came with quite a bit of baggage... Cost (expensive subscription-based model), accessibility (shrinking the user base and community) and general confusion about the future...
I began to look for alternatives, reevaluating Gentoo, Debian and SuSE, but I could not get the right support for all of the components of our technology stack. I was forced to stick with the Red Hat ecosystem... Due to the wild cost shift associated with Red Hat Enterprise Linux, I ended up running a highly-modified Red Hat 8.0 for years past its end-of-life. It wasn't until the RHEL clones matured (Whitebox Linux, and later, CentOS) that I prepared a real move away from my standard.
The major advantage of Red Hat derivatives was and is binary-compatibility with the paid RHEL versions. It's even possible to perform in-place conversions between RHEL and CentOS, and vice-versa. I continued to work with RHEL-like systems until I made the next career move...
I later found myself in the high-frequency financial trading industry, where I was responsible for R&D and Linux engineering for critical automated trading systems. The emphasis in this world was speed, achieved through careful testing and tuning. Again, hardware support was key. I'd have specific network cards, specialized hardware, server hardware or application libraries that were only certified for RHEL or RHEL-like systems. Even in cases where things could be compiled for other Linux variants, the community factor arose. When I got to the point where I needed to research a problem, it was often an issue that could be traced to notes or comments in Red Hat Bugzilla reports, or sometimes I'd simply submit a patch or a request for the next release.
As I started to delve into low-latency networking and kernel tuning, I began to dissect the stock RHEL kernels and the RHEL MRG Realtime kernels. I noticed how much work went into the releases... 200+ patches against a vanilla kernel.org kernel. Read the comments and commit notes. You may see small things like sysctl parameters being exposed or more sane defaults applied. Red Hat pays people to patch, test and fix these issues. I didn't see the same commitment from other Linux distributions... Add to that the fact that the enterprise platform is guaranteed to have real security, bugfix and backport support for years.
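As an aside, here is a tiny sketch of what mechanically auditing those knobs can look like. The sysctl names and target values are purely illustrative assumptions of mine, not anything taken from RHEL or the MRG kernels; the point is only that tuned defaults are something you can verify rather than take on faith.

```python
#!/usr/bin/env python3
"""Kernel-tuning sanity check (minimal sketch).

The parameter names and target values below are illustrative only,
my own assumptions rather than RHEL or MRG defaults. Substitute
whatever your own testing actually settles on.
"""
from pathlib import Path

# Hypothetical low-latency baseline: sysctl key -> expected value after tuning.
BASELINE = {
    "net.core.rmem_max": "16777216",
    "net.core.wmem_max": "16777216",
    "vm.swappiness": "10",
}

def current_value(key):
    """Read a sysctl value straight from /proc/sys (dots become slashes)."""
    path = Path("/proc/sys") / key.replace(".", "/")
    try:
        return path.read_text().strip()
    except OSError:
        return None  # parameter doesn't exist on this kernel

def main():
    for key, wanted in BASELINE.items():
        actual = current_value(key)
        if actual is None:
            print(f"MISSING  {key}")
        elif actual != wanted:
            print(f"MISMATCH {key}: have {actual}, want {wanted}")
        else:
            print(f"OK       {key}")

if __name__ == "__main__":
    main()
```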
So I eventually moved to another financial firm that was nearly all-Gentoo on the server and desktop... It was a disaster for me. Coming from the Red Hat and CentOS world, I encountered numerous stability and management problems with the Gentoo setup. Version control was the biggest issue, but dwindling community support and lack of real testing were also concerns. I began to introduce RHEL into the environment because some of our third-party software required it...
But there was a problem... My developers were used to Gentoo and to having relatively easy upgrade paths for core libraries and application versions. They could not adjust to the fixed major versions that Red Hat Enterprise Linux standardizes on. The development and release process was plagued with questions about why GLIBC 2.7 couldn't be grafted onto RHEL 5.x, or why a certain compiler or library version was not available. When told that upgrades between major versions of RHEL/CentOS essentially required full rebuilds, they lost a lot of confidence in the solution.
At this point, I realized that Red Hat was moving far too slowly for developers who wanted to be on the bleeding/leading edge. RHEL 6.x was a much-needed and welcome upgrade, but this theme became even more evident once I started interviewing with startups and firms that subscribed to DevOps principles.
Today...
An increasing number of developers and Linux users are coming from non-Red Hat, non-SuSE, non-enterprise Linux environments.
- They're using Ubuntu or Debian...
- They didn't have to deal with old-school hardware or big vendor support.
- They're writing their own applications from the ground-up (self-supported).
- Virtualization and cloud-computing abstracts the hardware layer, so worries about funky RAID controller drivers, PCI-X peripherals or binary-distributed management agents aren't even on the radar.
- These users want the tools and userland that they're accustomed to.
So there's a conflict... These users don't understand why they'd be restricted on application or library versions. Old-school administrators are still adjusting to the new paradigm. Arguments that seem to be rooted in religion are really just functions of how people developed their respective skillsets.
I saw a job ad today for a very senior DevOps Linux engineer position that read:
Must be proficient-to-expert in Debian-based Linux distributions (Ubuntu and variants okay. Red Hat passable, but not preferred)
So I guess it works both ways... I've walked away from job opportunities because the 800 CentOS servers I'd be managing were slated to be converted to Ubuntu. Sure, Linux is Linux... but I didn't feel that I'd be as effective as I could be... I've fumbled with Debian installations and wished that an RPM-based distro were in use. I've had heated arguments about the merits of various platforms (usually placing Gentoo at the bottom of the list).
So what's right for YOUR environment? It depends. I've been in firms where the systems engineers drive decisions, as well as organizations where the developers are king. I think the best arrangement is when the developers and the people supporting the systems agree on a platform. Beyond that, think about long-term support, usability, community, and what accommodates your application stack in the most appropriate manner.
A talented developer should be able to work in a RHEL-like or Debian-like environment. And well, development platforms should mirror the production environment. You go from there...