Do you continue to use your end-of-life server/network equipment?

So you spend lots of money on nice servers, storage arrays, or network equipment, and it works wonderfully for years. But after 3-6 years the vendor no longer offers maintenance for the device, even though it is still working.

  • Under what conditions would you continue using the equipment?
  • What factors do you consider when trying to determine the risks associated with continuing to use the equipment?
  • If you believe that the risk is too great, how do you convince management to loosen up on the purse strings in a difficult economy?

Solution 1:

  • Under what conditions would you continue using the equipment?

I continue using legacy equipment as long as it's working fine and hasn't caused me too many problems in the past. If there is no reason to get rid of a perfectly fine piece of hardware, don't.

  • What factors do you consider when trying to determine the risks associated with continuing to use the equipment?

The top factors I always consider are security, scalability, and reliability. Does this piece of hardware meet security standards? Will it be able to handle more load when the network infrastructure is upgraded in the future? How reliable has it been for me in the past?

Do you need the support that was offered prior to the hardware's end-of-life period? If so, you can look into other companies offering support for outdated hardware. If not, save the company some money and let your boss know you can handle the equipment without further support.

  • If you believe that the risk is too great, how do you convince management to loosen up on the purse strings in a difficult economy?

This is a tough task to accomplish; it requires a fair bit of social engineering on your part. But whatever you do, don't lie. Be up front with your boss and learn to translate technical jargon into information he or she can understand.

Instead of saying something like this: "We need to upgrade the backbone of our network, as it will not be able to keep up with the required throughput once we add a new office full of employees,"

try something such as: "We will need to spend roughly $1000 on part X. Adding a new office means we will need to upgrade our current hardware to handle more computers."

Solution 2:

  • Under what conditions would you continue using the equipment?

Older gear is fine for development, test or scratch boxes; basically, anything that will not (significantly) hurt the business when it goes boom.

If it's deemed production, mission critical, or the business requires you to maintain an SLA on it, then kit without support is not fit for the job.

  • What factors do you consider when trying to determine the risks associated with continuing to use the equipment?

The biggest consideration is the availability of parts. If you can't get a new power supply or disk when the server throws one, the business stops and starts losing money. That's reason enough to not have it in production.

This can be mitigated if you are able to self-spare (acquire spare drives, power supplies, and RAM) while parts are still available from the vendor. If you need 'em, great; if not, you've just wasted a chunk of next year's hardware refresh budget. Bear in mind that parts bought after the server has been end-of-life'd are significantly more expensive than equivalent parts for current-generation gear.
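If you want to sanity-check whether stockpiling is worth it, the trade-off boils down to a simple expected-cost comparison. Here's a minimal sketch in Python; every number in it is an assumption, so plug in your own vendor quotes and failure history:

    # Compare buying a spare now vs. gambling on EOL part prices later.
    # All prices and probabilities below are illustrative assumptions.
    spare_cost_now = 400.0         # spare PSU while the vendor still stocks it (assumed)
    spare_cost_after_eol = 1200.0  # typical markup once the part is EOL'd (assumed)
    failure_probability = 0.15     # chance the part fails before your refresh (assumed)

    cost_if_stockpiling = spare_cost_now  # paid up front, no matter what
    expected_cost_if_waiting = failure_probability * spare_cost_after_eol

    print(f"Stockpile now: ${cost_if_stockpiling:.2f}")
    print(f"Wait and see:  ${expected_cost_if_waiting:.2f} expected")

Note this ignores the downtime while you hunt for the part after a failure, which is usually the bigger number anyway.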

  • If you believe that the risk is too great, how do you convince management to loosen up on the purse strings in a difficult economy?

There are two ways to attack this: how much it would cost the business if the kit failed, and how much you can save by replacing it.

The first is hard to define unless you can put a value on the work done by the server. If the server allows people to buy your product and you know the average value of sales per day, this is pretty easy. If it's a development server, the cost of lost developer productivity while the server is dead should also be easy to work out. If you don't have access to these numbers, you may need help from your finance department to work them out.
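As a rough illustration, here's the kind of back-of-the-envelope sum this boils down to, sketched in Python. Every figure is a made-up placeholder; substitute your own sales, outage, and replacement numbers:

    # Rough downtime-cost estimate for a revenue-generating server.
    # All inputs below are illustrative assumptions, not real figures.
    avg_sales_per_day = 20_000.0   # revenue flowing through the server (assumed)
    expected_outage_days = 3.0     # time to source an EOL part and rebuild (assumed)
    replacement_cost = 8_000.0     # price of a supported replacement box (assumed)

    lost_revenue = avg_sales_per_day * expected_outage_days

    print(f"One failure costs roughly ${lost_revenue:,.0f} in lost sales,")
    print(f"versus ${replacement_cost:,.0f} for a supported replacement up front.")

If the expected loss from a single failure dwarfs the replacement cost, the argument mostly makes itself.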

The latter is significantly easier for you: prove you can do more with less new kit. Look into virtualisation and consolidation to reduce the box count (big wins there - I've got ~100 VMs on ~7 hosts' worth of resources), define the reduction in effort required to support fewer boxes (and therefore how much more time you will have for projects and improving the environment), and work out the standing datacenter cost savings for power, cooling, and rackspace.
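To put numbers on the standing costs, a quick sketch of the consolidation maths; again, every input here is an assumption, so plug in your own host counts, power draw, and electricity rate:

    # Fewer boxes means lower standing power and cooling costs.
    # All inputs below are illustrative assumptions.
    old_host_count = 20
    new_host_count = 7              # e.g. after virtualising onto bigger hosts
    watts_per_host = 500.0          # average draw per server (assumed)
    power_cost_per_kwh = 0.12       # your electricity rate (assumed)
    cooling_overhead = 1.5          # PUE-style multiplier for cooling (assumed)
    hours_per_year = 24 * 365

    def yearly_power_cost(hosts):
        kwh = hosts * watts_per_host / 1000.0 * hours_per_year
        return kwh * power_cost_per_kwh * cooling_overhead

    savings = yearly_power_cost(old_host_count) - yearly_power_cost(new_host_count)
    print(f"Annual power and cooling savings: ${savings:,.0f}")

Rackspace savings stack on top of that, and the reduced admin effort is harder to quantify but still worth stating.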

Moving forward, build planned obsolescence into your projects from day one, and start the refresh cycle six months before the switch-off date you specified (and had management agree to) in the project documentation. Revisiting the environment at ~2 1/2 years gives you plenty of time to jump through the necessary management and finance hoops to get new kit ordered, and to implement a smooth migration before you send the old kit to the big vendor in the sky.

Having management buy in to the refresh cycle will help in the long run ... as long as you can sell the value of it.

Good luck.