What is a reasonable maintenance schedule for Windows PCs?

Here are some "pages" out of my personal "operations manual":

  • All user data is saved on servers, period. It might be replicated to client computers via functionality like "Offline Files" (for laptop computers, particularly) and Outlook's "Cached Exchange Mode", but there is no primary storage of data on client computers, ever.

  • No recurring backups of client computers are performed. No data is stored there. Users are instructed (ideally by corporate policy documents) to save data in approved areas and told that anything saved outside those areas is not backed up.

  • Software should be installed via an automated mechanism wherever economically feasible. (The "break even" point, for my Customers, has been a program being installed on five (5) or more computers. If it goes on fewer I'll probably just do the installation manually.) A machine's Active Directory security group membership and location (OU) are sufficient to determine its software load.

  • I have taken an image of a client computer being used in a very business-critical role now and again, but in general the majority of client computers I work with are built up from their factory load and automated software installs. Where I've seen it done, maintaining a "library" of disk images of client computers has felt cumbersome and error-prone.

  • Since Windows 7 added software RAID-1 to the "Professional" OS I have made use of that, increasingly, for client computers that are in more "mission-critical" roles. Windows software RAID is much more forgiving and workable than "motherboard RAID" (which is nothing but trouble).

  • Antivirus software should use a "management console" that can provide centralized, automated alerting for fault or anomaly conditions. This often means buying "enterprise"-oriented antivirus software.

  • Computer environment settings (firewall, security options, etc) are pushed out via Group Policy. (Anything that can be done with Group Policy is done that way.)

  • No maintenance of the hardware (fans, etc) is done except when the environment is harsh, and even then only in a reactive manner. Hardware has gotten pretty solid in the last 10-15 years.

  • Updates are installed via WSUS. Compliance is tracked in WSUS and, if the environment warrants it (for PCI compliance, for example), with whatever auditing tool is financially appropriate. (SCCM is nice, for example, but not always appropriate from a cost perspective.)

Edit (now that I have a couple more minutes to write):

  • My definition of "user data" includes user profiles. I use Folder Redirection to get the big folders out of the profile. I generally redirect AppData, which seems to be heavily discouraged throughout the industry (because dimwit software developers make assumptions about the AppData folder being local that may not be true... >cough< Apple >cough< iTunes >cough<).

  • Users never have "Administrator" rights on client computers for their day-to-day user accounts. Dealing with small privately-held businesses, as I do, can often require some finesse in explaining to the owner why their user account doesn't have Administrator rights. (With the advent of scary-as-heck malware, though, making this argument has become a lot easier. Score one for malware, I guess...) I do create secondary local Administrator accounts for users who are technically competent and who have a legitimate need on a case-by-case basis (after consulting with my contact and weighing the pros and cons). Making this one change drastically decreases "software maintenance". If you do nothing else, do this.

Some of the goals of this methodology are:

  • Allow a user to "hot desk" if they have a major failure (smoke rolling out of the computer, etc). All their software might not be available (because of licensing limitations that cap installed seats, etc), but they should have basic functionality. (I support a reasonable number of client computers throughout my Customer base. I need a simple PC failure to be a non-emergency event or I can't scale to any significant number of Customers.)

  • Reduce most troubleshooting of user issues to determining whether the problem is user-profile-specific or machine-specific. User-profile-specific issues are resolved either by restoring the profile from a known-good backup or, in drastic situations, starting with a clean profile. Machine-specific problems are resolved by bringing out a spare machine or wiping / re-imaging the failed computer.

  • Eliminate data-loss impact when client computer hard disk drives eventually fail.

  • Make computer replacement (and keeping the Customer on a computer lifecycle plan) easy.


Keeping a separate image of each machine sounds like a very labor- and cost-intensive plan. How are you going to store all those images and keep them matched to each machine? Will you even be able to restore them properly when the old hardware dies and is replaced with new, different hardware?

I don't back up end-user machines. All their important data is stored on our file servers, Exchange, and SharePoint. Each user has their own personal network share as well as departmental ones. We educate all the users not to store data on their local machines.

All the machines are imaged with the software they need when we deploy them. Microsoft offers free tools to do this, like the Microsoft Deployment Toolkit and Windows Deployment Services. Or you can get System Center Configuration Manager, which also monitors the health of the machines, allowing you to ensure updates are applied and unauthorized software isn't installed. All the machines are set by domain policy to automatically apply updates from our WSUS server.

When a machine dies or gets a virus, we just replace/repair the hardware and reimage it. The new machine already has all the software the user needs; their network drives are automatically mapped by Group Policy as soon as they log in, printers are installed automatically, and so on. The machines are basically disposable.

For the most part I don't do ANY maintenance on the end-user machines unless there is a problem with them. System Center helps identify problems that the users don't report. WSUS and our antivirus software give reports on which machines are missing updates. When there IS a problem with a machine, it gets dusted out and its hardware checked over while we repair it.


Rethink your strategy. Don't back up client devices.

Spend your time doing the following:

  1. Create an OS image that can be deployed automatically over the network with WDS/MDT. When a computer gets hosed, just reimage it. This image should have all of your software/configuration in it.

  2. Centralize your user data on servers. This means having a proper mail server and not using POP3 with individual user backups. Just back up the mail server and you're done. This also means centralizing user files on file servers. You can use Group Policy to transparently redirect the Desktop, Documents, AppData, and other folders to this file server. You can use Offline Files to sync this data locally so users can still work remotely. Now, you have one machine to back up and not 25.

  3. Centralize your patching. Use WSUS so that you can generate reports on which clients have which patches. Then, you only need to "spot check" machines that aren't compliant (see the sketch just after this list). You should do this every month after Patch Tuesday.
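
Here's a minimal sketch of that spot check in Python, assuming you've exported a WSUS computer-status report to CSV. The column names (ComputerName, NeededCount, FailedCount, LastReported) are placeholders, so adjust them to whatever your export actually contains.

```
import csv
import sys
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=14)   # flag machines that haven't reported in two weeks
NOW = datetime.now()

def needs_attention(row):
    needed = int(row["NeededCount"])      # updates still needed
    failed = int(row["FailedCount"])      # updates that failed to install
    last = datetime.strptime(row["LastReported"], "%Y-%m-%d")
    return needed > 0 or failed > 0 or (NOW - last) > STALE_AFTER

with open(sys.argv[1], newline="") as report:
    flagged = [row for row in csv.DictReader(report) if needs_attention(row)]

for row in sorted(flagged, key=lambda r: r["ComputerName"]):
    print(f'{row["ComputerName"]}: needed={row["NeededCount"]} '
          f'failed={row["FailedCount"]} last reported {row["LastReported"]}')
```

Anything the script prints is a machine you go look at by hand; everything else is compliant and can be ignored that month.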

As for physical maintenance, unless you're in an exceptionally filthy environment (factory floor, inside of a vacuum cleaner, etc) don't open computers and dust/vacuum them out. In some instances, this can potentially void the warranty. In other cases, it's just a waste of time. If you must do this to sleep soundly at night, do it every few years.


(Wow-- I've never added a second answer to a question before... This is surreal!)

Everything else that I said in my other answer applies, pretty much, except that I'm going to go out on a limb and suggest client computer backup. (Yikes!)

The Client Computer Backup functionality in Windows Server 2012 R2 Essentials might be right up your alley. Functionally it's a block-level deduplicated backup store that sits on a server computer, combined with a client application that performs the backups and uploads data to the server. It's actually pretty slick, as such solutions go. There is a bare-metal restore ISO image that will allow you to restore a machine onto an empty hard disk drive. (I've used it before and it works rather well.)
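
To give a feel for what "block-level deduplicated" means (this is only an illustration of the general technique, not how the Essentials backup engine is actually implemented): files are split into blocks, each block is hashed, and only previously unseen blocks are stored, so identical data is only kept once.

```
import hashlib

BLOCK_SIZE = 64 * 1024   # fixed-size blocks for simplicity; real products vary
store = {}               # digest -> block bytes (the shared, deduplicated pool)
catalog = {}             # file path -> ordered list of block digests

def backup_file(path):
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)   # a block seen before costs nothing extra
            digests.append(digest)
    catalog[path] = digests

def restore_file(path, target):
    with open(target, "wb") as out:
        for digest in catalog[path]:
            out.write(store[digest])
```

Because 25 client machines running the same Windows build share the bulk of their blocks, the store ends up far smaller than 25 full images.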

Windows Server 2012 R2 Essentials requires no Client Access Licenses (CALs) for up to 25 clients. If you need more clients than that (which it sounds like you might), you should look at licensing Windows Server 2012 R2 Standard, purchasing the necessary CALs, and adding the Essentials Experience feature to the server.

Planning for disk utilization is going to be difficult because you won't know in advance how much "win" you're going to get from the deduplication. Your environment determines how well that will work, so there's no firm guideline to give you here.
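
If it helps to see how much the assumed ratio swings the numbers, here's some back-of-the-envelope arithmetic (the client count and per-machine usage are made-up figures, so plug in your own):

```
clients = 25
used_gb_per_client = 60                  # assumed average; measure your real machines
raw_gb = clients * used_gb_per_client    # 1,500 GB of raw client data

for ratio in (1.5, 2.0, 3.0):            # plausible dedup ratios, not guarantees
    print(f"{ratio}:1 dedup -> roughly {raw_gb / ratio:.0f} GB on the backup server")
```

That's anywhere from about 500 GB to 1 TB for the same 25 machines, which is why I'd size the backup volume generously and watch actual consumption for the first few weeks.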

For the cost of some software and a small server computer you could cobble together a pretty good PC backup solution. While I am somewhat "morally opposed" to the idea of client computer backups, I think it could be a viable strategy in your environment, given the amount of work it would take to transition it into a more "normal" corporate Windows environment.

If you perform backups of the server computer itself (which you should, obviously) you can also get an effective off-site backup of all the client computers using this tool. That would certainly be an attractive add-on, given that right now a fire or flood in your office would probably represent complete destruction of all your company's data.


I don't use Windows, but a lot of the basics carry over. The main thing you should try to do is set up a system that will alert you if part of it fails.

For instance, if I were backing up a folder to another computer every day, I might have a separate script running on the destination that checks the latest timestamp and, if no file has been modified in the past day, sends an email to say the backup is no longer running. This is just a vague example, but in general you can't remember to check everything yourself - Murphy's law says the one thing you forget to check is usually the thing that goes wrong, so it's best to have some sort of automation to detect failures.
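
To make that concrete, here's a minimal sketch of such a check, assuming it runs once a day (via cron or Task Scheduler) on the backup destination; the folder path, addresses, and mail server are placeholders:

```
import os
import smtplib
import time
from email.message import EmailMessage

BACKUP_DIR = "/srv/backups/clients"   # hypothetical destination folder
MAX_AGE = 24 * 60 * 60                # alert if nothing has changed in a day

def newest_mtime(root):
    newest = 0.0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            newest = max(newest, os.path.getmtime(os.path.join(dirpath, name)))
    return newest

if time.time() - newest_mtime(BACKUP_DIR) > MAX_AGE:
    msg = EmailMessage()
    msg["Subject"] = "Backup appears to have stopped"
    msg["From"] = "backup-monitor@example.com"
    msg["To"] = "admin@example.com"
    msg.set_content(f"No file under {BACKUP_DIR} has changed in the last 24 hours.")
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)
```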

Similarly, set up an automated testing suite for system basics, like antivirus version, firewall functionality, etc. I'd back up any important files on an hourly basis if at all possible, daily as a minimum, but it depends on how fault-tolerant your business is - how much will losing a day's worth of emails cost you?
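
The same pattern works for those system checks: keep a list of them and report anything that fails. The two checks below are only placeholders (a Linux-flavored firewall query and a "scanner binary exists" test); swap in whatever is meaningful on your platform.

```
import shutil
import subprocess

def firewall_active():
    # Placeholder: `ufw status` on many Linux distros; substitute your own check.
    out = subprocess.run(["ufw", "status"], capture_output=True, text=True)
    return "Status: active" in out.stdout

def antivirus_present():
    # Placeholder: just verifies the scanner binary is on the PATH.
    return shutil.which("clamscan") is not None

CHECKS = [
    ("firewall active", firewall_active),
    ("antivirus present", antivirus_present),
]

failures = []
for name, check in CHECKS:
    try:
        ok = check()
    except Exception as exc:          # a check that crashes counts as a failure too
        ok, name = False, f"{name} ({exc})"
    if not ok:
        failures.append(name)

if failures:
    print("FAILED:", ", ".join(failures))   # or feed this into your email alerting
```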

If you're worried about hard disk failure, look into RAID. Backups are for catastrophic failures (flood, fire, etc) and user error ("help, I deleted the wrong file!"). RAID is for disk failure.