How can I prevent accidentally jacking with a production database?

If this is something you see yourself doing often, automate it. And since you're both developers, writing some code should be in your wheelhouse. :) Seriously though... by automating it, you can do things like:

  • Verify that you're restoring on the correct server (i.e. no dev -> prod restores)
  • Verify that it's the right "type" of database (in your case, "staging" and "production")
  • Figure out what backup(s) to restore automatically by looking at the backup tables in msdb

Et cetera. You're only limited by your imagination.
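To make that concrete, here's a minimal Python sketch of the kind of sanity checks such a restore script could start with. The host names and the `_staging` naming convention are placeholder assumptions; adapt them to your setup. (The msdb query applies to SQL Server, where `msdb.dbo.backupset` records backup history.)

```python
# Sketch of pre-restore sanity checks (hypothetical host names and naming
# convention; adapt these to your environment).
import socket

# Servers on which an automated restore is allowed to run at all.
ALLOWED_RESTORE_TARGETS = {"dev-sql01", "staging-sql01"}

# On SQL Server, the most recent backup for a database can be looked up in msdb:
LATEST_BACKUP_SQL = (
    "SELECT TOP 1 backup_set_id, backup_finish_date "
    "FROM msdb.dbo.backupset WHERE database_name = ? "
    "ORDER BY backup_finish_date DESC"
)

def check_target_server(hostname: str) -> None:
    """Refuse to restore anywhere except known dev/staging servers."""
    if hostname not in ALLOWED_RESTORE_TARGETS:
        raise RuntimeError(f"Refusing to restore on {hostname!r}: not a dev/staging server")

def check_database_type(db_name: str, expected_suffix: str = "_staging") -> None:
    """Enforce a naming convention so production databases are never a valid target."""
    if not db_name.endswith(expected_suffix):
        raise RuntimeError(f"Refusing to restore {db_name!r}: name lacks {expected_suffix!r}")

if __name__ == "__main__":
    check_database_type("customers_staging")   # passes silently
    try:
        check_target_server("prod-sql01")      # hypothetical production host
    except RuntimeError as err:
        print(err)
```

The checks run before anything touches a database, so a restore pointed at the wrong place dies loudly instead of quietly succeeding.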


I disagree with the assumption in the question: this is a security problem. But I also disagree that automation is going to save the day on its own. I'll start with the problem:

You shouldn't be able to accidentally do anything to production!

That includes doing automated things accidentally.

You're confusing system security with concepts like "who's allowed to do what". Your development accounts should only be able to write to their copies, the version control server and the dev database. If they can read/write production, they can be hacked and exploited to steal customer data or (as you've demonstrated) can be mishandled into losing customer data.

You need to start by sorting your workflow out.

  • Your developer accounts should be able to write to their own copies, version control and perhaps pull from version control into a testing environment.

  • Backup users should only be able to read from production and write to your backup store (which should be aptly protected).

  • Doing any other read/write on production should require special and inconvenient authentication. You shouldn't be able to slip into it or forget you're logged in. Physical access control is useful here. Smart cards, flip-switches to "arm" the account, simultaneous-turn dual-key access.

    Accessing production shouldn't be something you need to do every day. Most of the work should be on your testing platform and out-of-hours deployments made to production after careful scrutiny. A little inconvenience won't kill you.

Automation is part of the solution.

I'm not blind to the fact that the full turnaround (uploading to VCS, checking coverage, pulling to test server, running automated tests, reauthentication, creating a backup, pulling from VCS) is a long process.

That's where automation can help, per Ben's answer. There are many different scripting languages that make running certain tasks much, much easier. Just make sure you don't make it too easy to do stupid things: your reauthentication steps should still be pronounced, and if they're dangerous, they should be inconvenient and hard to do without thinking.
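As a sketch of what a "pronounced" reauthentication step might look like inside a Python wrapper script (the environment name and prompt wording are made-up examples, not a real API):

```python
# Sketch of a deliberately inconvenient confirmation gate for automated tasks.
# The "production" environment name and prompt text are illustrative only.
def confirm_dangerous(action: str, environment: str, ask=input) -> bool:
    """Require the operator to retype the environment name before a prod action."""
    if environment != "production":
        return True  # keep low-risk environments convenient
    answer = ask(f"You are about to {action} on PRODUCTION. "
                 f"Type 'production' to continue: ")
    return answer.strip() == "production"
```

The point is that the dangerous path should never be the path of least resistance: a default of "no" plus a prompt you must answer deliberately is cheap insurance.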

But alone, automation is worse than useless. It'll just help you make bigger mistakes with less thought.

Suitable for teams of all sizes.

I noticed you pointing out the size of your team. I'm one guy and I put myself through this because it only takes one person to have an accident. There is an overhead but it's worth it. You end up with a much safer and much more secure development and production environment.


One of my coworkers has an interesting approach to this: his terminal color scheme for production is fugly. Gray and pink and hard to read, which is supposed to ensure that whatever he writes there, he really intended to write.

Your mileage may vary... and I probably don't have to say that it's hardly bulletproof on its own. :)


Developers shouldn't know the password to the production database. The prod password should be random and not memorable, something like the result of keyboard mashing ("Z^kC83N*(#$Hx"). Your dev password can be $YourDog'sName or correct horse battery staple or whatever.

Sure, you could find out what the password is, especially on a small team, by looking at the client application's configuration file. That should be the only place the prod password exists, which ensures that getting it requires a deliberate effort.
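If you'd rather not mash the keyboard yourself, here's one way to generate an unmemorable password with Python's standard secrets module (the length and alphabet are arbitrary choices, not requirements):

```python
# Generating an unmemorable, cryptographically random password.
# The length and character set here are arbitrary choices.
import secrets
import string

def random_password(length: int = 24) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

secrets draws from the OS's cryptographically secure randomness source, unlike the random module, which is fine for simulations but not for credentials.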

(As always, you should have point-in-time backups for your production database. For example, with MySQL, archive the binary logs as incremental backups. For PostgreSQL, archive the write-ahead logs. That is your protection of last resort for any kind of disaster, self-inflicted or otherwise.)


The short answer is RBAC - role-based access control.

Your permissions to all environments need to be different, and as annoying as things like UAC are, you need them, especially for PROD environments.

There is NEVER a reason for devs to have direct access to Prod - regardless of how small the organization/team is. Your "Dev" may also wear the "Stage" and "Prod" hats, but he needs to have different credentials and processes to hit different environments.

Is it annoying? Absolutely. But does it help prevent borking your environments? Absolutely.
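As a toy illustration of the RBAC idea above (every role, environment, and operation name here is a hypothetical example, not a real policy):

```python
# Toy illustration of role-based access control: a credential's role decides
# which (environment, operation) pairs it may use. All names are hypothetical.
ROLE_PERMISSIONS = {
    "developer":       {("dev", "read"), ("dev", "write"), ("staging", "read")},
    "release-manager": {("staging", "write"), ("prod", "read")},
    "backup":          {("prod", "read")},
}

def is_allowed(role: str, environment: str, operation: str) -> bool:
    """Deny by default: anything not explicitly granted to the role is refused."""
    return (environment, operation) in ROLE_PERMISSIONS.get(role, set())
```

Note that "developer" has no entry for prod at all: the same human can hold a second credential with the "release-manager" role, but the everyday dev credential simply cannot touch production.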