SQL Server Security Crackdown

I've been working for two years now as a SQL Server DBA for a city.

I was brought in to support an existing DBA, so I've been slowly taking everything in and trying to build credibility in the organization. Now it's time to tackle the toughest and most important issue: security.

Our main server is a failover cluster running SQL Server 2005. It holds 166 databases and serves around 550 connections, with connection pooling for most applications, so it easily supports thousands of users. We also serve a public website that gets a fair amount of traffic. Vendor apps, in-house apps, Access back-ends - quite a variety, with a few critical applications.

Many developers have production access. Changes are being made outside the RFC process, and we have no idea what data is being changed by developers. They also like having production access during system implementations, which I'd like to stop as well.

With so many applications running, any sane DBA would lock the server down.

How would you transition a team of developers from wide open security to a more restrictive environment?

I'm working on an emergency request system that will give them access to their database(s) for a limited time. This would give us auditing of access and DDL changes. Hopefully it will rarely be used, since a request means we did not respond to a page. It will use HTTP, an encrypted web.config, login/password displayed on screen (not emailed), and request throttling of three requests per 12 hours. We would also be paged whenever a login is issued.
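On the database side, the grant itself could be as simple as a stored procedure that records the request and adds the developer's Windows login to a role, with a scheduled job or a paired procedure revoking it when the window expires. A minimal sketch, assuming a hypothetical DBAUtil database, EmergencyAccessLog table, and GrantEmergencyAccess procedure (db_owner is used only for brevity; a narrower custom role would usually be better):

    -- Hypothetical audit table and grant procedure in a DBA utility database.
    USE DBAUtil;
    GO
    CREATE TABLE dbo.EmergencyAccessLog (
        LogId        int IDENTITY(1,1) PRIMARY KEY,
        GrantedAt    datetime      NOT NULL DEFAULT GETDATE(),
        ExpiresAt    datetime      NOT NULL,
        LoginName    sysname       NOT NULL,
        DatabaseName sysname       NOT NULL,
        Reason       nvarchar(500) NULL
    );
    GO
    CREATE PROCEDURE dbo.GrantEmergencyAccess
        @LoginName    sysname,
        @DatabaseName sysname,
        @Reason       nvarchar(500),
        @Hours        int = 4
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Record the request so every grant is auditable.
        INSERT INTO dbo.EmergencyAccessLog (ExpiresAt, LoginName, DatabaseName, Reason)
        VALUES (DATEADD(hour, @Hours, GETDATE()), @LoginName, @DatabaseName, @Reason);

        -- Create the database user if needed and add it to a role in the target database.
        DECLARE @sql nvarchar(max);
        SET @sql =
            N'USE ' + QUOTENAME(@DatabaseName) + N'; ' +
            N'IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = ' + QUOTENAME(@LoginName, '''') + N') ' +
            N'CREATE USER ' + QUOTENAME(@LoginName) + N' FOR LOGIN ' + QUOTENAME(@LoginName) + N'; ' +
            N'EXEC sp_addrolemember ''db_owner'', ' + QUOTENAME(@LoginName, '''') + N';';
        EXEC (@sql);
    END
    GO

A matching revoke step (a SQL Agent job reading ExpiresAt and calling sp_droprolemember) would close the window, and that is also where the paging hook could live.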

Any warnings about this type of system?

The major risk I can see is that if a user's NT login were compromised, so would their databases be. We have that vulnerability right now, since they always have access, but we would never know if it happened.


Edit: Development and staging are robust systems with 8 cores and 8 GB of memory. I'm planning to implement a production-to-staging copy mechanism that developers can operate themselves, provided their database isn't massive and doesn't contain sensitive information.
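The copy mechanism could be little more than a wrapper around a backup/restore pair; something along these lines, where the database name, logical file names, and paths are all placeholders:

    -- Sketch of a production-to-staging refresh for one database.
    -- Run on the production server:
    BACKUP DATABASE AppDb
        TO DISK = N'\\backupshare\refresh\AppDb.bak'
        WITH COPY_ONLY, INIT;

    -- Run on the staging server:
    RESTORE DATABASE AppDb
        FROM DISK = N'\\backupshare\refresh\AppDb.bak'
        WITH REPLACE,
             MOVE N'AppDb_Data' TO N'E:\SQLData\AppDb.mdf',
             MOVE N'AppDb_Log'  TO N'F:\SQLLogs\AppDb_log.ldf';

After the restore you would typically re-map orphaned database users (sp_change_users_login) and strip out anything sensitive before handing the copy over to developers.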

Management is supportive. If we can guarantee them access during emergencies, their biggest worry is taken care of. The other problem might be access to production during implementations, which I'd rather not give. We can practice the move to staging first and test any fixes during the go-live there, with DBAs migrating the changes to production.


I've been working on the same thing at my company. I actually found that most people were more receptive to this than I expected, but a few things have been key:

  • Remind them that the primary purpose of this is to prevent accidents. Accidents are something that everyone can relate to. Almost no one thinks they will ever be hacked, and talking about internal hackers puts them on the defensive immediately ("don't you trust me?").
  • Quickly respond to requests when they do run into a block.
  • Be willing to work with them. We found that some of the things I initially locked down were needed for them to do their jobs and wouldn't cause problems if they were opened back up a little.
  • Be consistent. You have to apply these policies universally. When possible, apply them to yourself as well.

I've also been trying to do this in smaller increments to make it less disruptive. For example:

  • All new databases now have restrictive permissions from the start. You don't miss what you've never had.
  • Start by removing sysadmin and other server-level rights, if they currently have them. Most people will agree that they don't need to reconfigure the server, and it removes a lot of security holes (see the sketch after this list). Once they have seen that they can live with that, it sets the stage for more incremental tightening.
  • Tighten down one application's databases at a time. This makes it more manageable for you, and it lets you learn over time what rights they actually need, so future tightening goes more smoothly.
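For the server-level cleanup, it helps to start from an inventory of who actually holds sysadmin today and remove members one at a time. A hedged sketch, with the login name as a placeholder:

    -- List current members of the sysadmin fixed server role.
    EXEC sp_helpsrvrolemember 'sysadmin';

    -- Remove a specific login from sysadmin (placeholder name).
    EXEC sp_dropsrvrolemember 'DOMAIN\SomeDeveloper', 'sysadmin';

    -- If the login still needs to connect, grant narrower rights instead,
    -- e.g. database-role membership in the relevant database:
    -- EXEC sp_addrolemember 'db_datareader', 'DOMAIN\SomeDeveloper';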

Good luck!


Start by putting a DDL trigger in place to pick up any schema changes. Just log every DDL command that goes through, recording who ran it and when.
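A minimal server-scoped DDL trigger for that could look like the following, assuming a hypothetical DBAUtil database to hold the log (the table and trigger names are placeholders):

    -- Hypothetical audit table for schema changes.
    CREATE TABLE DBAUtil.dbo.DDLAudit (
        AuditId      int IDENTITY(1,1) PRIMARY KEY,
        EventTime    datetime      NOT NULL DEFAULT GETDATE(),
        LoginName    nvarchar(128) NOT NULL,
        DatabaseName nvarchar(128) NULL,
        EventType    nvarchar(128) NULL,
        ObjectName   nvarchar(128) NULL,
        CommandText  nvarchar(max) NULL
    );
    GO
    -- Server-scoped trigger: fires on database-level DDL in every database.
    CREATE TRIGGER trg_AuditDDL
    ON ALL SERVER
    FOR DDL_DATABASE_LEVEL_EVENTS
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @e xml;
        SET @e = EVENTDATA();

        INSERT INTO DBAUtil.dbo.DDLAudit (LoginName, DatabaseName, EventType, ObjectName, CommandText)
        SELECT  @e.value('(/EVENT_INSTANCE/LoginName)[1]',               'nvarchar(128)'),
                @e.value('(/EVENT_INSTANCE/DatabaseName)[1]',            'nvarchar(128)'),
                @e.value('(/EVENT_INSTANCE/EventType)[1]',               'nvarchar(128)'),
                @e.value('(/EVENT_INSTANCE/ObjectName)[1]',              'nvarchar(128)'),
                @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
    END
    GO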

You may find that things aren't quite as bad as they seem. Work out who is pushing out the most changes and get them on board. After that, you should be able to tie things down better.

Also - put a trace on to pick up what's being run from applications that aren't your proper apps. Look at the ApplicationName and/or HostName fields in the trace. This will help you get a feel for the ad-hoc queries. Then you should be able to find out who jumps in to hack data most often.
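Assuming a server-side trace is already writing to a file (for example, one scripted out of Profiler), a rough way to summarize it by application and host might look like this; the file path and the application-name filter are placeholders:

    -- Summarize an existing trace file by application, host, and login
    -- to spot connections that aren't coming from the known apps.
    SELECT  ApplicationName,
            HostName,
            LoginName,
            COUNT(*) AS Statements
    FROM    sys.fn_trace_gettable(N'E:\Traces\adhoc_activity.trc', DEFAULT)
    WHERE   ApplicationName NOT LIKE N'YourKnownApp%'   -- placeholder filter
    GROUP BY ApplicationName, HostName, LoginName
    ORDER BY Statements DESC;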

I'd always start with monitoring this type of situation first. Lock it down soon, but start monitoring it now, to get a picture of how big the problem is.


That is a serious undertaking. These people are probably very set in their ways, and it's going to take a lot of support from their superiors to change that. Do they already have a test environment? I'm assuming they do, but if they don't, that should be step one. Also, make sure it's up to snuff; if it's old and slow, they will cringe at being forced to use it. You need to be prepared to throw them a bone in other areas, such as test, in order to ease them out of their ways diplomatically.