Poor internal database - replace it or chuck hardware at it?
Solution 1:
I'm going to disagree with everyone here. Chuck some hardware at it. It's cheap, fast, easy, and will buy you the time needed to implement a proper CRM solution. The reason I'm advocating something that is anathema to just about everyone not only on this board but on Stack Overflow as well is that I've been a project manager and manager, and have been on the "business" side for a while (business is in quotes due to my hatred for the word). Based on your description of the software, it will take close to a year to build a replacement. Just discovering and documenting the business rules and quirks will probably take two months. It will also be unbelievably expensive to develop, especially when compared to the cost of a tricked-out server.
I'm actually about to host a set of web apps for a company for just that reason. The internal IT department will not move them to better hardware because they want to redevelop them on a new platform. That cost is approximately triple what it would cost to move them to new hardware. Not to mention that the company might not have the contract renewed in a year.
Solution 2:
You might not need to do either. My suggestion is to simply add some indexes/keys to the table.
"tables with tens of thousands of rows with no primary key or index, which are also used heavily in multi-table joins"
Before spending a lot of money or time, take a couple of hours and add indexes (or primary keys, if you can) to any tables involved in those joins, particularly on the columns used in WHERE clauses. You could easily improve performance by a factor of 10 in just a few hours.
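As a hedged sketch (the table and column names below are hypothetical; substitute the tables and columns that actually appear in your joins and WHERE clauses), the fix can be as simple as:

    -- Hypothetical names: replace dbo.Orders / OrderID / CustomerID with
    -- the tables and columns from your own joins and WHERE clauses.
    ALTER TABLE dbo.Orders
        ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);

    CREATE INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID);

Run it off-hours if the tables are busy; building the clustered index rewrites the whole table.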
Solution 3:
The lack of disk I/O implies that queries are being fed mostly out of RAM. If your hot tables 'suddenly' no longer fit in RAM and the server starts working the disks, you may be in for a bad ride. 2 GB of RAM isn't very much these days, but back in the SQL 2000 era it would have been sizeable. I'm guessing that the amount of data the application normally manipulates is smaller than the RAM you have. You might want to look at the amount of used space in the data files; that will give you an idea of how much RAM the database might consume, worst case. SQL Server doesn't keep data it doesn't need in RAM, but it can be difficult to know which tables are used and when.
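One quick way to check that (it works on SQL 2000 as well as later versions) is sp_spaceused, which reports allocated versus used space; the table name below is just an example:

    -- Database-wide summary of reserved vs. used space.
    EXEC sp_spaceused;

    -- Per-table breakdown for a specific (hypothetical) table.
    EXEC sp_spaceused 'dbo.Orders';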
Hyperthreading isn't always helpful with SQL Server; you may get better performance turning it off. It's hard to test, though, because flipping it off and on requires a reboot, and that's a big hassle on a production server.
"Hundreds of thousands of queries a minute" translates to thousands of queries a second. That sounds pretty busy, but much of that traffic may just be cursor fetches by Access. Access is particularly bad at efficiently retrieving result sets from SQL. You may get better performance by turning the SQL Server parallelization setting off.
You also want to look for blocking (a quick check is sketched below). Throwing hardware at a blocking problem does not always produce the hoped-for dramatic improvement. If there is not much blocking and queries are satisfied from RAM rather than disk, you are basically relying on the processors' raw grunt and their ability to pull data across the memory channels. In that case, faster hardware ought to provide a good improvement. If you are in a hurry to get past this issue and growing slowly, that might be good enough.
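On a SQL 2000-era server, a quick way to spot blocking is sp_who2 (watch the BlkBy column) or the blocked column in sysprocesses; for example:

    -- Sessions that are currently blocked, and which spid is blocking them.
    SELECT spid, blocked, waittime, lastwaittype, cmd
    FROM master..sysprocesses
    WHERE blocked <> 0;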
As a solution, adding hardware doesn't scale as well as database improvements. If you get a surge in growth, you may find your new hardware struggling. Another thought is that successful applications draw users. If the application becomes more responsive, users may be more likely to run more reports and such on it than they would if they needed to go for coffee while waiting for the report to finish.
If the database schema is really bad, you may be able to get some performance wins simply by looking at the indexing on the tables. Concentrate on tables that you know get queried often. You can use Profiler to watch queries running against the server; tell it to look for queries that read a lot of data (say, 100,000 pages) and then work down toward queries that read less. You mentioned that some of the tables don't have keys. Are there natural keys in the data that just aren't enforced by constraints or unique indexes?
Do the tables have clustered indexes? Lack of clustered indexing can cause all sorts of secondary effects.
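To get a quick list of candidates, you can ask the catalog which user tables are missing a primary key or a clustered index (a SQL 2000-compatible sketch using sysobjects and OBJECTPROPERTY):

    -- User tables lacking a primary key and/or a clustered index.
    SELECT name AS TableName,
           OBJECTPROPERTY(id, 'TableHasPrimaryKey') AS HasPrimaryKey,
           OBJECTPROPERTY(id, 'TableHasClustIndex') AS HasClusteredIndex
    FROM sysobjects
    WHERE xtype = 'U'
      AND (OBJECTPROPERTY(id, 'TableHasPrimaryKey') = 0
           OR OBJECTPROPERTY(id, 'TableHasClustIndex') = 0)
    ORDER BY name;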
Are there lots of nonclustered indexes, with many columns? This is often an attempt to build many covering indexes, rather than implementing a more effective indexing strategy. SQL Server can effectively build covering indexes on the fly during a query, if it makes sense to do so and there are supporting nonclustered and clustered indexes.
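For contrast, a deliberately built covering index looks like the sketch below (hypothetical table and columns; on SQL 2000 there is no INCLUDE clause, so the covered columns simply go into the key):

    -- Covers a query that filters on CustomerID/OrderDate and returns TotalAmount,
    -- so the table itself never has to be read.
    CREATE INDEX IX_Orders_Customer_Covering
        ON dbo.Orders (CustomerID, OrderDate, TotalAmount);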
Lastly, it's worth asking: Is maintenance (reindexing and/or update statistics) being done on the tables?
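On SQL 2000 that typically means DBCC DBREINDEX and UPDATE STATISTICS (or sp_updatestats); a minimal sketch against one hypothetical table:

    -- Rebuild all indexes on a table and refresh its statistics.
    DBCC DBREINDEX ('dbo.Orders');
    UPDATE STATISTICS dbo.Orders;

    -- Or refresh out-of-date statistics across the whole database.
    EXEC sp_updatestats;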
Solution 4:
This is a business question, not a technical question.
As a business owner: How strategic is the system to the business? The less strategic it is, the less I care about fixing it, and any money spent on it is money I could be using elsewhere to grow my business.
Computer folk scare me: they all get in a big room & argue about design & cost me a fortune. Keep the system going! Whether that means performance tuning (without re-architecting) or throwing more hardware at it, it's only a priority if it stops working.
As an IT consultant: Your system is legacy and has hidden operational costs. We can design a system that's right for you, that will scale and provide a platform for future growth & strategic advantage. Sign here & all your dreams will come true.
As an IT employee: I can be the superhero here & save the company, averting an imminent disaster by optimizing the hell out of this thing! My manager will shower me with gifts & praise because I'll have saved the company thousands.