Solution 1:

Your problem probably lies with the IDE drives in the older machine. I've hit this myself on large servers (15-drive "SATA I" arrays). Depending on the load, you can probably do fine upgrading to a server with new SATA drives, or, if you have the money, SAS. As an example of the boost you'll get: when I went from a 15-drive RAID 5 of SATA drives (on a 3ware 9000 controller) to a 15-drive RAID 5 of SATA II drives (with NCQ, on a 3ware 9550), my performance more than doubled. As James posted, iostat will show you how loaded your disks really are, so you can confirm that your drives are the bottleneck and not your CPU.
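To get a quick read on whether you're disk-bound or CPU-bound, vmstat works too (a rough sketch; the thresholds below are rules of thumb, not hard numbers):

    # Report system-wide stats every 5 seconds; watch the "wa" (I/O wait)
    # and "us"/"sy" (user/system) columns under "cpu"
    vmstat 5

    # If "wa" is consistently high (say 30%+) while "us" + "sy" stay low,
    # the CPU is mostly sitting idle waiting on the disks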

Going to a full-blown SAN for an environment your size wouldn't be cost effective. You'd be better off buying a newer server with more capacity and faster drives.

Solution 2:

Use the "iostat -x" command to check your disk utilization. The await column is the average wait time in milliseconds. If it's consistently high (more than a couple hundred) on your storage volumes, then you're definitely I/O bound.
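For reference, an invocation looks something like this (column names are from the Linux sysstat version of iostat, so yours may differ slightly):

    # Extended per-device statistics, refreshed every 5 seconds
    iostat -x 5

    # Columns worth watching on the data volumes:
    #   await - average time (ms) a request spends queued plus serviced
    #   %util - how busy the device is; pegged near 100% means saturated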

Assuming this is the case, either a bigger server or a SAN could meet your needs, so long as it's sized appropriately. In general, you'll improve performance by moving to something that has faster disks, and more of them. More spindles means more simultaneous reads and writes.

Without knowing anything else about your operation, I would suggest that a SAN is probably overkill. Where I work, we use a SAN when we need multiple computers to share the same storage at the block level, or when we need really fast disk I/O (think database or engineering applications). A SAN is also better if you know you're going to keep adding disk space; they're generally designed to scale big.

For your purposes, though, I think you would probably do just fine with a server upgrade, and maybe an external disk array. For instance, a newer DL380 G5 (or G6) will house 8 (or 16) SAS drives, which are about as fast as you get outside of Fibre Channel or SSD. You could hang an MSA70 off the back of it to get another 25 SAS drive slots. Go 64-bit and give it plenty of RAM. Linux will cache files in RAM, and I'd imagine Windows does the same, which helps improve performance if your users frequently access the same set of files.
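If you want to see how much of your RAM Linux is already spending on file caching, something like this works (output layout varies a bit by distro and procps version):

    # Show memory usage in megabytes
    free -m

    # The "cached" column is file data held in the page cache; on a
    # file server with plenty of RAM it should grow nice and large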

Solution 3:

A SAN is probably overkill, as another person said. Likewise, make sure you go x64 on the Windows boxes.

As far as Windows 98 goes: who says WS2008 won't support Win9x clients? You just need to scale some default security settings back (see the sketch below).
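For illustration, the usual culprits are NTLMv2-only authentication and mandatory SMB signing. A rough sketch of the registry equivalents follows; in practice you'd set these through Group Policy, the exact values depend on your environment, and loosening them has real security implications:

    :: Allow LM and NTLM responses (0 = send LM & NTLM responses)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 0 /f

    :: Stop requiring SMB signing on the server side
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v RequireSecuritySignature /t REG_DWORD /d 0 /f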

Thanks,
Brian Desmond
Active Directory MVP

Solution 4:

Buying a SAN is a very expensive proposition. Depending on the solution you look at, it can easily run into hundreds of thousands of dollars.

A couple of decent servers with some fast hard drives will probably be more cost effective.

Solution 5:

I've run into applications like that, and I feel your pain. Lots of itty-bitty transfers are something Samba isn't that good at when there are lots of folks doing them at once. Moving to Windows, even on the same hardware, would probably buy you some CPU headroom just from that.
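That said, if you stay on Samba for a while, a few smb.conf settings are often worth experimenting with for this kind of workload (a sketch only; the defaults are sensible and the gains vary widely, so benchmark before and after):

    [global]
        # Disable Nagle's algorithm for lots of small SMB requests
        socket options = TCP_NODELAY
        # Let the kernel copy file data straight to the socket
        use sendfile = yes
        # Opportunistic locking lets clients cache files locally
        oplocks = yes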

IDE/SATA isn't that great for lots of itty-bitty random I/O, which is why I'm seconding MediaManNJ in suggesting SAS drives if you can afford them. Moving to newer hardware will also increase the size of the caches sitting between the server and the disk platters, which may help with the itty-bitty I/O; write reordering can do a lot if done right.

Going 64-bit will increase your cache sizes by quite a bit; handling large amounts of memory was something the 32-bit Linux kernel wasn't especially great at either.

For true Blazing Speed On The Cheap, a dual-channel RAID card and one of those MSA70s will really help reduce I/O bottlenecks. HP RAID cards allow RAID sets to span channels to further reduce I/O contention.