Recommended disk/partition setup for a SQL Server
Solution 1:
Here's a nice blog post: http://sqlserveradvisor.blogspot.com/2009/03/sql-server-disk-configuration.html
White paper on disk alignment: http://msdn.microsoft.com/en-us/library/dd758814.aspx
In short: your OS should be on RAID 1, your data files on RAID 10 (preferably), and your log files on RAID 1.
SQL Performance article: http://www.sql-server-performance.com/faq/raid_1_raid_5_p1.aspx
PDF on top 10 best performance tips: http://www.stlssug.org/docs/Best_Practices_for_Performance.pdf
Also remember to put your TEMPDB on a separate disk for performance reasons. I'm sure Paul Randal will come in here and blow your mind with why in a bit.
MS says why for tempdb: http://msdn.microsoft.com/en-us/library/ms175527.aspx
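If you do move tempdb to its own drive, it's just a couple of ALTER DATABASE statements. A minimal sketch follows - the T: drive and paths are placeholders for wherever your dedicated array is mounted; tempdev/templog are the default logical file names, so check yours first:

    -- Check the current logical names and locations of the tempdb files
    SELECT name, physical_name FROM tempdb.sys.database_files;

    -- Point tempdb's data and log files at the dedicated drive
    -- (placeholder paths; the move takes effect after a SQL Server service restart)
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');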
Solution 2:
This is a big 'it depends' question.
I can't answer the how to create the individual RAID arrays question for you, as I'm not a storage expert, but I can help you with the rest.
The first thing you need to consider is the workload on the various databases - OLTP (read/write) or DSS/DW (read-mostly). For read/write workloads, you should be looking at RAID 1 or RAID 10 (RAID 1+0), as these provide redundancy and great read/write performance. For read-mostly workloads you can use RAID 5. The reason RAID 5 should not be used for read/write workloads is that you pay a performance penalty on writes.
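If you're not sure which bucket a database falls into, the cumulative file stats give a rough read/write profile since the last restart. Something along these lines (SQL Server 2005 and later):

    -- Rough read vs. write profile per database, from cumulative IO counters
    SELECT  DB_NAME(vfs.database_id)                 AS database_name,
            SUM(vfs.num_of_reads)                    AS total_reads,
            SUM(vfs.num_of_writes)                   AS total_writes,
            SUM(vfs.num_of_bytes_read) / 1048576     AS mb_read,
            SUM(vfs.num_of_bytes_written) / 1048576  AS mb_written
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    GROUP BY vfs.database_id
    ORDER BY total_writes DESC;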
Transaction logs, by their very nature, are read/write (or write-mostly, depending on whether you're using the transaction log for anything - e.g. log backups or replication) and so should never be put on RAID 5.
This means that for some databases and workloads, you may have data files on RAID 5 and log files on RAID 1/10, and for other databases you may have everything on RAID 1/10. Going further, if you have a partitioned database, it may contain some read-mostly and some read/write data, possibly even within the same table. This could be split into separate filegroups and then each filegroup put on an appropriate RAID level.
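As a rough sketch of that filegroup approach - the database name, filegroup name, and paths below are made up purely for illustration:

    -- Split read-mostly archive data onto its own filegroup, which can then
    -- live on a different RAID level than the read/write data
    ALTER DATABASE Sales ADD FILEGROUP ArchiveFG;

    ALTER DATABASE Sales
        ADD FILE (NAME = Sales_Archive1,
                  FILENAME = 'E:\SQLData\Sales_Archive1.ndf',
                  SIZE = 10GB)
        TO FILEGROUP ArchiveFG;

    -- Archive tables (or partitions) can then be created on that filegroup
    CREATE TABLE dbo.OrderHistory
    (
        OrderID   int      NOT NULL,
        OrderDate datetime NOT NULL,
        Amount    money    NOT NULL
    ) ON ArchiveFG;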
Separation of actual databases again depends on the workload, and the capabilities of the underlying IO subsystem - a higher degree of separation may be required for storing things on individual RAID arrays than on a SAN, for instance.
Tempdb is a special case all on its own, as it's usually a heavily-loaded database and should be stored separately from the other databases. The system databases should not be heavily used and can be placed anywhere as long as there is redundancy.
Here's a link to a whitepaper I helped write that should help you: Physical Database Storage Design. Also make sure your IO subsystem can handle the anticipated workload - see this whitepaper: Predeployment I/O Best Practices. Finally, make sure that you use the correct RAID stripe size (usually 64K or higher on newer systems), the correct NTFS allocation unit size (usually 64K), and that on systems prior to Windows Server 2008, you set the disk partition offset correctly. For info on these, and pointers to more information on them and why you should configure them this way, see this blog post: Are your disk partition offsets, RAID stripe sizes, and NTFS allocation units set correctly?.
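One quick sanity check on whether the IO subsystem is keeping up with the current workload is the per-file latency you can derive from the virtual file stats DMV - sustained averages well above what your storage should deliver are a red flag. A rough sketch:

    -- Average IO latency (ms) per database file since the last restart
    SELECT  DB_NAME(vfs.database_id)          AS database_name,
            mf.physical_name,
            vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
            vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN    sys.master_files AS mf
            ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
    ORDER BY avg_write_ms DESC;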
Bottom line: know your workload and your IO subsystem capabilities and then implement accordingly.
I hope this is helpful to you.
PS: As far as tempdb is concerned, how you should configure it is a big can of worms and there's all kinds of conflicting information out there. I wrote a comprehensive blog post about tempdb data file configuration at Misconceptions around TF 1118.
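Without rehashing that post, one common layout is several equally-sized tempdb data files on the dedicated tempdb drive. The file names, count, sizes, and T:\ path below are purely illustrative - size any new files the same as your existing tempdev file:

    -- Add extra, equally-sized tempdb data files on the dedicated drive
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf',
                  SIZE = 4GB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdb3.ndf',
                  SIZE = 4GB, FILEGROWTH = 512MB);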
Solution 3:
The short answer for servers I've set up has always been:
Logs on separate physical disks, RAID 1 or 10 (mirroring + striping)
Database on its own disks; depending on performance needs, usually RAID 5
Lots of cache on the RAID controller(s)
Preferably stick your OS and Windows pagefile on a separate array again, usually just a mirror (RAID 1). This keeps all write operations separated so heavy load in one place doesn't drag everything else down.
What I've experienced in the past is that having database writes + log writes + pagefile writes will bog down a RAID 5 array, and performance will go to heck in a handbasket. The problem is that your performance will be fine in testing, dev, etc., but when you get into production and usage ramps up, this problem will appear "out of the blue" and user complaints will skyrocket.
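For what it's worth, that data/log separation is easy to bake in at database creation time. The drive letters, names, and sizes here are just placeholders for however your arrays are mounted:

    -- Data files on the data array, log file on the separate log array
    CREATE DATABASE AppDB
    ON PRIMARY
        (NAME = AppDB_data, FILENAME = 'D:\SQLData\AppDB.mdf',
         SIZE = 20GB, FILEGROWTH = 1GB)
    LOG ON
        (NAME = AppDB_log, FILENAME = 'L:\SQLLogs\AppDB_log.ldf',
         SIZE = 4GB, FILEGROWTH = 512MB);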
Solution 4:
There are far better MSSQL guys on here than me, but in general terms I'd suggest the following:
OS and code on C: - this should be the local disks, a RAID 1 array pair - we use 2 x 2.5-inch SAS 146GB 10krpm disks for this, but you could use 2 x SATA 7.2krpm disks.
Data should be on a pretty fast (10krpm or better) RAID 1/10, 5/50/6/60 array of whatever size you need - we hold ours on FC SAN LUNs, usually on a 'tier 2'/10krpm disk group.
Logs should be on a separate VERY FAST (15krpm) small (10GB or less?) RAID 1 array pair - we hold ours on FC SAN LUNs, usually on a very small 'tier 1'/15krpm disk group or on a 'tier 0'/SSD group.
Either way you want each chunk of these on separate spindles/arrays for performance - of course it'll all work off a single disk but I guess you're looking for a balance of performance and cost.
We store our master/tempdb with our regular databases but you could break it into a separate data array LUN.
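If you want to see which drive every database's files (including master and tempdb) currently sit on before deciding what's worth breaking out onto its own LUN, something like this works:

    -- Current physical location of every data and log file on the instance
    SELECT  DB_NAME(database_id) AS database_name,
            type_desc,            -- ROWS or LOG
            name                  AS logical_name,
            physical_name
    FROM    sys.master_files
    ORDER BY database_name, type_desc;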
Hope this helps.