Best practice for 24 Disk Array
We have just migrated from our old fibre-channel SAN storage to an IBM Storwize V3700 with 24 × 600GB SAS disks.
This storage is connected directly to two IBM servers running ESXi 5.5, each with two 6Gbps multipath SAS controllers.
Up until now, I have configured the storage into multiple RAID5 groups, each for a different server/purpose: mainly OracleDB, Oracle archive, SQL Server, and the rest (file server, mail, etc.). The most critical applications are Oracle and SQL Server.
My first concern is safety and then performance for our applications. So I have decided to go with RAID6 + spare(s).
My main concern now is: since we are using ESXi, should I configure the entire storage as one single RAID array (saving space) and carve out datastore volumes from ESXi for each server, or is that bad practice and is it better to create separate hardware RAID groups?
Each vendor has their own recommendations, so start by asking IBM. You can usually open a ticket asking for configuration advice without paying for additional support. That, or whoever sold it to you can.
Briefly googling, I discovered this Redbook. On page 212 you'll see that you likely want basic RAID 6, which means one spare and a "drives per array" goal of 12. That will mean two arrays: one of 12 drives and one of 11. I wouldn't recommend RAID 10, because you lose half your capacity. It does avoid parity, but that's something you only need to worry about on low-end or internal storage; your storage will hide the parity overhead of random overwrites behind its cache. My shop uses RAID 6 exclusively for half a petabyte of VMware 5.5, and it's fine.
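To put rough numbers on that layout, here's a quick capacity sketch. The ~558GiB formatted size per 600GB disk is an assumption on my part, so treat the totals as ballpark:

```python
# Rough usable-capacity sketch for the "basic RAID 6" goal of 12 drives per array
# on 24 x 600GB disks: one hot spare, then arrays of 12 and 11 drives.
# Assumes ~558 GiB formatted per 600GB disk (marketing GB vs GiB); adjust to taste.

DISK_GIB = 558  # assumed formatted capacity of a 600GB SAS disk

def raid6_usable(drives, disk_gib=DISK_GIB):
    """RAID 6 keeps two drives' worth of parity per array."""
    return (drives - 2) * disk_gib

layout = {"array_a (12 drives)": 12, "array_b (11 drives)": 11}
total = 0
for name, drives in layout.items():
    usable = raid6_usable(drives)
    total += usable
    print(f"{name}: {usable} GiB usable")

print(f"total usable: {total} GiB (~{total / 1024:.1f} TiB), plus 1 hot spare")
```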
You should read that Redbook and understand how it handles mdisks and pools. Once your RAID groups are set up, you want to create a pool that wide-stripes across all your spindles.
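As a rough illustration of why the wide-striped pool helps: every vdisk gets all the spindles behind it. The per-spindle IOPS figure and the RAID 6 write penalty below are my assumptions (and the box hides much of the penalty behind cache anyway), so this only shows the shape of the benefit:

```python
# Back-of-the-envelope case for wide-striping a pool across both arrays:
# more spindles under every vdisk means more aggregate random I/O.

SPINDLE_IOPS = 140       # assumed random IOPS for a 10K SAS drive
RAID6_WRITE_PENALTY = 6  # textbook worst case per small random write, before cache

def pool_random_iops(spindles, write_fraction=0.3):
    """Crude host-visible estimate for a 70/30 read/write random workload."""
    raw = spindles * SPINDLE_IOPS
    # writes cost RAID6_WRITE_PENALTY back-end ops each, reads cost one
    return raw / ((1 - write_fraction) + write_fraction * RAID6_WRITE_PENALTY)

print(f"single 12-drive array : ~{pool_random_iops(12):.0f} host IOPS")
print(f"pool over 12+11 drives: ~{pool_random_iops(23):.0f} host IOPS")
```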
Disclaimer - this is highly opinion-based, and I have flagged the question as such, but I will attempt to offer an answer as I have quite recently configured almost exactly the same setup.
I highly doubt that any kind of database will perform well on a RAID5 or RAID6 array. Most vendors actively discourage (and in some cases even prohibit) the use of unnested parity-based RAID levels due to long rebuild times, which lead to an increased risk of an unrecoverable read error (URE) during a rebuild.
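For what it's worth, here's the usual back-of-the-envelope version of that URE argument: the more data you have to re-read to rebuild, the more likely you are to hit an error before the rebuild finishes. The error rate is a spec-sheet assumption (enterprise SAS is commonly quoted at 1 in 10^16 bits), so treat the output as order-of-magnitude only:

```python
import math

# Chance of hitting an unrecoverable read error (URE) while re-reading every
# surviving disk during a rebuild. Risk grows with the amount of data read.

URE_PER_BIT = 1e-16    # assumed enterprise SAS spec (consumer SATA is often 1e-15)
DISK_BITS = 600e9 * 8  # one 600 GB disk, in bits

def p_ure_during_rebuild(surviving_disks, ure=URE_PER_BIT):
    """Probability of at least one URE: 1 - (1 - ure)**bits_read, float-safe."""
    bits_read = surviving_disks * DISK_BITS
    return -math.expm1(bits_read * math.log1p(-ure))

for n in (7, 11, 23):
    print(f"rebuild reading {n:2d} disks: {p_ure_during_rebuild(n):.2%} chance of a URE")
```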
I would personally split this into two distinct groups: a RAID10 for your high-IO load such as databases, and a RAID50 for the rest of your data. How many disks you dedicate to each array depends on how much data you need to store.
For example, with your 24-disk array you could set aside two disks as enclosure spares and create four 2-disk spans (8 disks total) for a logical RAID10 of around 2.4TB. That leaves you with 14 disks for your RAID50, with 7 disks per span, giving around 7.2TB of usable space. You can of course juggle the number of spans, but bear in mind that a RAID10 needs an even number of disks.
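If it helps, the arithmetic above checks out like this (using the marketing 600GB per disk, so formatted capacity will come out slightly lower):

```python
# Sanity check of the 24-disk split: 2 spares + RAID10 (4 mirrored pairs)
# + RAID50 (two 7-disk RAID5 spans striped together).

DISK_TB = 0.6  # 600GB disks, marketing figure

def raid10_usable(mirrored_pairs):
    return mirrored_pairs * DISK_TB                 # half of every pair is a copy

def raid50_usable(spans, disks_per_span):
    return spans * (disks_per_span - 1) * DISK_TB   # one parity disk per RAID5 span

spares = 2
raid10_disks = 8    # four 2-disk mirrors
raid50_disks = 14   # two 7-disk spans

print(f"RAID10 : {raid10_usable(4):.1f} TB from {raid10_disks} disks")
print(f"RAID50 : {raid50_usable(2, 7):.1f} TB from {raid50_disks} disks")
print(f"spares : {spares} disks, total disks = {spares + raid10_disks + raid50_disks}")
```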
As for datastores, it doesn't really make a huge amount of difference if you're not using fancy features like Storage vMotion and DRS to shuffle resources around.
Also, to clarify your last paragraph: more, smaller disks are usually preferable to fewer, larger disks because of the time it takes to rebuild a failed disk and the load placed on the other disks during the rebuild.
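To put a rough number on that: rebuild time scales more or less linearly with disk capacity for a given rebuild rate. The 50 MB/s sustained rate below is purely an assumption (it varies a lot with how busy the array is), but it shows the shape of the argument:

```python
# Why smaller disks rebuild faster: rebuild time is roughly
# capacity / sustained rebuild rate, and the whole disk must be rewritten.

REBUILD_MB_S = 50  # assumed effective rebuild rate under production load

def rebuild_hours(disk_gb, rate_mb_s=REBUILD_MB_S):
    return disk_gb * 1000 / rate_mb_s / 3600

for size_gb in (600, 1200, 4000):
    print(f"{size_gb:>5} GB disk: ~{rebuild_hours(size_gb):.1f} h to rebuild")
```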