Solution 1:

30 VMs served from just 2 spindles (disks) will probably suffer an IO bottleneck, even if those VMs aren't particularly IO intensive (random or sequential). You're looking at up to 30 concurrent read streams hitting widely separated areas of the disks, so a huge amount of time gets wasted seeking between them.
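
For a rough feel for the numbers, here's a back-of-envelope sketch (the per-spindle IOPS figure is an assumption, typical for 7.2k RPM SATA; swap in your own hardware's numbers):

```python
# Rough random-IOPS budget for the current setup.
# All figures are assumptions for illustration, not measurements.
spindles = 2
iops_per_spindle = 80          # assumed: what a 7.2k SATA disk sustains on random IO
vm_count = 30

total_iops = spindles * iops_per_spindle
iops_per_vm = total_iops / vm_count

print(f"Array budget: ~{total_iops} random IOPS")
print(f"Per-VM share: ~{iops_per_vm:.1f} IOPS")
# Array budget: ~160 random IOPS
# Per-VM share: ~5.3 IOPS -- even a mostly-idle OS can exceed that
```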

I'd recommend setting up a second drive array if the option is easily available to you (spare drive slots or a spare external enclosure), and migrating your VMs across to it. 4-6 disks minimum. Another improvement would be a larger read/write cache on the RAID controller, if you're currently running only a 128MB or 256MB cache module.
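
If it helps with sizing that new array, here's a rough sketch (the workload mix, RAID write penalty, and per-spindle figure are all assumptions for illustration - measure your actual demand with esxtop first):

```python
import math

# Sketch: estimate spindle count for a target workload.
# Every number below is an assumption; plug in your own measurements.
target_iops = 600              # assumed front-end IOPS demand from the 30 VMs
read_fraction = 0.7            # assumed 70% reads / 30% writes
write_penalty = 2              # RAID 10 writes hit two disks (RAID 5 would be 4)
iops_per_spindle = 150         # assumed: 10k RPM SAS class disk

backend_iops = (target_iops * read_fraction
                + target_iops * (1 - read_fraction) * write_penalty)
spindles_needed = math.ceil(backend_iops / iops_per_spindle)

print(f"Back-end IOPS: ~{backend_iops:.0f}")
print(f"Spindles needed: {spindles_needed}")   # ~6 with these assumptions
```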

Another place to check is the vCPU allocations, as Zypher mentioned - assigning too many vCPUs to each VM is (counter-intuitively) likely to slow all the VMs down. Under strict co-scheduling, a VM has to wait for a free core for every single one of its vCPUs before it can get any CPU time, so a 4-vCPU VM may get fewer cycles than a 2-vCPU VM.
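
To see why, here's a toy simulation of strict co-scheduling (older ESX behaved roughly like this; modern ESXi relaxes it, but wide VMs still rack up more ready time - the host size and VM mix here are made up):

```python
import random

# Toy model: a VM runs in a tick only if ALL of its vCPUs fit on free cores.
# HOST_CORES and the VM mix are assumptions for illustration.
HOST_CORES = 4
VMS = {"vm-2vcpu-a": 2, "vm-2vcpu-b": 2, "vm-2vcpu-c": 2, "big-vm-4vcpu": 4}
TICKS = 100_000

scheduled = {name: 0 for name in VMS}
for _ in range(TICKS):
    free = HOST_CORES
    order = list(VMS)
    random.shuffle(order)          # scheduler considers VMs in random order
    for name in order:
        if VMS[name] <= free:      # only runs if every vCPU gets a core
            free -= VMS[name]
            scheduled[name] += 1

for name, runs in scheduled.items():
    print(f"{name}: ran {100 * runs / TICKS:.0f}% of ticks")
# Typical output: the 4-vCPU VM runs ~25% of ticks vs ~50% for the
# 2-vCPU VMs -- more vCPUs means more waiting for enough free cores.
```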

Edit: thinking about it a bit more, there are also locking problems you might come across by having so many VMs on a single LUN. VMFS takes SCSI reservations on the whole LUN for metadata operations (power-on, snapshot creation, suspends, etc.), and with 30 VMs those start to stack up quite quickly, so the slow boot-ups may well be caused by this. You can get around it by setting up separate datastores within the same amount of drive space (resize the current partition to half, then create a new partition in the blank space, and spread the VMs evenly between the two). About 15 VMs per datastore is a good maximum.
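
As a sketch of the even spread (VM and datastore names are hypothetical; the actual moves would be Storage vMotions or cold migrations):

```python
# Sketch: round-robin 30 VMs across two datastores.
# Names are hypothetical placeholders for your real inventory.
vms = [f"vm{n:02d}" for n in range(1, 31)]
datastores = ["datastore1", "datastore2"]

placement = {ds: [] for ds in datastores}
for i, vm in enumerate(vms):
    placement[datastores[i % len(datastores)]].append(vm)

for ds, members in placement.items():
    print(f"{ds}: {len(members)} VMs")   # 15 per datastore
```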