QLogic HBA tuning recommendations for random IOPS

Solution 1:

I've worked on a similar setup, but your question leaves out some important information. Here is a list of things I would look at in order to squeeze the most performance out of your setup.

There are several layers in the block IO path that you'll want to look at separately. I like to start at the bottom and work my way up the stack.

The very basic layers from the OS to the SAN are as follows:

BIO - Block IO unit: the request submitted from the application. Since you're talking about a DB server, the size of this request will probably be some multiple of the page size the DB uses. (Collect some iostat data to see what the average request size is per device; see the example below.)
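
For example, extended iostat output reports the average request size per device (the exact column name depends on your sysstat version; on older releases it is avgrq-sz, in 512-byte sectors):

# sample extended per-device stats every 5 seconds and watch the average request size column
iostat -x 5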

Device Mapper / Multipath (/dev/dm-*) - BIO submitted to the virtual device created by multipathd, if that's what you're using -> the IO scheduler at the virtual device layer makes decisions based on read or write BIOs and either merges the request into an existing queue or adds the request to a new queue (more logic happens here, but it's beyond this scope) -> since the device is managed by multipathd, routing decisions on how the BIOs are distributed to the underlying devices live in /etc/multipath.conf -> there are tunable parameters in that config file that change the way BIO units are distributed amongst the paths (see the sketch below).
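
As a rough sketch of where those knobs live (the exact parameter names and defaults vary by multipath-tools version and array vendor, so treat the values as illustrative rather than recommendations):

# /etc/multipath.conf - illustrative fragment
defaults {
    path_grouping_policy  multibus          # spread IO across all usable paths
    path_selector         "round-robin 0"   # how the next path is chosen
    rr_min_io             100               # IOs sent down a path before rotating to the next
}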

Underlying physical paths that make up the virtual device (/dev/sd*) -> once BIOs are delivered to these underlying devices, more decisions are made here depending on the per-device queue options (see the sysfs example below) -> BIOs are passed on to the HBA.
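
Those per-path queue settings are exposed through sysfs; for example (sdb here is a placeholder for one of your actual path devices):

# inspect the block-layer and SCSI queue settings for one underlying path
cat /sys/block/sdb/queue/scheduler       # active IO scheduler
cat /sys/block/sdb/queue/nr_requests     # block-layer request queue size
cat /sys/block/sdb/device/queue_depth    # SCSI device queue depth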

HBA - the HBA (QLogic) has an execution throttle that says the card is able to have x number of BIOs in flight (per LUN) before rejecting new requests.
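
Where that limit is exposed depends on the driver and firmware; on many qla2xxx builds you can at least check the module's per-LUN queue depth parameter and the effective queue depth on each device (the paths below are assumptions, adjust for your driver version):

# qla2xxx per-LUN queue depth module parameter, if your driver exposes it
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
# effective queue depth currently set on each SCSI device
grep . /sys/bus/scsi/devices/*/queue_depth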

SAN -> once BIOs are handed off to the SAN, you lose control of its queuing and decision making.

Since you asked specifically about your HBA, I would look at your HBA execution throttle and check to see what it's set to. You can see if you ever hit the max by keeping an eye on the busy column:

cat /proc/scsi/sg/device_hdr /proc/scsi/sg/devices
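
To catch transient spikes it can help to sample that repeatedly while the workload is running, e.g.:

# re-read the sg device table every second during a test run
watch -n 1 'cat /proc/scsi/sg/device_hdr /proc/scsi/sg/devices'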

Next I would start by profiling your system workload by collecting iostat and vmstat data. Then I would try playing around with the multipath.conf options, queue sysfs options, file system options, and IO scheduler options to see if changes at each of those layers result in better block IO performance. Remember to make only one change at a time and run at least 3 or so tests per change while collecting the data.
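
A minimal sketch of that loop, assuming sdb is one of your path devices and that your kernel offers the noop scheduler (both are assumptions on my part):

# capture a baseline while the benchmark runs
iostat -x 5 > iostat_baseline.log &
vmstat 5 > vmstat_baseline.log &
# ...run the benchmark three times, then stop the collectors...
# change exactly one knob, e.g. the IO scheduler on one path
echo noop > /sys/block/sdb/queue/scheduler
# ...repeat the same benchmark runs and compare the collected data...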