How to [politely?] tell software vendor they don't know what they're talking about

Not a technical question, but a valid one nonetheless. Scenario:

HP ProLiant DL380 Gen 8 with 2 x 8-core Xeon E5-2667 CPUs and 256GB RAM running ESXi 5.5. Eight VMs for a given vendor's system. Four VMs for test, four VMs for production. The four servers in each environment perform different functions, e.g.: web server, main app server, OLAP DB server and SQL DB server.

CPU shares configured to stop the test environment from impacting production. All storage on SAN.

We've had some queries regarding performance, and the vendor insists that we need to give the production system more memory and vCPUs. However, we can clearly see from vCenter that the existing allocations aren't being touched, e.g.: a monthly view of CPU utilization on the main application server hovers around 8%, with the odd spike up to 30%. The spikes tend to coincide with the backup software kicking in.

Similar story on RAM - the highest utilization figure across the servers is ~35%.
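
If the vendor wants raw numbers rather than screenshots, the same figures can be exported straight from vCenter. Below is a minimal sketch using pyVmomi (the vSphere Python SDK); the vCenter hostname, credentials and VM name are placeholders, and it assumes daily rollups are retained for the period queried.

    # Sketch: export 30 days of CPU/memory usage for one VM from vCenter,
    # so the numbers sent to the vendor come from vCenter's own rollups.
    # Hostname, credentials and VM name are placeholders.
    from datetime import datetime, timedelta
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab shortcut; use real certs in production
    si = SmartConnect(host="vcenter.example.local",
                      user="readonly@vsphere.local",
                      pwd="********", sslContext=ctx)
    content = si.RetrieveContent()
    perf = content.perfManager

    # Look the VM up by name (placeholder name)
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "PROD-APP01")
    view.DestroyView()

    # Map "group.name.rollup" counter names to counter IDs
    counters = {c.groupInfo.key + "." + c.nameInfo.key + "." + c.rollupType: c.key
                for c in perf.perfCounter}
    metric_ids = [vim.PerformanceManager.MetricId(counterId=counters[name], instance="")
                  for name in ("cpu.usage.average", "mem.usage.average")]

    spec = vim.PerformanceManager.QuerySpec(
        entity=vm,
        metricId=metric_ids,
        startTime=datetime.now() - timedelta(days=30),
        intervalId=86400)                     # daily rollup interval
    for result in perf.QueryPerf(querySpec=[spec]):
        for series in result.value:
            samples = [v / 100.0 for v in series.value]   # values are 1/100ths of a percent
            print(series.id.counterId,
                  "avg %.1f%%  peak %.1f%%" % (sum(samples) / len(samples), max(samples)))

    Disconnect(si)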

So, we've been doing some digging, using Process Monitor (Microsoft Sysinternals) and Wireshark, and our recommendation to the vendor is that they do some TNS tuning in the first instance. However, this is beside the point.
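
For what it's worth, one way we can put a number on the "chattiness" that prompted the TNS tuning recommendation is to post-process a Wireshark capture of the app-to-database traffic. A rough sketch using pyshark (a Python wrapper around tshark) is below; the capture filename is a placeholder, and 1521 is simply the default Oracle listener port.

    # Sketch: measure how chatty the app-to-DB conversation is from a pcap.
    # Assumes pyshark (and therefore tshark) is installed; the filename is a
    # placeholder and 1521 is the default Oracle listener port.
    import pyshark

    cap = pyshark.FileCapture("app-to-db.pcap", display_filter="tcp.port == 1521")
    packets = 0
    payload_bytes = 0
    for pkt in cap:
        packets += 1
        payload_bytes += int(pkt.tcp.len)     # TCP segment length of each packet
    cap.close()

    print("%d packets, %d payload bytes (%.0f bytes/packet on average)"
          % (packets, payload_bytes, payload_bytes / max(packets, 1)))
    # A flood of tiny packets per user action points at round-trip/SDU tuning,
    # not at CPU or RAM starvation on the VM.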

My question is: how do we get them to acknowledge that the VMware statistics that we've sent them are evidence enough that more RAM/vCPU won't help?

--- UPDATE 12/07/2014 ---

Interesting week. Our IT management have said that we should make the change to the VM allocations, and we're now waiting for some downtime from the business users. Strangely, the business users are the ones saying that certain aspects of the app are running slowly (compared to what, I don't know), but they're going to "let us know" when we can take the system down (grumble, grumble!).

As an aside, the "slow" aspect of the system is apparently not the HTTP(S) element, i.e.: the "thin app" used by most of the users. It sounds like it's the "fat client" installs, used by the main finance bods, that are apparently "slow". This means that we're now considering the client and the client-server interaction in our investigations.

Since the initial purpose of the question was to decide whether to go down the "poke it" route or just make the change, and we're now making the change, I'll close it using longneck's answer.

Thank you all for your input; as usual, serverfault has been more than just a forum - it's kind of like a psychologist's couch as well :-)


Solution 1:

I suggest that you make the adjustments they have requested. Then benchmark the performance to show them that it made no difference. You could even go so far as to benchmark it with LESS memory and vCPU to make your point.
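
If you want something concrete to compare before and after the change, even a crude response-time benchmark will do. A quick sketch is below; the URL, sample count and certificate handling are placeholders, and it assumes the web tier is what you want to measure. Run the same script before and after the reallocation and send the vendor both sets of numbers.

    # Sketch: crude response-time benchmark of the app's web tier, to be run
    # before and after the resource change. URL and sample count are placeholders.
    import ssl
    import statistics
    import time
    import urllib.request

    URL = "https://prod-app01.example.local/login"
    ctx = ssl._create_unverified_context()    # internal self-signed cert; lab shortcut
    timings = []
    for _ in range(50):
        start = time.perf_counter()
        urllib.request.urlopen(URL, timeout=30, context=ctx).read()
        timings.append(time.perf_counter() - start)
        time.sleep(1)                         # be gentle with production

    timings.sort()
    print("median %.0f ms, p95 %.0f ms"
          % (statistics.median(timings) * 1000, timings[int(len(timings) * 0.95)] * 1000))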

Also, "We're paying you to support the software with actual solutions, not guesswork."

Solution 2:

Provided you are confident you are within the system specs they document, any claim they make about requiring more RAM or CPU is something they should be able to back up. As the experts in their own system, I hold people to account on this.

Ask them for specifics:

  • What information provided about the system indicates that more RAM is needed, and how did you interpret it?

  • What information provided about the system indicates that more CPU is needed, and how did you interpret it?

  • The data I have - at first glance - contradicts what you are telling me. Can you explain to me why I may be interpreting this incorrectly?

  • I am interpreting this [obvious series of data] to mean [obvious interpretation]. Can you confirm I am interpreting it correctly with regards to my problem?

Having dealt with support in the past I have asked the same questions. Sometimes I was right and they were not focusing their attention on my problem properly. Other times however, I was wrong and I was interpreting the data incorrectly, or failing to include other data which was important in my analysis.

In any case, both of these situations were a net benefit to me: either I learnt something new that I did not know before, or I got their support team to think harder about my problem and reach a decent root cause.

If the support team are unable to expand their argument logically to a point you can be satisfied with (you need to keep an open mind yourself and be reasonable enough to accept that your interpretation of the data may be wrong), it will become very apparent in their responses. Even in the worst-case scenario, you can use this as a basis for escalating the problem.

Solution 3:

The big thing is to be able to prove that you are using best practices for your system allocation, notably RAM and CPU reservations for your SQL server.
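
If you want to show that the reservations are in place (and make them repeatable), something like the following pyVmomi sketch sets a memory reservation on the SQL VM. The reservation size is a placeholder, and the VM object is assumed to have been looked up beforehand (e.g. via a container view, as in any standard pyVmomi example).

    # Sketch: pin a memory reservation on the SQL VM with pyVmomi. The size is
    # a placeholder; 'vm' is a vim.VirtualMachine already looked up via the API.
    from pyVmomi import vim

    def reserve_memory(vm, reservation_mb):
        alloc = vim.ResourceAllocationInfo(reservation=reservation_mb)
        spec = vim.vm.ConfigSpec(memoryAllocation=alloc)
        return vm.ReconfigVM_Task(spec=spec)   # wait on the returned task in real code

    # reserve_memory(sql_vm, 32 * 1024)        # e.g. reserve 32 GB for the SQL server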

All this being said, the easiest thing is to make the adjustments requested, at least temporarily. If nothing else, it tends to get vendors past their foot-dragging. I can't count the number of times I've needed to do something crazy like this to satisfy a tech on the other end of the line that it really is their software misbehaving.

Solution 4:

For this specific situation (where you have VMware and application developers or a third party who does not understand resource allocation), I use a week's worth of metrics obtained from vCenter Operations Manager (vCops - download a demo if needed) to pinpoint the real constraints, bottlenecks and sizing requirements of the application's VM(s).

Sometimes, I've been able to satisfy the more stubborn consumers by modifying VM reservations or changing priorities to handle contention scenarios; "If RAM|CPU are tight, YOUR VM will take precedence!". Bad-bad things have happened when I've allowed software vendors to dictate their requirements on my vSphere clusters without real analysis.
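
As a sketch of the "your VM takes precedence" approach, the following raises a production VM's CPU and memory shares to high with pyVmomi; again, the VM object lookup is assumed, and shares are only one of several ways to express priority under contention.

    # Sketch: raise a production VM's CPU and memory shares to "high" so it wins
    # under contention. 'vm' is a vim.VirtualMachine already looked up via the API.
    from pyVmomi import vim

    def prioritise_vm(vm):
        high = vim.SharesInfo(level=vim.SharesInfo.Level.high, shares=0)
        spec = vim.vm.ConfigSpec(
            cpuAllocation=vim.ResourceAllocationInfo(shares=high),
            memoryAllocation=vim.ResourceAllocationInfo(shares=high))
        return vm.ReconfigVM_Task(spec=spec)   # wait on the returned task in real code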

But in general, numbers and data should win out.


An example of something I used to justify VM sizing to the developer of a Tomcat application:

Dev: The VM needs MOAR cpu!

Me: Well, memory is your biggest constraint, and here's a heat map of your performance versus time... Wednesdays at 6pm are the most stressful periods, so we can spec around that peak period. Oh, and here's a sizing recommendation based on the past 6 weeks of production metrics...

[vCops screenshots: performance heat map and VM sizing recommendation based on production metrics]