Difference between acceptance test and functional test?

What is the real difference between acceptance tests and functional tests?

What are the highlights or aims of each? Everywhere I read, they seem ambiguously similar.


Solution 1:

In my world, we use the terms as follows:

functional testing: This is a verification activity; did we build a correctly working product? Does the software meet the business requirements?

For this type of testing we have test cases that cover all the possible scenarios we can think of, even if a scenario is unlikely to exist "in the real world". When doing this type of testing, we aim for maximum code coverage. We use any test environment we can grab at the time; it doesn't have to be "production" caliber, as long as it's usable.
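For illustration, a functional test along these lines might look like the sketch below; the `shop.pricing` module and the discount rules are invented for the example, and the point is simply that the cases include boundaries and unlikely inputs, not just the happy path.

```python
import pytest

# Hypothetical module and business rule, invented for this example:
# orders over 100 get 10% off, orders over 500 get 20% off.
from shop.pricing import calculate_discount


@pytest.mark.parametrize("order_total, expected_discount", [
    (0, 0),                    # empty order
    (99, 0),                   # just below the first threshold
    (100, 0),                  # exactly on the boundary ("over 100")
    (101, 10.1),               # just above the boundary
    (500, 50.0),               # top of the first tier
    (501, 100.2),              # second tier
    (10_000_000, 2_000_000),   # absurdly large order -- unlikely "in the real world"
])
def test_discount_rules(order_total, expected_discount):
    assert calculate_discount(order_total) == pytest.approx(expected_discount)


def test_negative_total_is_rejected():
    # Even a scenario that "can't happen" gets a defined, tested behaviour.
    with pytest.raises(ValueError):
        calculate_discount(-1)
```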

acceptance testing: This is a validation activity; did we build the right thing? Is this what the customer really needs?

This is usually done in cooperation with the customer, or by an internal customer proxy (product owner). For this type of testing we use test cases that cover the typical scenarios under which we expect the software to be used. This test must be conducted in a "production-like" environment, on hardware that is the same as, or close to, what a customer will use. This is when we test our "ilities":

  • Reliability, Availability: Validated via a stress test.

  • Scalability: Validated via a load test (a rough sketch of such a test follows this list).

  • Usability: Validated via an inspection and demonstration to the customer. Is the UI configured to their liking? Did we put the customer branding in all the right places? Do we have all the fields/screens they asked for?

  • Security (aka, Securability, just to fit in): Validated via demonstration. Sometimes a customer will hire an outside firm to do a security audit and/or intrusion testing.

  • Maintainability: Validated via demonstration of how we will deliver software updates/patches.

  • Configurability: Validated via demonstration of how the customer can modify the system to suit their needs.
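As promised above, here is a very stripped-down sketch of the kind of load test used to validate Scalability. The endpoint URL, the number of users, and the latency budget are all invented; a real acceptance-level load test would normally use a dedicated tool (JMeter, Locust, etc.) on production-like hardware.

```python
import concurrent.futures
import time
import urllib.request

# Invented production-like endpoint for this example.
BASE_URL = "https://staging.example.com/api/orders"


def one_request(_):
    # Time a single request; urllib raises on HTTP errors, the assert is a sanity check.
    start = time.perf_counter()
    with urllib.request.urlopen(BASE_URL, timeout=10) as resp:
        assert resp.status == 200
    return time.perf_counter() - start


def test_handles_200_concurrent_users():
    # Fire 200 simultaneous requests and check a crude latency budget.
    with concurrent.futures.ThreadPoolExecutor(max_workers=200) as pool:
        latencies = sorted(pool.map(one_request, range(200)))
    # Acceptance criterion agreed with the customer (made up here): 95% under 2 s.
    assert latencies[int(len(latencies) * 0.95)] < 2.0
```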

This is by no means standard, and I don't think there is a "standard" definition, as the conflicting answers here demonstrate. The most important thing for your organization is that you define these terms precisely, and stick to them.

Solution 2:

I like Patrick Cuff's answer. What I'd like to add is the distinction between a test level and a test type, which was an eye opener for me.

test levels

Test levels are easy to explain using the V-model (picture the classic V-model diagram). Each test level has a corresponding development level and a typical time characteristic: it's executed at a certain phase of the development life cycle. A small example contrasting two levels follows the list.

  1. component/unit testing => verifying detailed design
  2. component/unit integration testing => verifying global design
  3. system testing => verifying system requirements
  4. system integration testing => verifying system requirements
  5. acceptance testing => validating user requirements
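To make two of these levels concrete, here is a hypothetical pair of tests for the same feature; the `myapp.tax` module, the URL, and the VAT figures are invented for the example.

```python
import urllib.request

# Both the module and the endpoint are invented for this example.
from myapp.tax import vat_for


# Level 1, component/unit test: verifies the detailed design of one function.
def test_vat_for_german_standard_rate():
    assert vat_for(net_amount=100, country="DE") == 19


# Level 3, system test: verifies a system requirement end to end,
# through the deployed application instead of a single component.
def test_invoice_endpoint_includes_vat():
    with urllib.request.urlopen("https://test.example.com/invoices/42") as resp:
        body = resp.read().decode()
    assert '"vat": 19' in body
```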

test types

A test type is a characteristic: it focuses on a specific test objective. Test types emphasize particular quality aspects, also known as technical or non-functional aspects. A test type can be executed at any test level (a small illustration follows the list below). I like to use the quality characteristics mentioned in ISO/IEC 25010:2011 as test types.

  1. functional testing
  2. reliability testing
  3. performance testing
  4. operability testing
  5. security testing
  6. compatibility testing
  7. maintainability testing
  8. transferability testing
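As a small illustration of "a test type can be executed at any test level", here is the same performance test type written once at component level and once at system level; the module, endpoint, and time budgets are invented.

```python
import time
import urllib.request

# Invented module for this example.
from myapp.search import rank_results


# Performance testing at the component level: time one function in isolation.
def test_ranking_is_fast_for_10k_items():
    items = [{"id": i, "score": i % 97} for i in range(10_000)]
    start = time.perf_counter()
    rank_results(items)
    assert time.perf_counter() - start < 0.1   # invented budget: 100 ms


# The same test type at system level: time the whole search endpoint.
def test_search_endpoint_is_fast():
    start = time.perf_counter()
    urllib.request.urlopen("https://test.example.com/search?q=widgets")
    assert time.perf_counter() - start < 1.0   # invented budget: 1 s
```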

To make it complete: there's also something called regression testing, an extra classification next to test level and test type. A regression test is a test you want to repeat because it touches something critical in your product; it's in fact a subset of the tests you defined for each test level. If there's a small bug fix in your product, you don't always have time to repeat all the tests. Regression testing is the answer to that.
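One common way to carve out such a subset is to tag the critical tests, shown here with pytest markers purely as an example of the idea; the test names and the "regression" marker are made up.

```python
import pytest

# The "regression" marker is invented for this example; with pytest you would
# register it in pytest.ini, e.g.  markers = regression: critical tests to repeat.


@pytest.mark.regression
def test_login_with_valid_credentials():
    ...


@pytest.mark.regression
def test_checkout_total_matches_cart():
    ...


def test_rarely_changing_report_layout():
    # Not in the regression subset; only runs in a full test pass.
    ...
```

After a small bug fix you would then run only the critical subset with `pytest -m regression` instead of the whole suite.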