iOS Tests/Specs TDD/BDD and Integration & Acceptance Testing

Solution 1:

tl;dr

At Pivotal we wrote Cedar because we use and love Rspec on our Ruby projects. Cedar isn't meant to replace or compete with OCUnit; it's meant to bring the possibility of BDD-style testing to Objective C, just as Rspec pioneered BDD-style testing in Ruby, but hasn't eliminated Test::Unit. Choosing one or the other is largely a matter of style preferences.

In some cases we designed Cedar to overcome some shortcomings in the way OCUnit works for us. Specifically, we wanted to be able to use the debugger in tests, to run tests from the command line and in CI builds, and get useful text output of test results. These things may be more or less useful to you.

Long answer

Deciding between two testing frameworks like Cedar and OCUnit (for example) comes down to two things: preferred style, and ease of use. I'll start with the style, because that's simply a matter of opinion and preference; ease of use tends to be a set of tradeoffs.

Style considerations transcend what technology or language you use. xUnit-style unit testing has been around for far longer than BDD-style testing, but the latter has rapidly gained in popularity, largely due to Rspec.

The primary advantage of xUnit-style testing is its simplicity, and wide adoption (amongst developers who write unit tests); nearly any language you could consider writing code in has an xUnit-style framework available.

BDD-style frameworks tend to have two main differences when compared to xUnit-style: how you structure the test (or specs), and the syntax for writing your assertions. For me, the structural difference is the main differentiator. xUnit tests are one-dimensional, with one setUp method for all tests in a given test class. The classes that we test, however, aren't one-dimensional; we often need to test actions in several different, potentially conflicting, contexts. For example, consider a simple ShoppingCart class, with an addItem: method (for the purposes of this answer I'll use Objective C syntax). The behavior of this method may differ when the cart is empty compared to when the cart contains other items; it may differ if the user has entered a discount code; it may differ if the specified item can't be shipped by the selected shipping method; etc. As these possible conditions intersect with one another you end up with a geometrically increasing number of possible contexts; in xUnit-style testing this often leads to a lot of methods with names like testAddItemWhenCartIsEmptyAndNoDiscountCodeAndShippingMethodApplies. The structure of BDD-style frameworks allows you to organize these conditions individually, which I find makes it easier to make sure I cover all cases, as well as easier to find, change, or add individual conditions. As an example, using Cedar syntax, the method above would look like this:

describe(@"ShoppingCart", ^{
    describe(@"addItem:", ^{
        describe(@"when the cart is empty", ^{
            describe(@"with no discount code", ^{
                describe(@"when the shipping method applies to the item", ^{
                    it(@"should add the item to the cart", ^{
                        ...
                    });

                    it(@"should add the full price of the item to the overall price", ^{
                        ...
                    });
                });

                describe(@"when the shipping method does not apply to the item", ^{
                    ...
                });
            });

            describe(@"with a discount code", ^{
                ...
            });
        });

        describe(@"when the cart contains other items, ^{
            ...
        });
    });
});
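
For contrast, here is a rough sketch of how the same coverage tends to look in xUnit style, using OCUnit's SenTestingKit (the ShoppingCart and Item classes are the hypothetical ones from the prose above):

#import <SenTestingKit/SenTestingKit.h>
#import "ShoppingCart.h"   // hypothetical class from the example above
#import "Item.h"           // hypothetical class

@interface ShoppingCartTest : SenTestCase {
    ShoppingCart *cart;
    Item *item;
}
@end

@implementation ShoppingCartTest

// One setUp shared by every test method in the class.
- (void)setUp
{
    [super setUp];
    cart = [[ShoppingCart alloc] init];
    item = [[Item alloc] init];
}

// The context has to be packed into the method name.
- (void)testAddItemWhenCartIsEmptyAndNoDiscountCodeAndShippingMethodApplies
{
    [cart addItem:item];
    STAssertEqualObjects([cart.items lastObject], item, @"expected the item to be added to the cart");
}

- (void)testAddItemWhenCartIsEmptyAndNoDiscountCodeAndShippingMethodDoesNotApply
{
    // ...one more method per combination of conditions
}

@end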

In some cases you'll find contexts that contain the same sets of assertions, which you can DRY up using shared example contexts.
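
As a rough sketch of what that can look like (this assumes Cedar's RSpec-style sharedExamplesFor / itShouldBehaveLike helpers, which may differ slightly between Cedar versions; the group name is illustrative):

sharedExamplesFor(@"an item added at full price", ^(NSDictionary *sharedContext) {
    it(@"should add the full price of the item to the overall price", ^{
        ...
    });
});

describe(@"with no discount code", ^{
    itShouldBehaveLike(@"an item added at full price");
});

describe(@"with an expired discount code", ^{
    itShouldBehaveLike(@"an item added at full price");
});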

The second main difference between BDD-style frameworks and xUnit-style frameworks, assertion (or "matcher") syntax, simply makes the style of the specs somewhat nicer; some people really like it, others don't.
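
Roughly, the same assertion in each style looks like this (the Cedar matcher shown here is the expect/to/equal form, which may vary a bit between versions):

// OCUnit / SenTestingKit assertion macro:
STAssertEqualObjects([cart.items lastObject], item, @"expected the item to be in the cart");

// Cedar matcher syntax:
expect([cart.items lastObject]).to(equal(item));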

That leads to the question of ease of use. In this case, each framework has its pros and cons:

  • OCUnit has been around much longer than Cedar, and is integrated directly into Xcode. This means it's simple to make a new test target, and, most of the time, getting tests up and running "just works." On the other hand, we found that in some cases, such as running on an iOS device, getting OCUnit tests to work was nigh impossible. Setting up Cedar specs takes somewhat more work than OCUnit tests, since you have to get the library and link against it yourself (never a trivial task in Xcode). We're working on making setup easier, and any suggestions are more than welcome.

  • OCUnit runs tests as part of the build. This means you don't need to run an executable to make your tests run; if any tests fail, your build fails. This makes the process of running tests one step simpler, and test output goes directly into your build output window which makes it easy to see. We chose to have Cedar specs build into an executable which you run separately for a few reasons:

    • We wanted to be able to use the debugger. You run Cedar specs just like you would run any other executable, so you can use the debugger in the same way.
    • We wanted easy console logging in tests. You can use NSLog() in OCUnit tests, but the output goes into the build window where you have to unfold the build step in order to read it.
    • We wanted easy to read test reporting, both on the command line and in Xcode. OCUnit results appear nicely in the build window in Xcode, but building from the command line (or as part of a CI process) results in test output intermingled with lots and lots of other build output. With separate build and run phases Cedar separates the output so the test output is easy to find. The default Cedar test runner copies the standard style of printing "." for each passing spec, "F" for failing specs, etc. Cedar also has the ability to use custom reporter objects, so you can have it output results any way you like, with a little effort.

  • OCUnit is the official unit testing framework for Objective C, and is supported by Apple. Apple has basically limitless resources, so if they want something done it will get done. And, after all, this is Apple's sandbox we're playing in. The flip side of that coin, however, is that Apple receives on the order of a bajillion support requests and bug reports each day. They're remarkably good about handling them all, but they may not be able to handle issues you report immediately, or at all. Cedar is much newer and less baked than OCUnit, but if you have questions, problems, or suggestions, send a message to the Cedar mailing list and we'll do what we can to help you out. Also, feel free to fork the code on GitHub (github.com/pivotal/cedar) and add whatever you think is missing. We make our testing frameworks open source for a reason.

  • Running OCUnit tests on iOS devices can be difficult. Honestly, I haven't tried this for quite some time, so it may have gotten easier, but the last time I tried I simply couldn't get OCUnit tests for any UIKit functionality to work. When we wrote Cedar we made sure that we could test UIKit-dependent code both on the simulator and on devices.

Finally, we wrote Cedar for unit testing, which means it's not really comparable with projects like UISpec. It's been quite a while since I tried using UISpec, but I understood it to be focused primarily on programmatically driving the UI on an iOS device. We specifically decided not to try to have Cedar support these types of specs, since Apple was (at the time) about to announce UIAutomation.

Solution 2:

I'm going to have to toss Frank into the acceptance testing mix. It is a fairly new addition, but it has worked excellently for me so far. Also, it is actually being actively worked on, unlike iCuke and the others.

Solution 3:

For test-driven development, I like to use GHUnit; it's a breeze to set up, and it works great for debugging too.
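
For reference, a minimal GHUnit test case looks much like an OCUnit one. This is a sketch assuming GHUnit's GHTestCase base class and GHAssert... macros, reusing the hypothetical ShoppingCart example from Solution 1:

#import <GHUnitIOS/GHUnit.h>
#import "ShoppingCart.h"   // hypothetical class from Solution 1
#import "Item.h"           // hypothetical class

// GHUnit discovers GHTestCase subclasses at runtime and runs them
// inside its own test-runner app, on the simulator or on a device.
@interface ShoppingCartGHTest : GHTestCase
@end

@implementation ShoppingCartGHTest

- (void)testAddItemToEmptyCart
{
    ShoppingCart *cart = [[ShoppingCart alloc] init];
    Item *item = [[Item alloc] init];
    [cart addItem:item];
    GHAssertEqualObjects([cart.items lastObject], item, @"expected the item to be added to the cart");
}

@end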

Solution 4:

Great List!

I found another interesting solution for UI testing iOS applications.

Zucchini Framework

It is based on UIAutomation. The framework lets you write screen-centric scenarios in a Cucumber-like style. The scenarios can be executed in the Simulator and on a device from the console (it is CI friendly).

The assertions are screenshot-based. That sounds inflexible, but it gets you a nice HTML report with highlighted screen comparisons, and you can provide masks that define the regions you want pixel-exact assertions for.

Each screen has to be described in CoffeeScript, and the tool itself is written in Ruby. It is kind of a polyglot nightmare, but the tool provides a nice abstraction over UIAutomation, and once the screens are described it is manageable even for a QA person.