Git branching strategy integrated with testing/QA process

Our development team has been using the GitFlow branching strategy and it has been great!

Recently we recruited a couple of testers to improve our software quality. The idea is that every feature should be tested/QA'd by a tester.

In the past, developers worked on features on separate feature branches and merged them back to the develop branch when done. The developer would test his own work on that feature branch. Now that we have testers, we have started asking this question:

On which branch should the tester test new features?

Obviously, there are two options:

  • on the individual feature branch
  • on the develop branch

Testing On Develop Branch

Initially, we believed this was the way to go, because:

  • The feature is tested together with all the other features that have been merged to the develop branch since its development started.
  • Any conflicts can be detected sooner rather than later.
  • It makes the tester's job easy: he only ever deals with one branch (develop). He doesn't need to ask the developer which branch is for which feature (feature branches are personal branches managed exclusively and freely by the relevant developers).

The biggest problem with this is:

  • The develop branch is polluted with bugs.

    When the tester finds bugs or conflicts, he reports them back to the developer, who fixes the issue directly on the develop branch (the feature branch is abandoned once merged), and more fixes may be required afterward. Multiple subsequent commits or merges (if a branch is recreated off develop again to fix the bugs) make rolling back the feature from the develop branch very difficult, if possible at all; see the sketch below. With multiple features merging to and being fixed on develop at different times, this becomes a big problem when we want to create a release with only some of the features on the develop branch.
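
    To see why, consider what backing a feature out actually takes: you have to revert the merge commit itself and then hunt down every follow-up fix individually (the SHAs below are hypothetical, for illustration only):

        git checkout develop
        git revert -m 1 3f2a9c1   # revert the feature's merge commit
        git revert 8b41d07        # revert a follow-up bug fix
        git revert c90e512        # ...and any other fixes, if you can still find them all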

Testing On Feature Branch

So we thought again and decided we should test features on the feature branches. Before we test, we merge the changes from the develop branch into the feature branch (catching up with the develop branch; see the sketch after the list below). This is good because:

  • You still test the feature together with the other features already merged to the mainline;
  • Further development (e.g. bug fixes, conflict resolution) will not pollute the develop branch;
  • You can easily decide not to release the feature until it is fully tested and approved.
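
For concreteness, the catch-up step looks roughly like this (feature/x is a hypothetical branch name):

    git checkout feature/x
    git fetch origin
    git merge origin/develop   # catch the feature branch up with develop
    # resolve any conflicts, verify the build, then hand the branch to the tester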

However, there are some drawbacks:

  • The tester has to do the merging of the code, and if there is any conflict (very likely), he has to ask the developer for help. Our testers specialize in testing and do not code.
  • A feature could be tested without another new feature being present. E.g. features A and B are under test at the same time; the two features are unaware of each other because neither has been merged to the develop branch. This means you will have to test against the develop branch again once both features have been merged anyway, and you have to remember to do that retest in the future.
  • If features A and B are both tested and approved, but a conflict is identified when they are merged, neither developer believes it is his fault/job to fix because his own feature branch passed the test. There is extra communication overhead, and whoever ends up resolving the conflict is sometimes frustrated.

That is our story. With limited resources, I would like to avoid testing everything everywhere. We are still looking for a better way to cope with this, and I would love to hear how other teams handle this kind of situation.


Solution 1:

The way we do it is the following:

We test on the feature branches after we merge the latest develop branch code into them. The main reason is that we do not want to "pollute" the develop branch code before a feature is accepted. If a feature were not accepted after testing, but we wanted to release other features already merged to develop, that would be hell. Develop is the branch from which releases are made, so it should always be in a releasable state. The long version is that we test in many phases. In more detail:

  1. Developer creates a feature branch for every new feature.
  2. The feature branch is (automatically) deployed on our TEST environment with every commit for the developer to test.
  3. When the developer is done with development and the feature is ready to be tested, he merges the develop branch into the feature branch and deploys the feature branch, which now contains all the latest develop changes, on TEST (see the command sketch after this list).
  4. The tester tests on TEST. When he is done, he "accepts" the story and merges the feature branch into develop. Since the developer had previously merged the develop branch into the feature branch, we normally don't expect too many conflicts; if there are, the developer can help. This is a tricky step; I think the best way to ease it is to keep features as small and specific as possible. Different features have to be merged eventually, one way or another. Of course, the size of the team plays a role in this step's complexity.
  5. The develop branch is also (automatically) deployed on TEST. We have a policy that even though feature branch builds may fail, the develop branch build should never fail.
  6. Once we have reached a feature freeze, we create a release from develop. This is automatically deployed on STAGING. Extensive end-to-end tests take place there before the production deployment (OK, maybe I exaggerate a bit; they are not very extensive, but I think they should be). Ideally, beta testers/colleagues, i.e. real users, should test there.
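
In git terms, steps 3 and 4 amount to something like the following (feature/x is a hypothetical branch name; the --no-ff flag follows the usual GitFlow convention of keeping an explicit merge commit):

    # step 3: developer catches the feature branch up with develop
    git fetch origin
    git checkout feature/x
    git merge origin/develop
    git push origin feature/x    # CI deploys feature/x to TEST

    # step 4: after the tester accepts, the feature goes into develop
    git checkout develop
    git merge --no-ff feature/x
    git push origin develop      # CI deploys develop to TEST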

What do you think of this approach?

Solution 2:

Before test, we merge the changes from the develop branch to the feature branch

No. Don't, especially if 'we' means the QA tester. Merging involves resolving potential conflicts, which is best done by the developers (they know their code), not by the QA tester (who should get on with testing as quickly as possible).

Make the developer rebase his/her feature branch on top of develop, and push that feature branch (which the developer has validated as compiling and working on top of the most recent state of develop).
That allows for (see the sketch after this list):

  • a very simple integration of the feature branch into develop (a trivial fast-forward merge);
  • or, as recommended by Aspasia below in the comments, a pull request (GitHub) or merge request (GitLab): the maintainer does a merge between the feature PR/MR branch and develop, but only if no conflicts are detected by GitHub/GitLab.
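
A minimal sketch of the first option, assuming a feature branch named feature/x (the force push is needed because a rebase rewrites the branch history):

    # developer: replay the feature on top of the latest develop
    git fetch origin
    git checkout feature/x
    git rebase origin/develop
    git push --force-with-lease origin feature/x

    # maintainer: integration is now a trivial fast-forward
    git fetch origin
    git checkout develop
    git merge --ff-only origin/feature/x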

Each time the tester detects a bug, he/she reports it to the developer and deletes the current feature branch.
The developer can then (as sketched below):

  • fix the bug
  • rebase on top of a recently fetched develop branch (again, to be sure that his/her code works in integration with other validated features)
  • push the feature branch.
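
That loop might look like this (feature/x hypothetical again):

    git checkout feature/x
    # ...commit the bug fix...
    git fetch origin
    git rebase origin/develop                      # re-validate against the latest develop
    git push --force-with-lease origin feature/x   # hand the branch back to the tester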

General idea: make sure the merge/integration part is done by the developer, leaving the testing to QA.

Solution 3:

The best approach is continuous integration, where the general idea is to merge the feature branches into the develop branch as frequently as possible. This reduces the overhead and pain of merging.

Rely on automated tests as much as possible, and have Jenkins automatically kick off builds that run the unit tests. Have the developers do all the work of merging their changes into the main branch, and have them provide unit tests for all their code.
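
As a sketch, an integration job along these lines could run on every push to a feature branch (the branch names and test command here are assumptions, not part of the answer):

    # CI: verify the feature still merges and passes tests against develop
    git fetch origin
    git checkout -B ci-integration origin/develop
    git merge --no-commit --no-ff origin/feature/x   # fails fast on conflicts
    make test                                        # assumed test entry point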

The testers/QA can participate in code reviews, sign off on unit tests, and write automated integration tests to be added to the regression suite as features are completed.


Solution 4:

We use what we call "gold", "silver", and "bronze". This could be called prod, staging, and qa.

I've come to call this the melting pot model. It works well for us because we have a huge need for QA on the business side of things, since the requirements can be harder to understand than the technical details.

When a bug fix or feature is ready for testing, it goes into "bronze". This triggers a Jenkins build that pushes the code to a pre-built environment. Our testers (not super techies, by the way) just hit a link and don't care about the source control. This build also runs the tests, etc. We've gone back and forth on whether this build should still push the code to the testing/QA environment if the tests (unit, integration, Selenium) fail. If you test on a separate system (we call it lead), you can prevent failing changes from being pushed to your QA environment.

The initial fear was that we'd have lots of conflicts between these features. It does happen that feature X makes it seem like feature Y is breaking, but it is infrequent enough, and it actually helps: you get a wide swath of testing outside what seems to be the context of the change. Many times, by luck, you will find out how your change affects parallel development.

Once a feature passes QA, we move it into "silver" (staging). A build is run and the tests are run again. Weekly, we push these changes to our "gold" (prod) tree and then deploy them to our production system.
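
In git terms, the promotion might look roughly like this (the answer doesn't spell out the exact mechanics, so treat the branch names and merges as assumptions):

    # feature passes QA: promote it from bronze to silver
    git checkout silver
    git merge feature/x     # CI builds and re-runs the tests

    # weekly release: promote silver to gold and deploy
    git checkout gold
    git merge silver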

Developers start their changes from the gold tree. Technically you could start from staging, since those changes will go up soon.

Emergency fixes are plopped directly into the gold tree. If a change is simple and hard to QA, it can go directly into silver, and it will find its way to the testing tree from there.

After our release, we push the changes in gold (prod) to bronze (testing), just to keep everything in sync.

You may want to rebase before pushing into your staging tree. We have found that purging the testing tree from time to time keeps it clean; features sometimes get abandoned in the testing tree, especially if a developer leaves.
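
Purging can be as blunt as resetting the testing branch back to prod (a hypothetical sketch; anything on bronze that was never promoted is discarded, so coordinate with the team first):

    git checkout bronze
    git reset --hard gold            # drop everything not yet promoted
    git push --force origin bronze   # rewrite the shared testing branch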

For large multi-developer features, we create a separate shared repo, but merge it into the testing tree the same way when we are all ready. Things do tend to bounce back from QA, so it is important to keep your changesets isolated so you can add to them and then merge/squash into your staging tree.

"Baking" is also a nice side effect. If you have some fundamental change you want to let sit for a while there is a nice place for it.

Also keep in mind that we don't maintain past releases; the current version is always the only version. Even so, you could probably have a master baking tree where your testers or community can bang on things and see how the various contributors' work interacts.