Help me understand how QA works in Scrum

My opinion is that you have an estimation problem. It seems that the time to test each feature is missing from the estimates, and only the building part is being considered when planning the sprint.

I'm not saying it is an easy problem to solve; on the contrary, it is an extremely common one. But a few things could help:

  • Consider QA as members of the dev team, and include them more closely in sprint planning and estimation.

  • 'Releasable dev tasks' should not take up most of the sprint; complete, working features should. Try to gather metrics on dev time vs. QA time for each kind of task, and use those metrics when estimating future sprints (see the sketch after this list).

  • You might need to review your backlog to see if you have very coarse-grained features. Try to divide them into smaller tasks that can be estimated and tested more easily.
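
As an illustration of the metrics bullet above, here is a minimal sketch in Python. The task types, hours, and the QA-to-dev ratio are all made-up assumptions; the point is only that a simple historical ratio per task type can feed back into sprint estimates.

    from collections import defaultdict

    # Historical record of finished tasks: (task type, dev hours, QA hours).
    # All numbers here are invented sample data.
    finished_tasks = [
        ("ui",       8, 4),
        ("ui",      12, 7),
        ("backend", 16, 5),
        ("backend", 10, 3),
    ]

    # Sum dev and QA hours per task type.
    totals = defaultdict(lambda: [0, 0])
    for task_type, dev_h, qa_h in finished_tasks:
        totals[task_type][0] += dev_h
        totals[task_type][1] += qa_h

    for task_type, (dev_h, qa_h) in sorted(totals.items()):
        ratio = qa_h / dev_h
        # When planning, budget roughly (1 + ratio) hours of total work for
        # every hour of pure development estimated for this task type.
        print(f"{task_type}: ~{ratio:.2f} QA hours per dev hour")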

In summary, it seems that your team hasn't found its real velocity yet, because some tasks are not being considered during sprint estimation and planning.

But in the end, estimation inaccuracy is a tough project management issue that you will find in agile and waterfall projects alike. Good luck.


A little late to the party here, but here's my take based on what you wrote.

Now, Scrum is a project management methodology, not a development one. But it is key, in my opinion, to have a development process in place. Without one, you spend the majority of your time reacting rather than building.

I'm a test-first guy. In my development process I build tests first to enforce the requirements and the design decisions. How is your team enforcing those? The point I'm trying to make here is that you simply can't "throw stuff over the fence" and expect anything but failure to occur. That failure is either going to be by the test team (by not testing very well and thus letting problems slip by) or by the developers (by not building the product that solves the problem). I'm not saying you must write tests first - I'm not a militant or a test-first evangelist - but I'm saying you must have a process in place to produce quality, tested, ready-for-production code when you reach an iteration's end.
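
To make the test-first idea concrete, here is a minimal sketch using Python's built-in unittest. The `apply_discount` function and its discount rule are hypothetical, invented purely for illustration; the point is the order of work: the tests encode the requirement and are written first, then the implementation is written just to make them pass.

    import unittest

    # Hypothetical requirement: orders of 100 or more get a 10% discount.
    # Test-first: the tests below are written before apply_discount exists,
    # fail, and then drive the implementation.

    def apply_discount(total):
        """Return the order total after any volume discount."""
        if total >= 100:
            return round(total * 0.9, 2)
        return total

    class ApplyDiscountTest(unittest.TestCase):
        def test_small_order_pays_full_price(self):
            self.assertEqual(apply_discount(50), 50)

        def test_large_order_gets_ten_percent_off(self):
            self.assertEqual(apply_discount(100), 90.0)

    if __name__ == "__main__":
        unittest.main()

When the suite passes, the requirement is demonstrably met; that is the "process in place" doing its job before anything reaches QA.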

I've been right where you are, in a development methodology I call the Death Spiral Method. I built software for the government (US) for years in such a model. It doesn't work well, it costs a LOT of money, it produces late code and poor code, and it does nothing for morale. You can't make any headway when you spend all your time fixing bugs you could have avoided making in the first place. I was absolutely beaten down by the affair.

You don't want QA finding your problems. You want to put them out of work, really. My goal is to make QA flabbergasted because everything just works. Granted, that is a goal. In practice, they'll find stuff. I'm not super-human. I make mistakes.

Back to scheduling...

At my current job we do Scrum, we just don't call it that. We aren't into labels here but we are into producing quality code on time. Everyone is on-board. We tell QA what we'll have ready to test and when. If they come a-knocking two weeks early for it, they can talk to the hand. Everyone knows the schedule, everyone knows what will be in the release and everyone knows that the product has to work as advertised before it goes to QA. So what does that mean? You tell QA "don't bother testing XYZ - it is broken and won't be fixed until release C" and if they go testing that, you point them back at that statement and tell them not to waste your time. Harsh, perhaps, but sometimes necessary. I'm not about being rude, but everyone needs to know "the rules" and what should be tested and what is a 'known issue'.

Your management has to be on board. If they aren't, you are going to have trouble. QA can't run the show, and the dev group can't completely run it either. All the groups (even if a group is just one person, or one guy wearing several hats) need to be on the same page: the customer, the test team, the developers, management, and anyone else. More than half the battle is communication, typically.

Perhaps you are biting off more than can be accomplished during a sprint. Why are you doing that? To meet a schedule? If so, that is where management needs to step in and resolve the issue. If you are giving QA buggy code, expect them to toss it back. Better to give them 3 things that work than 8 things that are unfinished. The goal is to produce some set of functionality that is completely implemented in each iteration, not to throw together a bunch of half-done stuff.

I hope this is received as it is intended to be - as an encouragement not a rant. Like I mentioned, I've been where you are and it isn't fun. But there is hope. You can get things turned around in a sprint, maybe two. Perhaps you don't add any new functionality in the next sprint and simply fix what is broken. You'll have to decide that as a team.

One more small plug for writing test code: I've found myself far more relaxed and far more confident in my product since adopting a 'write the tests first' approach. When all my tests pass, I have a level of confidence that I simply couldn't have without them.

Best of luck!


It seems to me that there is a resource allocation problem in scenarios where QA functional testing is required for a given feature to be 'done' within a sprint. No one seems to address this in any QA-related Scrum discussion I've found so far, and the original question here is closely related, so I wanted to offer a partial answer and extend the question a bit.

As to the specific original question about development tasks taking the full sprint: the general advice of easing up on those tasks makes sense if functional testing by QA is part of your definition of 'done'. Given, let's say, a 4-week sprint, if it takes about a week to test multiple features from multiple developers, then the answer seems to be development tasks taking about 3 weeks, followed by a lag week of testing taking about 1 week. QA would of course start as soon as possible, but we have to recognize that for the last set of delivered features there will be about a week of lag. I realize that we want to get features to QA ASAP so you don't have this waterfall-like scenario inside a sprint, but the reality is that development usually can't get real, worthwhile functionality delivered to QA until 1 to 3 weeks into the sprint. Sure, there are bits and pieces here and there, but the bulk of the work is 2-3 weeks of development, then about a week of testing left over.

So here is the resource allocation problem, and my extension to the question. In the above scenario QA has time to test the planned features of a sprint (3 weeks' worth of development tasks, leaving the last week for testing the features delivered last), and let's assume QA starts to get some testable features after 1 week of development. But what about week #1 for QA, and what about week #4 for development?

If QA functional testing is part of the definition of 'done' for a feature in a sprint, then this inefficiency seems unavoidable. QA will be largely idle during week #1, and development will be largely idle during week #4. Of course there are some things that fill in this time naturally, like bug fixing and verification, design/planning, etc., but we are essentially scheduling our resources at 75% capacity.
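
To spell out the arithmetic behind that 75% figure, here is a back-of-the-envelope sketch in Python. The week counts are this post's assumptions about a 4-week sprint, not measurements.

    SPRINT_WEEKS = 4

    # Staggered schedule inside one sprint, per the assumptions above:
    # dev builds during weeks 1-3 and is largely idle in week 4;
    # QA tests during weeks 2-4 and is largely idle in week 1.
    dev_active_weeks = 3
    qa_active_weeks = 3

    dev_utilization = dev_active_weeks / SPRINT_WEEKS
    qa_utilization = qa_active_weeks / SPRINT_WEEKS

    print(f"dev utilization: {dev_utilization:.0%}")  # dev utilization: 75%
    print(f"QA utilization:  {qa_utilization:.0%}")   # QA utilization:  75%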

The obvious answer seems to be overlapping sprints for development and QA, since the reality is that QA always lags behind development to some degree. Demonstrations to product owners and others would follow the QA sprint, since we want features to be tested before being shown. This seems to allow more efficient use of both development and QA, since we don't have as much wasted time. Assuming we want to keep developers developing and testers testing, I can't see a better practical solution. Perhaps I have missed something, and I hope someone can shed some light on this for me; otherwise it seems this rigid approach to Scrum is flawed. Thanks.


Hopefully, you fix this by tackling fewer dev tasks in each sprint. Which leads to the questions: Who's setting dev's goals? Why is dev falling short of those goals consistently?

If dev isn't setting their own goals, that's why they're always late. And that isn't the ideal way to practice Scrum; that's just incremental development with big, deadline-driven deliverables and no actual stakeholder responsibility on the part of developers.

If dev can't set their own goals because they don't know enough, then they have to be more involved up front.

Scrum depends on the four basic values outlined in the Agile Manifesto.

  1. Interactions matter -- that means dev, QA, project management, and end users need to talk more and talk with each other. Software is a process of encoding knowledge in the arcane language of computers. To encode the knowledge, the developers must have the knowledge. [Why do you think we call it "code"?] Scrum is not a "write spec - throw over transom" methodology; it's the ANTI-"write spec - throw over transom" methodology.

  2. Working Software matters -- that means that each piece dev bites off has to lead to a working release. Not a set of bug fixes for QA to wrestle with, but working software.

  3. Customer Collaboration -- that means dev has to work with business analysts, end users, business owners, everyone who can help them understand what they're building. The deadlines don't matter as much as the next thing handed over to the customer. If the customer needs X, that's the highest priority thing for everyone to do. If the project plan says build Y, that's a load of malarkey.

  4. Responding to Change -- that means customers can rearrange the priorities of the following sprints. They can't rearrange the sprint in progress (that's crazy), but all the following sprints are candidates for changing priorities.

If the customer drives, then the deadlines become less artificial "project milestones" and more "we need X first, then Y, and this thing in section Z, we don't need that any more. Now that we have W, Z is redundant."