In transitioning to Agile Development, many organizations suffer from the widely accepted but incorrect premise that coding tasks and testing tasks are, and should remain, separate activities. In other words, designing solutions and writing code is done by one part of the organization, while the identification of test cases and the writing of test scripts (automated or manual) is performed by another part of the organization after the code is completed.

Except for cases where it is desirable to perform testing on completed “releases” of software products (for example, a final, end-to-end checkout of a release before shipping or moving it to the production environment), nothing could be more wrong or less efficient than continuing to perpetuate this arrangement if one is serious about building a high-quality product. Let’s look at some of the problems created when we separate our testers and our coders.

Coders and Testers get a different view of the product that they are mutually responsible for building – In non-Agile implementations, coders are supposed to read the functional specifications written by the business analyst (BA) and/or systems analyst (SA) in order to understand what it is they are supposed to build. In a similar fashion, testers are supposed to read the same functional specification, identify test cases, and write test scripts (automated or manual) that will fully test the functionality once built. This is supposed to ensure that the coders don’t somehow influence the testers such that the identified test cases are constrained by the coder’s approach. In the end, however, all this actually accomplishes is an increase in the likelihood that the coder’s interpretation of the functional specification and the tester’s interpretation of it will diverge. Even when supported directly by the BA and/or SA in separate conversations, there’s a good possibility that the coder’s and the tester’s interpretations will differ in important ways.

More to the point, testing is not supposed to be a test of the skills of the coders; it’s supposed to be a validation of the work of the developers collectively — that they have written good code that satisfies the needs of the customer. Writing code and testing the code in separate cycles does nothing but ensure an inefficient feedback loop consisting of handoffs from coders to testers (when the coders finish writing code) and from testers to coders (when the testers find problems with what the coders wrote). If you were an aircraft pilot, would you fly for a few hours and then check to see if you’re on the right heading? If you were a surgeon, would you just perform a procedure, checking on the status of the patient only once every twenty or thirty minutes? Clearly the answer to both of these questions is “no.” The pilot checks his course constantly and makes corrections as needed; the surgeon relies on the anesthesiologist and the rest of the surgical team to monitor the health of the patient throughout the procedure. Next question: If you were a coder, would you just write code for several days, only afterward checking to see whether everything still works? Why should the answer to this be “yes?” It shouldn’t be. Writing an application is as much about the testing as it is about the coding.

When working separately, agendas diverge – when coders and testers work in separate teams (or even in the same team, but not together), their priorities diverge. Coders become focused on getting the code done and satisfying the customer’s acceptance criteria, often without considering the various failure modes that a trained tester can frequently identify before the code is even written. On the other hand, testers often find themselves focused on trying to “break the code” as opposed to creating a relationship in which coding strategies and testing strategies are combined to create a high-quality product more efficiently. In the end, you get an adversarial relationship, with coders and testers distrustful of one another.

When working separately, roles diverge – how many times have you seen or felt clear evidence that testers are considered second-class citizens to coders? Personally, I’ve seen it more often than not and in more places than I would care to admit. For example, while you might hear about coders going to conventions and attending classes to improve their skills, testers are far less likely to be given the same opportunities. I’ve seen many more instances of coders playing a large role in how testing is to be done than of testers playing a large role in how coding should be done. Moving from testing to coding is often seen as career advancement, while going from coding to testing might imply that you were unable to handle the coding work (admittedly, I did hear a story once about a company whose testers came from the ranks of the best coders). Worse, I see management concerned about keeping the same coders on a product while being more than willing to move testers from project to project (or worse, thin them out by assigning them to multiple projects at the same time).

When working separately, workflows diverge – when we separate our testers from our coders (and worse, when we give our testers a longer, different list of projects to work on than the coders have), the work of coding the product and the work of testing the product diverges. Coders write code (and if we’re lucky, they are also doing unit testing) and then hand over the “completed” work to testers. However, the testers may be lagging behind because of a problem experienced in another project, where they have to figure out exactly what’s wrong and who did it before they can return it to the responsible coder. By the time they finally sit down and begin testing the latest coding, so much time may have passed that when the testing finds defects, the coder has no idea exactly what lines of code he or she changed or added. Thus, diagnosis and resolution take much longer to pursue and achieve (this is often the reason why many defects found by testers are left “until after coding is finished” — interrupting the coders causes more problems with what they are currently working on). In other words, by separating the coding tasks from the testing tasks, we are actually making both more expensive to perform.

The Agile Advantage

On an Agile team (whether it’s Scrum or XP doesn’t matter), the testers and coders work together, side-by-side. I frequently teach my CSM classes about collaboration on a team by asking them to imagine the team on the first day of the Sprint (or iteration) deciding how the work is going to be divided amongst the team members. The conversation sounds a little like this:

Tony, one of the team’s testers, says, “Hey, I’d like to work on the patient registration story. Who’s with me?”

Barb, the analyst, says, “Sure, I’m in; I wrote the initial story and acceptance criteria.”

Alan, one of the team’s coders, says, “Yep, I’ll help out.”

So, Tony, Barb, and Alan — a tester, an analyst, and a coder on the team — all move to one corner of the team room and, using the basic solution to which the team agreed during Sprint Planning (iteration planning), use a whiteboard to plan out their solution and approach. Every now and then, they get the team’s UI analyst involved for an opinion on the user interface. Keep in mind, the registration story has been sliced down pretty small by the entire team during previous backlog grooming sessions, so the patient registration story that the team is working on consists only of capturing the patient’s first and last name and getting them into the database. So, in a short period of time, having mapped out the basics, they sit down side-by-side to begin building a product.

While Barb starts writing the functional specification that will ultimately document what the team built, Tony and Alan collaborate on the code and the tests. Tony supports Alan by identifying special failure cases that Alan will need to handle. Alan supports Tony by writing the code in such a way that it is easily testable using the tools that Tony and the rest of the company’s testers use. Alan further improves the code by writing unit tests and frequently running them to ensure that his code is working properly. When ready, Alan and Tony run the code and all of the tests, both new and already existing, that validate that what they’ve put together is working as they discussed and that nothing else in the product has broken as a result of their work. When they discover defects, the code and the tests are corrected as needed and work continues (notice, Tony isn’t spending time opening defect reports each time something breaks; he and Alan work out any problems through face-to-face conversation).

Barb continues to work on the functional specification, keeping an eye on the functionality created by Alan and Tony, and suddenly realizes that they all forgot to include length checks on the patient’s last name. “It’s on the acceptance criteria for the story,” she says, “but we forgot to add the data length checks.” They discuss the data length checks a bit further and then move ahead: Barb adds the details of the checks to the functional specification, Alan adds the actual checking logic to his code, and Tony creates additional test cases and tests, as well as some new test data, to ensure that the data length checks work properly in the patient registration workflow.
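The forgotten length check might look something like the sketch below. The maximum length is an assumed value for illustration — the real limit would come from the story’s acceptance criteria, not from this example:

```python
# Hypothetical limit; the real value would come from the story's
# acceptance criteria that Barb wrote.
MAX_LAST_NAME_LENGTH = 50


def last_name_is_valid(last_name):
    """Check that a last name is non-empty and within the agreed limit."""
    return 0 < len(last_name) <= MAX_LAST_NAME_LENGTH


# Tony's additional test cases for the new check: a normal name passes,
# while an empty or over-long name is rejected.
assert last_name_is_valid("Lovelace")
assert not last_name_is_valid("")
assert not last_name_is_valid("x" * (MAX_LAST_NAME_LENGTH + 1))
```

Because Barb, Alan, and Tony are working side-by-side, the specification, the checking logic, and the tests for it are all updated in the same conversation.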

Barb, Alan, and Tony continue supporting one another, finding and fixing defects, satisfying all of the known acceptance criteria (and possibly even adding a few new ones during the day), and updating the team’s task board as they go. When they are finished with the story (i.e., the DONEness criteria are satisfied), they mark the story as completed and move on to more work.

This is how testing should work in an Agile team – side-by-side with coders and analysts. There should not be any handoffs or disconnects in the process of understanding, designing, building, validating, and documenting software. All of these activities are accomplished face-to-face (or, lacking collocation, with as much virtual face-to-face contact as can be created), and all are equal in importance, since DONEness cannot be satisfied unless every one of them is completed.

Do your testers work side-by-side with your coders? If not, why not? Try it.

