In my last article, "Going with the Flow", I argued that building functional silos around different disciplines hinders our ability to deliver value. In particular, testing shouldn't be seen as a separate activity involving a handoff. Instead, we should seek to eliminate handoffs by working together. Since then I've been doing more thinking about the role of testers on a team. Having worked on agile (and supposedly 'agile') teams for a decade, I've noticed the same patterns of behaviour crop up time and again. Why are testers treated as lesser than the devs? Why do the testers feel stressed and pressured? Why do arguments about the nature of testing break out between the testers and the business, in a way that rarely happens between the business and the devs?

My suspicion is that this stems from how we see testing as something separate - in terms of who performs testing, why we test, and where it sits as an activity within the overall software development lifecycle.

Let's delve into it.

Juking the Stats

Commonly, 'agile' project tracking tools such as JIRA are set up in a way that enforces handoffs. Projects set up with separate swimlanes for "development", "code review" and "testing" are a prime example of this: they're activities performed at different times by different people. This makes me uncomfortable, because it's similar to the concept of "juking the stats", as explained in The Wire.

From a team perspective, splitting the board up like this lets us keep nominally to our Work In Progress (WIP) limits whilst making it look like we're getting more done. However, this comes back to bite us, because we end up committing to too much. Projects set up this way end up encouraging poor practice among teams. In particular, I've observed all of the following over my career:

  • A developer can fling an item into code review and immediately start working on something else. But that item in code review is not done - so why is something new being started?!
  • The need to salami-slice work so that everyone is working "to capacity". At any point, we all need to be working on our own ticket. This prioritises individual productivity and compromises the team's ability to focus, as a whole, on getting something fully developed to completion.
  • The requirement to "project manage", because when developers prioritise new work over code reviews, code reviews never get done.
  • The introduction of the term "dev complete", which creates the fiction that the work is done pending bureaucracy. Testers therefore become a barrier to getting things done - just another hoop to jump through.
  • Testing is seen as a "bottleneck" which itself needs to be "project managed". Typically developers outnumber testers, so when we prioritise everyone working to capacity, testers are overwhelmed by having to test lots of tickets at the end of a sprint.
  • Testers get the blame when things don't get done, because their activity sits later on the path to "done".

This is a dire, but all-too-common, state of affairs. We do our testers harm by rigging the game like this.

Nowadays, I prefer to simply have an "in progress" column that encompasses development, testing AND business sign-off/UAT activity. This simpler approach reduces the swimlanes to "To Do", "In Progress" and "Done". It helps us maintain our focus and our WIP limits, ensuring that as a team we work together to finish the stuff we start before starting anything new. Project managers who want "visibility" can still get it by speaking to the team.

Testers as QAs - Quality Advocates

Thinking about this more: if my argument is for work to be treated as "in progress" until it's done, what is the role of a tester on a team? Fair question.

I've spoken to many senior IT bods who fundamentally view testing as a cost-sink activity that rarely delivers the "assurance" of quality they're (not) paying for. This thinking comes from experiences with offshore testing, especially when performed with cheap labour in a rote-based, manual fashion. We all understand these folks are committed to quality and are doing their best, but when testing is seen as something that gets in the way and rarely delivers results, it's difficult for the people paying to conclude that quality is being under-invested in. Instead, they try to make the entire department redundant by "automating the testing".

Automation is a good thing, for what it's worth. It provides fast, reliable, repeatable, auditable feedback. Compared to manual, rote-based activity conducted via a handoff to another department, automation is a leaner model for delivering value. We should recognise, though, that automation works well for some activities, but not all. Laborious tasks that are repeated frequently and require precision checking are prime candidates: computers don't get tired or change-blind. But computers aren't creative, so we still need human ingenuity on top of our automation - 'exploratory' testing. This is a high-value activity that requires a deep contextual understanding of the product being developed and the environment in which it operates.
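To make that distinction concrete, here's a minimal sketch (in Python, with pytest) of the kind of laborious, precision-heavy check that suits automation well. The to_pennies function, the half-up rounding rule and the figures are all invented for illustration - they're not from any real project - but the shape is the point: tedious to verify by hand, trivial to re-run on every change.

```python
# Hypothetical example: rounding prices to pennies across many boundary cases.
# Checking these by hand is tedious and error-prone; a computer never gets
# tired or change-blind, so this is a prime candidate for automation.
import pytest
from decimal import Decimal, ROUND_HALF_UP


def to_pennies(amount: str) -> int:
    """Convert a decimal price string to an integer number of pennies."""
    return int(Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) * 100)


@pytest.mark.parametrize(
    "amount, expected",
    [
        ("0.00", 0),
        ("0.005", 1),      # rounds half up
        ("19.99", 1999),
        ("19.994", 1999),
        ("19.995", 2000),  # boundary case a tired human could easily miss
    ],
)
def test_to_pennies_rounds_half_up(amount, expected):
    assert to_pennies(amount) == expected
```

What no parametrised check will ever do is ask "what happens if someone pastes in the currency symbol as well?" - that's the exploratory, human part.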

That's the crux of it, really. Testing, when under-invested in, rots to the point where we pay for it only because we have to, in order to get sign-off. If you see it as a cost-sink rather than a value-add, you're incentivised to reduce costs. But you need to pay for quality.

Quality cannot come from one discipline alone. Although I've used it extensively in this post, I don't like the term 'tester'. I prefer QA. However, I don't mean QA in the traditional sense of quality assurance. Quality cannot be assured; it can only be advocated for. Therefore, I see QA as meaning quality advocate, not quality assurance.

(Incidentally, I'm not the first person to suggest this. Alister B Scott proposed the same idea 7 years ago, demonstrating that someone always beats me to an idea!)

Quality Advocate more accurately describes the role being performed on the team. The QA may perform certain testing activities, but so will a developer, and so will the product owner. A QA champions quality, empowering the team to develop skills and put them into practice. The QA will care very deeply about building a high-quality product and seek to coach the team. They can advise on where automation is best-placed, they question if we can get faster feedback, they challenge our ways of working.

The tester is not there as the safety net, catching bugs before they go to production. They're questioning why you need the proverbial 'safety net', when a belay system and good rope discipline will stop the person falling in the first place.

Just as professional sports players and musicians have coaches, so do professional software development teams. It's not a sign of failure; it's a sign that the team is supported in improving its ways of working.

Mindset -> Frame of Reference

I take umbrage at the widely-circulated argument that testing is a "mindset" which you either have or you don't, and that this is why some people are testers and some people are developers. This is complete and utter bollocks.

A mindset, like a culture, is not a fixed thing. As mentioned in my previous post, culture is a lagging indicator, not a leading one. The same applies to mindsets.

"Testing is a mindset" risks the subtext of "you don't have it, stay in you lane". That's not empowering at all. At large, it causes all the problems described above, because testing becomes a separate thing done by separate people at a separate time.

It can't be true, though. As a developer I write tests every day, so how can I not have the testing mindset? It just doesn't make sense when viewed this way. A mindset is not some innate quality that you possess by gift of birth. It's a lagging indicator because it reflects your frame of reference - what your training and experience have led you to. Developers have a deep understanding of programming languages, development frameworks and whatnot because of the work they've done and the training they've received. Similarly, testers have a deep understanding of the common causes of failure in applications because of their backgrounds. Both are entirely learnable.

Some express concerns about the notion of individuals in teams having "specialisms". But viewed this way, it makes sense that someone with a test-focused frame of reference is well-equipped for exploratory testing, just as the database person is well-equipped for fine-tuning the application's database performance. They're all important roles.

Testing is Everyone's Responsibility

Testing is already an activity everyone on the team commits to. Developers should be writing a lot of tests: unit, integration and acceptance. QAs champion quality within the team, but may also allocate some of their time to exploratory testing sessions. They're well-placed to do this not because of their "mindset", but because of their frame of reference as advocates of quality. The notion of testing a ticket in isolation blurs away: the ticket is simply "in progress", with quality built in. Testing is a value-add activity, rather than a cost-sinking bureaucratic hurdle. As a continuous activity throughout the software development lifecycle, it doesn't fall upon one person's (or discipline's) shoulders. Rather, it's the responsibility of everyone in the team.
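As a rough illustration of what "everyone tests" can look like in code, here's a hedged sketch. The Basket class and its behaviour are invented for the example; the point is the layering - a fast unit check the developer runs constantly, and an acceptance-style check phrased in the language of the business that a product owner or QA can read and challenge.

```python
# Hypothetical example: the same team-owned behaviour exercised at two levels.
from dataclasses import dataclass, field


@dataclass
class Basket:
    items: list[int] = field(default_factory=list)  # prices in pennies

    def add(self, price: int) -> None:
        self.items.append(price)

    def total(self) -> int:
        return sum(self.items)


# Unit test: fast, focused feedback for the developer on every change.
def test_total_sums_item_prices():
    basket = Basket()
    basket.add(250)
    basket.add(199)
    assert basket.total() == 449


# Acceptance-style test: written in business language, reviewable by the
# product owner and the QA as well as the developer.
def test_customer_sees_combined_price_for_their_shopping():
    # Given a customer with two items in their basket
    basket = Basket()
    basket.add(250)
    basket.add(199)
    # When they view their total
    total = basket.total()
    # Then it is the combined price of both items
    assert total == 449
```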

To the QAs I've worked with in the past - I'm genuinely sorry for this state of affairs, and as a developer I commit to empowering you to become advocates of change within your teams.