The Watchmen Problem

You come in every day, read your passing test results, and rubber-stamp a release as good to go. At some point in your AQA career, you're going to realize something, and a chill will run down your spine. What if your tests are passing, but the project is actually broken? How would you know? If your tests pass, why does manual QA keep finding bugs?

What if... your tests are broken?

In Watchmen, the question of 'who watches the Watchmen?' stems from the fact that humans were watching over other humans - so what's to stop the Watchmen from being just as corruptible?

AQA is about writing code that watches other people’s code. So who watches the AQA code? Who watches the watchmen?

Addressing the Watchmen

So few people talk about this. It's almost taboo in AQA to admit that maybe we should be more careful when writing code and choosing frameworks - that maybe the entire test base is flawed.

I understand why. Raising the issue can feel like it undermines your integrity and cuts the team's confidence in your tests. If you admit that your tests might be flawed, stakeholders at many organizations would ask: then what's the point? In some people's minds, you'd effectively be arguing yourself out of a job. If we can't rely on the test code, why write it at all?

This attitude is, as a whole, incorrect. As an industry, we should not shy away from addressing this problem. Maybe your tests are wrong. Maybe you chose the wrong framework. Maybe your tests are poorly designed. Maybe somewhere deep in the codebase someone did set assertAll = true (true story). But you won't know if you don't look.

Automation code isn't perfect. It never will be. There are days where all I do is fix bugs in my own test code. There are also times when I throw out days, even weeks, worth of work and test cases as I find them to be ineffective at their original task. This is the reality of AQA. In an open environment on a high-functioning team, an AQA should be free to voice these kinds of concerns and the team should address them appropriately and work to find a solution.

Another way to look at it is this: if this is a problem in your organization, then your tests aren't providing any value. If you're not providing any value, you're not advancing your career. Eventually, someone will question your value, and when that time comes you'll be left with little to show for it. It's better to stare down that barrel on your own terms than to wait for the firing squad later.

Watching the Watchmen

In my humble opinion, the solution to this problem is deceptively simple: keep your AQA codebase simple and have well-documented test cases.

Your production code needs to be tested because it is complicated. Functions are abstracted, libraries are imported, there’s a lot of coupling between classes and integration between systems. If you didn’t do that, you wouldn’t need to test your code so heavily.

As such, write simple AQA code. Don't pull in libraries you don't need. Don't abstract functionality unless it genuinely needs to be abstracted. Don't divide your code across ten different files scattered in five different systems (true story - that team relied on distributed batch files triggered by Jenkins builds to deploy different parts of the automation code, backed by multiple databases, at least one holding execution context and another holding the test data. Debugging was nigh impossible).

Sacrifice efficiency for readability. The code should be well formatted, with enforced naming conventions and standards. (True story - I found a UI declaration page that used 'dd', 'Ddown', '_s', 'select', and 'Dropd' to designate a Select field - all in the same UI page. The problem compounded when they also used setSelect and sendKeys interchangeably.) Each line of code should serve an obvious purpose. Make it so anyone can come in and clearly see what every line of code does.
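To make the naming point concrete, here is a hypothetical sketch (the class, field names, and suffix convention are mine, not from any real project) of what a consistently named UI page might look like. Locators are plain tuples so the sketch runs without a browser; in a real suite they would feed a WebDriver lookup.

```python
# Hypothetical sketch: every Select field's locator follows ONE convention
# (the "_dropdown" suffix), instead of mixing 'dd', 'Ddown', '_s', etc.
class CheckoutPage:
    # All Select fields end in "_dropdown" - one convention, applied everywhere.
    country_dropdown = ("id", "country-select")
    shipping_method_dropdown = ("id", "shipping-select")
    payment_type_dropdown = ("css", "select.payment")


def all_dropdown_locators(page_cls):
    """List locator names so a reviewer (or a lint rule) can check the convention."""
    return [name for name in vars(page_cls) if name.endswith("_dropdown")]


print(all_dropdown_locators(CheckoutPage))
```

Because the convention is mechanical, it can be enforced by a trivial lint check rather than by reviewer vigilance - which is exactly the kind of simplicity that keeps the watchmen honest.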

Have well-documented test cases that make the code easy to interpret. A test plan or a test management system will help here; either makes it easy to tell which parts of the system a test case covers - assuming you properly link your test cases to the test plan (preferably with unique IDs).

You should be able to prove the validity of a test case simply by having someone read it. If you can’t, it’s too complicated, and you’re going to run afoul of the watchmen.


 

AQA Classes and Certifications


Retro-Automation
