Unit Testing: Minimum Viable Test

Unit testing is testing of a single ‘unit’ of code. What’s a ‘unit’? Is it just one method? Methods often call other methods, though, so do they combine into a single unit? What about external black-boxed libraries - are they one unit or multiple? That’s up to you and the Developers to define. The only hard line I’ve seen on unit tests is “it cannot call an external system”: if your code calls out to a database or an API, it is larger than a single unit.

Unit testing became extremely popular in the late 2000s. In a sense, it was the first attempt at AQA - code that tested other code in a compact fashion as part of an auto-deploy process. Since then, most organizations have adopted unit testing as part of their development process, and all major IDEs and programming languages have unit testing capabilities built in.

In a broad sense, unit tests work by short-circuiting your code. They inject data directly into a given function or method and then verify that the method returns the expected response. They can cover both positive and negative test cases, and depending on the framework, a test can be as simple as 2-3 lines of code.
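To make that concrete, here is a minimal sketch in Python using pytest; apply_discount is a hypothetical function invented purely for illustration:

```python
import pytest

# Hypothetical function under test: applies a percentage discount to a price.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Positive case: inject known data, verify the expected return value.
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25.0) == 75.0

# Negative case: invalid input should raise, not return garbage.
def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```

Note that neither test touches a database, a network, or even another module - everything stays inside the unit.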

Unit testing can give you a decent degree of confidence that each check-in at least works in isolation - it’s not going to blow up catastrophically or fail to compile. Equally, if you are worried about certain edge cases, it’s easy to look at the unit tests, see what the Developer tested, and request a couple more tests for specific data sets. This lets people see how the code responds without having to actually read the code. It can also give you insight into how the Developer interpreted the task - if there’s a unit test where an email with dashes is returned as invalid, but dashes are actually considered valid characters, then you know there was a miscommunication and can work to correct it.
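A hypothetical sketch of that email scenario, assuming a regex-based validator the Developer might have written:

```python
import re

# Hypothetical validator as the Developer wrote it: the character class
# omits the dash, so the test below passes - code and test share the same
# misreading of the requirements.
def is_valid_email(address: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9._]+@[A-Za-z0-9.]+", address) is not None

def test_email_with_dashes_is_rejected():
    # A reviewer who knows dashes are valid can spot the miscommunication
    # right here, without ever reading the validator's regex.
    assert is_valid_email("jane-doe@example.com") is False
```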

Be wary of reading too much into unit test results, though. Unit tests have two fatal flaws: the writer and the scale.

The writer of the tests is the same as the writer of the code. That means any misunderstanding of the requirements that made it into the code will also make it into the tests - the tests will pass even though the code fails the requirements (there’s also plenty of research suggesting Developers subconsciously don’t want to break their own code and thus avoid writing tests they think will fail; the research says this is a thing, but it’s not something I’ve personally seen). This can, at least partially, be overcome by having someone, preferably a QA resource, periodically review the unit tests.

The scale of the tests is the other issue. Unit tests test units, tiny bits of your codebase, in isolation. If a method returns 4 and that meets its requirements, the test passes. But if another method is expecting it to return 6, your project will still fail. Just because things work in isolation doesn’t mean they’ll work together, as the sketch below illustrates. We’ll see this issue again in event-driven integration testing, but at a unit testing level this is OK - unit tests are not meant to test things working together.
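A minimal Python sketch of that 4-versus-6 mismatch, with hypothetical names:

```python
# Each unit meets its own requirement and passes its own test in isolation.
def batch_size() -> int:
    return 4  # this function's spec says 4, so its test below is green

def test_batch_size():
    assert batch_size() == 4  # passes

def crates_needed(total_items: int) -> int:
    # Written assuming each batch holds 6 items - a contract mismatch with
    # batch_size() that no unit test on either function alone will surface.
    # Catching it is integration testing's job.
    return total_items // 6
```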

While all this is valuable, and a great first step toward cleaning up a codebase, it is not really considered AQA today. As mentioned, unit tests are typically rolled in with development: after Developers write their code, they are expected to write a couple of unit tests to check in alongside it, and that is considered part of the acceptance criteria or definition of done for the task. This makes sense, as unit testing requires an intimate knowledge of the codebase that no one but the Developer who wrote it could be expected to have.

Integration Testing: The Foundation of AQA

Testing Taxonomy
