AQA Reporting

Proper reporting is a big issue I see in AQA right now. A lot of off-the-shelf tools either don't offer reporting at all, or what they do offer is very lackluster, and there's not a whole lot of discussion about this in the AQA community.

Most AQA tools have some sort of reporting mechanism for the tests. GUI options often have some display after the tests are run. Code-based solutions export XML or even HTML reports. Build/deploy software can sometimes even integrate with these reports and display them in some graphical manner, or at least make them slightly more accessible. However, in my experience, they all tend to fall short in one category or another.

Increasing Visibility

Why are we talking about this at all? Well, AQA has a serious visibility problem. It's getting better as companies start to rally around 'automate all the things' (for better or worse), but there's still skepticism. At many companies, it's hard to show the value of AQA over other initiatives the company may want to invest in. It's doubly hard when several AQA engineers come to a go/no-go meeting with a verbal list of bugs and nothing concrete to show for it.

In the minds of stakeholders, what's the value of AQA over manual QA if they result in the same thing? Isn't the point of automation to make things, well, automated? Indeed it is, and one of the great features of AQA is the ability to produce concrete reports. THIS passed, THAT failed - here's a chart that shows exactly that.

Proper reporting can help increase the visibility of your AQA initiatives, providing real-time results that anyone can drill into. Want to know what tests ran? Here you go. Want to see how this release stacked up against the previous one? Click this button. Is everything on fire? Why yes it is, someone should fix that. A good report can answer all the common buy-in questions: Why are we investing in this? What value does this have? What have these people been working on? How does this increase the value of our product? And many more.

Tenets of Proper Reporting

I’ve instituted reporting standards at every company I’ve worked for - much to the joy of teams and stakeholders alike. In doing so, I’ve developed a list of core tenets, things that all reports should strive to accomplish.

Note that this section is light on implementation details. The main reason there's no universal, off-the-shelf solution to this problem is how unique AQA is across organizations (and perhaps that's another issue that should be considered separately). I've worked on many different kinds of AQA across many different organizations, and I've never seen the same project structure twice - every single one differed in how the tests were run, where they ran, what tools were used, and what common terms like 'suite' and 'case' even meant. All of these things affect the structure of your reports and, most importantly for us here, how you'd go about parsing them.

Available

First and foremost, the reports need to be made available. To everyone. On demand. It doesn't matter if they're XML files or pretty HTML reports - make them available. Put them out there for people to review and parse for themselves.

Though the AQA engineer who wrote the test will be its ultimate interpreter, other people should have a stake in the results as well. By making your reports available you're showing that you don't have anything to hide and are open to feedback and communication (and hopefully you are, because after you make your reports available you're going to be receiving a lot of both).

Filtered

Second, this site is going to contain a lot of reports - potentially thousands within the first month of being active, and they're going to come from all across the organization (see: Unified). Further, the people viewing your reports are going to come from all across the organization. You're going to need some way of filtering them so that anyone looking can quickly get to the results they want to see and filter out the noise.
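
The filtering itself doesn't have to be complicated. Here's a rough sketch in Python, assuming each report has already been parsed into a record with a few metadata fields (the field names are placeholders, not a standard):

```python
# A rough sketch, assuming each report has already been parsed into a record
# with a few metadata fields; the field names are placeholders, not a standard.
from datetime import date

reports = [
    {"product": "checkout", "suite": "api", "run_date": date(2024, 5, 1), "failed": 2},
    {"product": "checkout", "suite": "ui",  "run_date": date(2024, 5, 1), "failed": 5},
    {"product": "search",   "suite": "api", "run_date": date(2024, 5, 2), "failed": 0},
]

def filter_reports(reports, product=None, suite=None, since=None):
    """Return only the reports matching the given criteria; None means 'any'."""
    return [
        r for r in reports
        if (product is None or r["product"] == product)
        and (suite is None or r["suite"] == suite)
        and (since is None or r["run_date"] >= since)
    ]

# e.g. only the checkout team's API runs
print(filter_reports(reports, product="checkout", suite="api"))
```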

Usable

Third, make them pretty. An XML file is better than nothing, but no one has the time to read it. Every morning. For 30 APIs. And 50 front-end tests. It’s really simple to formulate some basic tables from your data sets and turn your XML data into an HTML table that shows the details of the reports.
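
As an example of how little it takes, here's a minimal Python sketch that turns a JUnit-style XML report into a plain HTML table. The format and file name are assumptions - adjust the parsing to whatever your framework actually emits:

```python
# A minimal sketch, assuming JUnit-style XML (one <testcase> element per test,
# with a <failure> child when it failed); adjust to whatever your tools emit.
import xml.etree.ElementTree as ET
from html import escape

def junit_to_html_table(xml_path):
    """Turn a JUnit-style XML report into a simple HTML results table."""
    rows = []
    for case in ET.parse(xml_path).getroot().iter("testcase"):
        failure = case.find("failure")
        status = "FAIL" if failure is not None else "PASS"
        message = failure.get("message", "") if failure is not None else ""
        rows.append("<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            escape(case.get("classname", "")), escape(case.get("name", "")),
            status, escape(message)))
    header = "<tr><th>Suite</th><th>Test</th><th>Result</th><th>Message</th></tr>"
    return "<table>" + header + "".join(rows) + "</table>"

print(junit_to_html_table("results.xml"))  # hypothetical report file
```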

Consider the different people that will be reading the reports. You’ll want a high-level overview probably in the form of a pretty chart or graph, and a low-level drill down that shows the actual test steps and error messages. Try not to leave anything out - every piece of information a test framework may give you is potentially valuable and you want your reporting to be a one-stop-shop.

If you struggle with this component, consider reaching out to someone in User Experience (UX) and getting their feedback. That may require some back and forth and a bit of explanation, but it will be well worth your time. Remember, this is a showcase for your work; you want it to look nice and be usable.

Unified

Fourth, unified and aggregated. All the tests for one product should be reported together in one place. I shouldn't need to go to one server for the API results, another for performance tests, and a third for front-end. Or worse, a different reporting page for each individual API.

This also means you need a common tracker: some kind of unique ID to tag your tests with so the results can be distinguished and filtered on the page. Good naming conventions help here too, and a test management system will make both easier.
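
One possible shape for that tracker, sketched in Python - the ID format and field names are assumptions, not a standard; the point is that every tool reports against the same stable IDs:

```python
# One possible convention, not a standard: every result carries a stable ID of
# the form <product>.<suite>.<case>, no matter which tool produced it.
from dataclasses import dataclass

@dataclass
class TestResult:
    test_id: str      # e.g. "checkout.api.create-order" - stable across runs and tools
    tool: str         # which framework produced the result (API, UI, performance, ...)
    status: str       # "pass" or "fail"
    duration_ms: int

results = [
    TestResult("checkout.api.create-order", "pytest",   "pass", 420),
    TestResult("checkout.ui.create-order",  "selenium", "fail", 9100),
]

# Because every tool reports against the same IDs, one query answers
# "show me everything touching create-order" regardless of where it ran.
print([r for r in results if r.test_id.endswith("create-order")])
```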

Historical

Fifth, historical. Results need to be archived - all of them, not just the good ones. This typically means some sort of database with a parser that translates the XML reports into database records. These historical reports should also be readily available on the same page - with just a click or two I should be able to see the history of a given test. This allows you to see if a bug is new or if it's something that's happened before.
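
A minimal sketch of what that parser might look like, assuming the same JUnit-style XML as above and a local SQLite file (the table and column names are illustrative only):

```python
# A minimal sketch, assuming JUnit-style XML reports and a local SQLite file;
# table and column names are illustrative only.
import sqlite3
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def archive_report(xml_path, db_path="aqa_history.db"):
    """Parse one XML report and append every result to a history table."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS results
                    (run_at TEXT, suite TEXT, test TEXT, status TEXT, message TEXT)""")
    run_at = datetime.now(timezone.utc).isoformat()
    for case in ET.parse(xml_path).getroot().iter("testcase"):
        failure = case.find("failure")
        conn.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?)",
                     (run_at,
                      case.get("classname", ""),
                      case.get("name", ""),
                      "fail" if failure is not None else "pass",
                      failure.get("message", "") if failure is not None else ""))
    conn.commit()
    conn.close()

# With every run archived, "has this test failed before?" is one query:
#   SELECT run_at, status FROM results WHERE test = ? ORDER BY run_at;
```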

Measured

Finally, any metrics that appear on the page need to be well understood. Make sure everyone understands what the reports mean, especially stakeholders. This is easy if you go with a simple table of results that lists 'pass' or 'fail'; it's harder if you include charts or graphs.

If you have a chart that says 95% of your tests passed, everyone should know exactly how to interpret that and what it means in terms of the health of the product. I've seen products where 95% is the healthiest that product has ever been. I've also seen products where 95% means everyone gets fired for incompetence. There is no best practice for how to configure these metrics; they need to be agreed upon by the team - and likely revised several times as issues arise.
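
One way to keep a pass-rate metric unambiguous is to write down the formula and the agreed thresholds right next to each other. A minimal sketch, with placeholder numbers:

```python
# A minimal sketch: the rate is always computed the same way, and the
# thresholds are written down and agreed on by the team (values are placeholders).
def pass_rate(passed, failed):
    """Pass rate over executed tests only; skipped tests are excluded."""
    executed = passed + failed
    return 100.0 * passed / executed if executed else 0.0

THRESHOLDS = [(98.0, "healthy"), (90.0, "investigate"), (0.0, "release at risk")]

def interpret(rate):
    for floor, label in THRESHOLDS:
        if rate >= floor:
            return label

rate = pass_rate(passed=95, failed=5)
print(f"{rate:.1f}% -> {interpret(rate)}")  # 95.0% -> investigate
```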

Sample

[Screenshot: a sample report page - a basic results table with filtering to the side and links to detailed log pages]

You can see it's a very basic table with some filtering to the side; clicking the link on the far right takes you to a detailed log page containing the raw log output and any artifacts (e.g., screenshots). This is a very basic approach that took me maybe a few days with HTML, JavaScript, and PHP. It doesn't have any fancy charts, but those should probably be a stretch goal for most projects.

Team Ownership

Obviously this is where a Community of Practice for your AQA resources will really help. You can't have most of these things if every AQA engineer is off building their own thing; there needs to be communication between AQA resources.

One last note: AQA needs to own this. If someone asks for some functionality, don't be afraid to say no. Someone somewhere, probably in management, will ask for a feature that violates the integrity of the reports or alters the data to paint a more favorable picture. Shut that down, loudly and simply. You want a feature to delete tests? No. You want a feature that can change a failing test to a passing one? No. That simple. Both of those are true stories, by the way, and I didn't say no - and boy, the trouble that caused.

Make Tests, Not Tasks