Quality Indicators

One of the questions I get very often as a quality coach is “how can we measure quality?” If I were to reply right off the bat, my answer would be “No clue!” as I find it very hard to quantify something as abstract and ever-changing as quality. Nevertheless, I think the premise of the question is useful, as I interpret it to be “how do we know we are doing a good job?”

Over time I have noticed that certain practices, processes or habits seem to serve as gauges pointing to whether things go right or wrong. I call them quality indicators and here are some that I find useful.

Number of reported bugs that are not “real” bugs

Some bugs reported by customers are as “real” as they can get. Orders gone missing, deleted items from their lists, system down, unable to log in and so on. Once they are reported, everybody agrees that they need to be fixed. But there are also those bugs that start the “bug or feature?” debate. These are the bugs that users report simply because they didn’t understand how to use your software. These “fake” bugs are a good indication that you are implementing something different from what your users expect. They hint at problems in user experience or help documentation. If you observe a surge in such bugs, analysing them and finding patterns in their content might help reduce them significantly.
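One way to keep an eye on this indicator is to look at how reported bugs were eventually closed. The sketch below is a minimal illustration of that idea in Python; the records, field names and resolution labels are assumptions standing in for whatever your bug tracker exports, not any specific tool’s schema.

```python
from collections import Counter

# Hypothetical bug-tracker export: "resolution" records how each report
# was eventually closed. Field names and labels are made up for this sketch.
reports = [
    {"id": 101, "resolution": "fixed"},
    {"id": 102, "resolution": "works as designed"},
    {"id": 103, "resolution": "cannot reproduce"},
    {"id": 104, "resolution": "fixed"},
    {"id": 105, "resolution": "works as designed"},
]

# Resolutions we treat as "not a real bug" for this indicator.
NOT_REAL = {"works as designed", "cannot reproduce"}

counts = Counter(r["resolution"] for r in reports)
fake = sum(counts[label] for label in NOT_REAL)
share = fake / len(reports)

print(f"'Not a real bug' closures: {fake}/{len(reports)} ({share:.0%})")
```

Tracking this share over time, rather than the absolute number, makes it easier to spot a surge regardless of how many bugs are reported overall.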

Time interval between customer reporting bugs and developer committing a fix

Looking at the lifecycle of a reported bug can be an interesting exercise in software development as well as a beneficial one. You can think of all the ways a customer can report an issue. Are issues reported through one channel fixed faster than those reported through another? If yes, why is that? How long does it take to decide that a change is actually required to address the issue? If a change is decided, how easy is it to assign the fix to anyone, or do you need to track down a specific set of people? How much time does it take to identify the right people? You see, even if you are doing continuous deployment, you might still lose time figuring out how to get started on a solution, and that is something to keep an eye on. A small sketch of this measurement follows below.
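As a rough illustration, the interval can be computed per reporting channel from two timestamps: when the customer reported the bug and when the fix was committed. The records and field names below are invented for the example; in practice they would come from your tracker and version control history.

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical bug records: reporting channel, time reported, time the fix
# was committed. All values here are made up for illustration.
bugs = [
    {"channel": "support ticket",   "reported": "2023-03-01T09:00", "fix_committed": "2023-03-03T16:30"},
    {"channel": "support ticket",   "reported": "2023-03-05T11:00", "fix_committed": "2023-03-06T10:00"},
    {"channel": "in-app feedback",  "reported": "2023-03-02T08:00", "fix_committed": "2023-03-10T17:00"},
]

# Group report-to-fix intervals (in hours) by channel.
intervals = defaultdict(list)
for bug in bugs:
    reported = datetime.fromisoformat(bug["reported"])
    fixed = datetime.fromisoformat(bug["fix_committed"])
    intervals[bug["channel"]].append((fixed - reported).total_seconds() / 3600)

for channel, hours in intervals.items():
    print(f"{channel}: median {median(hours):.1f} h over {len(hours)} bugs")
```

Comparing the medians per channel is one way to start answering the question of whether some reporting paths are systematically slower than others.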

Time it takes to fix automated tests

You solved the customer issue in 5 minutes, ran unit and component integration tests successfully and pushed your changes through the deployment pipeline. You are royalty and you deserve a hot cup of chocolate while you wait for the “Thank you!” call from the customer. But alas! 3 integration tests are failing, so you do the honourable thing and start fixing them. 5 hours later you are done! If you have ever had a similar experience or heard someone in your team complain about it, it is a good idea to take the time and have a look at your automated tests and how easy they are to maintain.
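If you want more than anecdotes, one way to gauge this is to look at how long a test stays red before someone gets it green again. The sketch below works on an invented, chronological CI history per test; it is not a real CI system’s API, just an assumption about what such data might look like.

```python
from datetime import datetime

# Hypothetical CI history: for each test, a chronological list of
# (timestamp, outcome) pairs. Values are invented for this sketch.
ci_history = {
    "test_checkout_flow": [
        ("2023-04-01T10:00", "failed"),
        ("2023-04-01T12:00", "failed"),
        ("2023-04-01T15:00", "passed"),
    ],
    "test_login": [
        ("2023-04-02T09:00", "passed"),
    ],
}

for test, runs in ci_history.items():
    first_red = None
    for timestamp, outcome in runs:
        t = datetime.fromisoformat(timestamp)
        if outcome == "failed" and first_red is None:
            first_red = t  # the test just turned red
        elif outcome == "passed" and first_red is not None:
            hours = (t - first_red).total_seconds() / 3600
            print(f"{test}: red for {hours:.1f} h before being fixed")
            first_red = None
```

Tests that regularly stay red for hours are a hint that the test code itself has become hard to maintain.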

If anybody asked me for solid evidence that these indicators indeed improve the quality of a product, I would unfortunately have none to offer. Nevertheless, the time interval between customer reporting bugs and developer committing a fix can be used to signal changes in your lead time. The number of “fake” bugs and the time it takes to fix automated tests may have an impact on the measures of your software delivery performance as described in Accelerate: Building and Scaling High Performing Technology Organizations by Forsgren, Humble and Kim.

Measuring quality is hard, if not impossible, but identifying and monitoring areas that can be improved is at least a good start.

Of Quality, Testing and Other Demons

Being hired as a software tester implied that I needed to have a strong connection to product quality. And indeed the word Quality appeared everywhere. It was in my job title, Quality Engineer; I was taking care of Quality Assurance; I would provide proof of the quality of our product to pass the company’s Q-gates. So, for a long time, testing and quality were intertwined concepts for me.

As our product evolved and our team was moving closer to the “everybody’s responsible for quality” mentality, I was asked to share my testing techniques with my developer colleagues. Developers learning how to test was a path to improve quality. But is it the only path?

Here is my attempt to answer this question, based on the working definitions I have been using for the past few years.

I go by Weinberg’s definition of quality:

Quality is value to some person.

Every time we change our software, we aim to improve the experience of the people who use it, i.e. to make it more valuable to them. We use our domain knowledge and expertise to make these changes. So if we improve our skills and stay up-to-date with the newest trends of our individual crafts, we can make more educated changes, thus improving the quality of our product.

Testing is performed to collect all the necessary information, so we can assess if our changes indeed add value. According to Kaner’s definition of testing:

Software testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the product or service under test.

To obtain this information, we investigate, we question, or in other words we test, to find the value of every change we make. Skilled testers have an arsenal of questions, stemming from the various disciplines involved in software development but also from product management or sales. They know how to prioritize them and when to stop asking. So teaching developers how to test, i.e. how to ask enough meaningful questions to evaluate their changes, indeed improves the quality of the product.

But by the same token, shouldn’t testing also be taught to everyone involved with the production of software? And to take this a step further, to be able to come up with the most relevant questions, shouldn’t everyone participating in the product have at least an understanding of the basics of all the disciplines involved in software production and delivery? If we know facts about our product that are outside the sphere of our craft, isn’t it easier to make a valuable change?

For example, imagine you are a UI designer who needs to implement a new view, and the only requirements you have are that it should be accessible and integrate well with the rest of the application. How would your design look if you considered only the stated requirements? How would it look if you also knew the demographic of the users provided by marketing research? Would it be the same if you knew that certain elements are harder to describe in the documentation than others?

Summarizing my amateur philosophical ramblings:


Staying educated both in our own craft and in the craft of others equips us to do better work. Going the extra mile to teach our peers the specifics of our craft might make us revise beliefs we hold to be true. Getting feedback on the things we find important might make us reconsider our priorities.

Enabling continuous education might be beyond a tester’s job description, but it should be included in any quality advocate’s to-do list. I am pretty sure that this list contains more ways to improve the quality of software, focusing on how we make a change rather than how we evaluate it once it is done. And if you are aware of them, I would be happy to know them too.