Shifting my Quality Perspective: From a Quality Role to a Product Owner Role

Not having the word “quality” in my job description is something new for me. I went from “Quality Engineer”, to “Quality Product Accountable”, to “Quality Coach”, always being directly involved in the quality of the product or the process. Now, as a product owner, I am trying to figure out what my role is in supporting the quality of our product. Here are some things I’ve noted down.

Not directly involved in the test strategy

In the team I am working in, we don’t have a dedicated tester or someone in a quality role. The developers decide on the test strategy. What to test, when and how is entirely up to them. I have a very high-level idea of what it looks like, but I don’t have the time to dive deep and understand it properly. I can only experience the outcomes of the strategy in the form of increased usage of our tool or bugs reported by our users. Even though I would like to find out how testing is done, I simply don’t have the time. The quality of the process is no longer my priority, which makes me feel a bit awkward.

Not involved in functional testing

Even though I am tempted to test the ins and outs of any new feature we want to release, I need to hold myself back and simply not do it. If by the time I have a look at new functionality the straightforward scenario paths are not working, we have all done something very wrong. My main testing efforts concern whether what we are building fits our users’ expectations, so I spend most of my day talking to them to try to figure out what they need. Nevertheless, from time to time I do some exploration on our product to identify gaps or possible improvements in the workflows.

Acceptance criteria – not as easy as I thought

As a tester I complained (a lot) about poorly defined acceptance criteria. Now that I am the one trying to identify them, I realise what a complicated task this is. Not all users have exactly the same process of doing things. Figuring out what a “correct” flow of actions should look like, especially for a new product, can be daunting at times. I have in mind techniques like example mapping that can help, but I have not figured out how to incorporate them into our Scrum process. Another aspect that confuses me is how to improve the acceptance criteria during development, as sometimes our original assumptions might be wrong. I try to be involved as much as I can in development, but it doesn’t always work out. This part is really a work in progress for me.

Providing vs receiving information

As a tester within a development team, I would test, collect my findings and provide this information to whoever needed it to make a decision. Now I find myself on the other side, making the decisions based on the input from the team. This is the most complicated aspect for me, as it boils down to me, a tester, trusting the information provided by the team, a.k.a. the developers. This is definitely a steep learning curve. The instinct of questioning the quality of the information is alive and kicking. But I find that, as we work together longer as a team, this slowly gets better. I am positive that a fair amount of scepticism from my side will always be there, but at least it will be directed at the right things to question and not at everything.

I find myself wondering why, in Scrum, the PO is not a member of the development team. Wouldn’t it be more efficient, from a quality point of view, if POs were more involved in the development process? I try to participate as much as I can in all development discussions, but in the end it might just be a matter of time availability.

Identifying shifts in my perspective of quality helps me use my testing skills to serve my needs as a PO. I try to focus on that rather than become a PO whose main concern is testing. But it seems that the road is long and bumpy.

Quality Indicators

One of the questions I get very often as a quality coach is “how can we measure quality?” If I were to reply right off the bat, my answer would be “No clue!”, as I find it very hard to quantify something as abstract and ever-changing as quality. Nevertheless, I think the premise of the question is useful, as I interpret it to be “how do we know we are doing a good job?”

Over time I have noticed that certain practices, processes or habits seem to serve as gauges pointing to whether things are going right or wrong. I call them quality indicators, and here are some that I find useful.

Number of reported bugs that are not “real” bugs

Some bugs reported by customers are as “real” as they can get. Orders gone missing, items deleted from their lists, the system down, users unable to log in and so on. Once they are reported, everybody agrees that they need to be fixed. But there are also those bugs that start the “bug or feature?” debate. These are the bugs that users report just because they didn’t understand how to use your software. These “fake” bugs are a good indication that you are implementing something different from what your users expect. They hint at problems in user experience or help documentation. If you observe a surge in such bugs, analysing them and finding patterns in their content might help reduce them significantly.
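A lightweight way to start spotting such patterns is to tally the reports that were closed as “working as designed” and group them by the part of the product they touch. The sketch below is a minimal illustration in Python; the field names (`resolution`, `area`) and the sample data are assumptions for the example, not a reference to any particular bug tracker.

```python
from collections import Counter

# Hypothetical export of closed customer reports: each entry records how the
# report was resolved and which area of the product it concerned.
reports = [
    {"resolution": "fixed", "area": "orders"},
    {"resolution": "works-as-designed", "area": "search filters"},
    {"resolution": "works-as-designed", "area": "search filters"},
    {"resolution": "fixed", "area": "login"},
    {"resolution": "works-as-designed", "area": "export"},
]

# "Fake" bugs are the ones closed as working as designed.
fake = [r for r in reports if r["resolution"] == "works-as-designed"]
share = len(fake) / len(reports)
print(f"'Bug or feature?' reports: {len(fake)}/{len(reports)} ({share:.0%})")

# Grouping them by product area hints at where expectations and implementation diverge.
for area, count in Counter(r["area"] for r in fake).most_common():
    print(f"  {area}: {count}")
```

Even a rough tally like this can show whether the “fake” bugs cluster around one feature, which is usually where the user experience or documentation needs attention.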

Time interval between customer reporting bugs and developer committing a fix

Looking at the lifecycle of a reported bug can be an interesting exercise in software development, as well as a beneficial one. Think of all the ways a customer can report an issue. Are issues reported through one channel fixed faster than those reported through another? If yes, why is that? How long does it take to decide that a change is actually required to address the issue? If a change is decided on, how easy is it to assign it to someone to fix, or do you first need to track down the right set of people? How much time does it take to identify them? You see, even if you are doing continuous deployment, you might still lose time figuring out how to get started on a solution, and that is something to keep an eye on.
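If your bug tracker records when a report came in and when the fixing commit landed, the interval itself is simple to compute. Here is a minimal sketch, assuming hypothetical records with a reporting `channel` and two timestamps; it averages the report-to-fix interval per channel, which is one way to see whether some channels consistently take longer.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical bug records: when the customer reported the issue, via which
# channel, and when a developer committed the fix.
bugs = [
    {"channel": "support ticket", "reported": "2023-03-01T09:00", "fix_committed": "2023-03-03T15:30"},
    {"channel": "support ticket", "reported": "2023-03-05T11:00", "fix_committed": "2023-03-06T10:00"},
    {"channel": "in-app feedback", "reported": "2023-03-02T14:00", "fix_committed": "2023-03-09T17:00"},
]

# Collect the report-to-fix interval (in hours) per reporting channel.
intervals = defaultdict(list)
for bug in bugs:
    reported = datetime.fromisoformat(bug["reported"])
    fixed = datetime.fromisoformat(bug["fix_committed"])
    intervals[bug["channel"]].append((fixed - reported).total_seconds() / 3600)

for channel, hours in intervals.items():
    print(f"{channel}: {mean(hours):.1f} hours on average")
```

The numbers themselves matter less than the trend: a channel whose interval keeps growing is a prompt to look at how those reports reach the people who can fix them.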

Time it takes to fix automated tests

You solved the customer issue in 5 minutes, ran unit and component integration tests successfully and pushed your changes through the deployment pipeline. You are royalty and you deserve a hot cup of chocolate while you wait for the “Thank you!” call from the customer. But alas! 3 integration tests are failing, so you do the honourable thing and start fixing them. 5 hours later you are done! If you have ever had a similar experience, or heard someone in your team complain about it, it is a good idea to take the time to have a look at your automated tests and how easy they are to maintain.

If anybody asked me for solid evidence that these indicators indeed improve the quality of a product, I would unfortunately have none to offer. Nevertheless, the time interval between a customer reporting a bug and a developer committing a fix can be used to signal changes in your lead time. The number of “fake” bugs and the time it takes to fix automated tests may have an impact on the measures of your software delivery performance as described in Accelerate: Building and Scaling High Performing Technology Organizations by Forsgren, Humble and Kim.

Measuring quality is hard, if not impossible, but identifying and monitoring areas that can be improved is at least a good start.