Of Quality, Testing and Other Demons

Being hired as a software tester implied that I needed a strong connection to product quality. And indeed, the word Quality appeared everywhere: it was in my job description, Quality Engineer; I took care of Quality Assurance; I provided proof of the quality of our product to pass the company’s Q-gates. So, for a long time, testing and quality were intertwined concepts for me.

As our product evolved and our team moved closer to the “everybody’s responsible for quality” mentality, I was asked to share my testing techniques with my developer colleagues. Developers learning how to test was a path to improving quality. But is it the only path?

Here is my attempt to answer this question, based on the working definitions I have been using for the past few years.

I go by Weinberg’s definition of quality:

Quality is value to some person.

Every time we change our software, we aim to improve the experience of the people who use it, i.e. to make it more valuable to them. We use our domain knowledge and expertise to make these changes. So if we improve our skills and stay up-to-date with the newest trends of our individual crafts, we can make more educated changes, thus improving the quality of our product.

Testing is performed to collect the necessary information, so we can assess whether our changes indeed add value. According to Kaner’s definition of testing:

Software testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the product or service under test.

To obtain this information, we investigate, we question, or in other words we test, to find the value of every change we make. Skilled testers have an arsenal of questions, stemming not only from the various disciplines involved in software development but also from product management and sales. They know how to prioritize them and when to stop asking. So teaching developers how to test, i.e. how to ask enough meaningful questions to evaluate their changes, indeed improves the quality of the product.

But by the same token, shouldn’t testing also be taught to everyone involved in the production of software? And to take this a step further, to be able to come up with the most relevant questions, shouldn’t everyone contributing to the product have at least a basic understanding of all the disciplines involved in software production and delivery? If we know facts about our product that lie outside the sphere of our own craft, isn’t it easier to make a valuable change?

For example, imagine you are a UI designer who needs to implement a new view, and the only requirements you have are that it should be accessible and integrate well with the rest of the application. How would your design look if you considered only the stated requirements? How would it look if you also knew the user demographics provided by marketing research? Would it be the same if you knew that certain elements are harder to describe in the documentation than others?

Summarizing my amateur philosophical ramblings:

Staying educated both in our own craft and in the crafts of others equips us to do better work. Going the extra mile to teach our peers the specifics of our craft might make us revise beliefs we hold to be true. Getting feedback on the things we find important might make us reconsider our priorities.

Enabling continuous education might be beyond a tester’s job description, but it should be on any quality advocate’s to-do list. I am pretty sure that this list contains more ways to improve the quality of software, focusing on how we make a change rather than how we evaluate it once it is done. If you are aware of them, I would be happy to learn about them too.

A Collection of Suggestions I Strenuously Resisted

As soon as I started working as a tester, I knew that this was what I wanted to do for the rest of my professional life. Being completely inexperienced in software development, my idea of testing was that its purpose was to find bugs. Oh and I loved finding bugs. Starting from gaps in the requirements, to functional bugs, to inaccuracies in the documentation, to missing identifiers of texts that needed to be translated, I was there, creating my bug tickets. Life was good.

But alas, my manager and the senior developers thought that I could contribute more to the quality of the product through additional activities, beyond just finding my beloved bugs. Here is a collection of suggestions that I received, and initially, emphatically, rejected.

Provide test ideas to developers before they start coding

The architect of the project noticed that I could spot gaps in user stories as soon as they landed in the backlog. He suggested that I add my testing ideas to the story, so the developers could take care of the pitfalls before they even started coding. I was outraged! If I laid out all of my ideas, how would I find bugs? If they wrote good software, how would I ever be happy, with no bug to be found? I even had the audacity to contradict him. He smiled politely and let me sleep on it, until I came to my senses.

Share my knowledge with the rest of the team

Not only were the developers interested in having my test ideas beforehand; worse, they wanted to know how I came up with them. They wanted to learn how to do exploratory testing efficiently, and I was supposed to be the one telling them all of the testing craft’s secrets. I was not thrilled with the idea; nevertheless, I paired with them for a couple of timed sessions and we found many issues together. We also discussed what a bug ticket should look like, so that everyone could easily understand its importance and resolve it in a timely fashion. Eventually they became quite good at it, leaving me with even fewer bugs to find. At least I had a hidden sense of pride in my “students”.

Merge bugs and enhancements into a single list

Another outrage. My well-thought-out corner-case bugs in the same bucket as things like “change colour of button from #4286f4 to #2977f4”. I half-heartedly agreed to have a look at these enhancements, reported by all sorts of people, from developers to sales colleagues. What I found was that most of them were valid points that would significantly improve the adoption of our product, sometimes with minimal effort. That gave me another perspective on how to approach testing: going beyond finding problems and looking for ways to actually improve our service.

Talk to people from sales

Another waste of my precious time. I knew what they would tell me: more features released faster, even if they were functionally incomplete, because that was what the customers wanted. In a nutshell, I thought that my job was to ensure that the customers got software that worked, not to ask whether what they got had any value to them (I wish I had come across Alberto Savoia’s “Test is dead” presentation much earlier than I did). After the first discussions, I reluctantly started seeing the error of my ways. Even if I couldn’t make any decisions on how features were prioritized, understanding what the customers found important sure made a difference to my testing approach. What their resilience to failure was and how long they allowed for a fix. What their business was like and how our product could serve it. I swallowed my pride and admitted defeat yet again.

In all of the above cases, I embarrassed myself by loudly refusing, or grumpily accepting, to even consider anything outside my testing comfort zone. Luckily, with approaches to creating quality software like Modern Testing becoming popular, most of the ideas I resisted are becoming mainstream. Will there be more things in the future that I will resist? Sure! Have I learned anything from my mistakes? I did. Developing software, and testing as part of it, is a creative process that fulfils the people who practice it. But if you don’t share your craft with the rest of your team and don’t listen to the people who consume your work, you will inevitably be left behind by both.