I won’t lie. Every time we release something new to our users, I get stressed. Will they like it? Will they use it? Did we miss something? Did we break something we didn’t expect?
And over the years, in order to relieve my anxiety, I often wondered:
How can we measure the quality of a feature before we unleash it to our users?
Honestly, I don’t think we can, but of course I might be terribly wrong. What I do believe, though, is that we can, and actually should, put guardrails in place to increase our confidence before we release something.
Here are the ones that I find the most useful:
Risk assessment
Not all functionality is equally critical. Having a conversation about the worst things that could happen if we get it wrong helps us understand how much attention we should pay. It also gets everyone involved on the same page about the importance of what we are about to do and the implications it might have for our users.
Solid test practices
Testing the software can be as easy or as hard as we allow it to be. Investing in testable code, providing environments to test in, and sharing knowledge between different roles can make it easier to test both straightforward and complicated scenarios, and so increase our confidence. A comprehensive overview of what was tested, how, and what it revealed is a valuable confidence booster for the aspects we have covered (without ever knowing what we have missed, of course).
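As a tiny illustration of what “investing in testable code” can mean in practice, here is a sketch in Python (all names are hypothetical): a function that takes plain values as input instead of fetching them itself, so both the straightforward and the boundary scenarios can be exercised directly, without any infrastructure.

```python
# Hypothetical example of testable code: no database calls, no globals,
# just inputs and outputs, so tests need no environment at all.

def apply_discount(price: float, discount_percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount_percent must be between 0 and 100")
    return round(price * (1 - discount_percent / 100), 2)

# The straightforward scenario
assert apply_discount(100.0, 25.0) == 75.0
# Boundary scenarios a risk conversation might surface
assert apply_discount(100.0, 0.0) == 100.0
assert apply_discount(100.0, 100.0) == 0.0
```

Code shaped like this also makes it easy to record which scenarios were covered, which feeds the overview mentioned above.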
Making the best of standards
In regulated environments we have to follow certain standards to certify our software. Even though this can sometimes feel like extra bureaucracy that slows us down, understanding the purpose of the standards and the risks they mitigate can actually increase our confidence when we fulfil them. Going the extra mile and making it as easy as possible to adhere to them is another step towards knowing that what we do is less likely to go wrong.
Redundancy & failure handling
If anything is deemed critical for the functionality we are rolling out, we need to look at what we have in place to keep it stable. That means going through all the infrastructure aspects that keep the lights on and making sure there is an emergency generator in case they go out. We also need a tested plan for handling failure: what would users experience during an error, would they have enough information when it happens, and who could they turn to for help? I find that a trained reaction to failure reduces the stress of setting functionality free.
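To make the “tested plan to handle failure” idea concrete, here is a hedged sketch in Python (the function names and messages are hypothetical): retry a flaky operation a few times, and if it keeps failing, fall back to a result that tells the user what happened and where to turn for help, rather than a blank error.

```python
import time

def fetch_with_fallback(fetch, retries=3, delay=0.0):
    """Try `fetch` a few times; on repeated failure, return a
    user-facing message and a pointer to where to get help."""
    last_error = None
    for _attempt in range(retries):
        try:
            return {"ok": True, "data": fetch()}
        except Exception as exc:  # in real code, catch narrower error types
            last_error = exc
            time.sleep(delay)  # fixed pause; real systems often use exponential backoff
    return {
        "ok": False,
        "message": "The service is temporarily unavailable. Please try again later.",
        "support": "Contact support and mention the error below.",
        "error": str(last_error),
    }

# A fetcher that fails twice, then succeeds, to exercise the retry path
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream timeout")
    return "payload"

result = fetch_with_fallback(flaky)
assert result == {"ok": True, "data": "payload"}
```

The point is less the retry loop itself and more that the failure branch is written, reviewed, and tested before release, so the reaction to an outage is trained rather than improvised.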
Psychological safety to speak up
Last but not least, an environment in which people feel safe to raise their concerns without fear of retribution, or of being labelled as troublemakers, can be the last line of defense. A machine cannot really tell us whether corners are being cut or unethical decisions are being made. That’s why it is good practice to encourage people to speak their minds rather than put them in a position where they think “I knew this was a bad idea, but why would I be the one sticking my head out to say so?”
For me, these guardrails are not “done” criteria that we go through, deciding to what degree we have adhered to each one to get a confidence score (even though we could, I guess).
Putting everything in place takes time, and it is not easy. Nevertheless, I believe that continuously working on these guardrails and trying to make them stronger keeps us on a good quality path, which in turn increases our confidence to release our software.