Quotes & Notes on Building Successful Communities of Practice by Emily Webber

The book: Building Successful Communities of Practice

The author: Emily Webber

…training and learning are not the same thing.

Companies and individuals allocate time and money for “training” but may ignore or forget the “learning”.

Communities of practice create the right environment for

  • social learning,
  • experiential learning, and
  • a rounded curriculum,

leading to accelerated learning for members.

Your community will have the best chances of building trust between its members if people have the chance to be physically in the same place and are able to meet face-to-face.

If your community is likely to be distributed over a large distance, think about how they will build trust. Try to find a way for them at least to spend some time together in the same space.

If travelling is not an option, video calls between small subgroups of the community could also help.

As a community matures, it will move on from just sharing knowledge to solving shared problems, using the collective knowledge of the community. This will create better practice.

This is especially important for bigger organisations with multiple projects or products. One team might solve a problem without anyone else in the organisation being aware of the solution, so other teams end up re-inventing the wheel.

Communities of practice encourage active participation and decision-making by members as opposed to decision-making by the leader or group of leaders.

It would be interesting to see how communities of practice make decisions if the members are split into factions. Can they still work?

Those who lead a community should set the standards for what “good” looks like within the profession, at various levels.

Even if members of the communities make decisions, it seems that there is always the need for a person or a group of people to lay down the groundwork.

I have found that communities thrive best when there is an understanding of the boundaries around membership. These boundaries provide members with the emotional safety necessary for needs and feelings to be exposed and for intimacy to develop.

The community vision should be

  • aspirational,
  • achievable, and
  • easy to understand.

Having a vision gives the group a shared understanding of why they exist, which helps create common tasks.

From vision, to goals, to tasks.

Goals should be SMART:

  • Specific,
  • Measurable,
  • Achievable,
  • Realistic, and
  • Timely.

The community achieving a goal is not the same thing as every member achieving it. It is up to the members who have accomplished something to bring along those who have been left behind.

Make sure you create time for the community to have less-structured meetings where they can discuss things that are on their mind and bring their problems to the community’s safe space.

These discussions might not directly affect the community’s existing goals, but they might help set new ones.

Make it easy for people to self-identify with what you say about community members, so that they can easily see how it would benefit them.

If a community is to become self-sustaining, leadership needs to be taken on by the group as a whole and not owned by just one person or one small group.

This could nicely transfer to quality. “If a development team is to become self-sustaining, quality needs to be taken on by the group as a whole and not owned by just one person or one small group.”

Communities of practice only exist as long as there is an interest from members in maintaining the group.

If the community’s goals are too big and hard to reach, the benefits might take too long to materialize. Maybe creating “small” goals in analogy with “small” user stories might help sustain the community longer.

Like all good things, communities of practice take a lot of time and effort to get right.

You can view a full list of the books I have taken notes on at the Library page.

Quality Indicators

One of the questions I get very often as a quality coach is “how can we measure quality?” If I were to reply right off the bat, my answer would be “No clue!”, as I find it very hard to quantify something as abstract and ever-changing as quality. Nevertheless, I think the premise of the question is useful, as I interpret it to mean “how do we know we are doing a good job?”

Over time I have noticed that certain practices, processes, or habits seem to serve as gauges pointing to whether things are going right or wrong. I call them quality indicators, and here are some that I find useful.

Number of reported bugs that are not “real” bugs

Some bugs reported by customers are as “real” as they can get: orders gone missing, items deleted from their lists, the system down, users unable to log in, and so on. Once they are reported, everybody agrees that they need to be fixed. But there are also those bugs that start the “bug or feature?” debate. These are the bugs that users report just because they didn’t understand how to use your software. These “fake” bugs are a good indication that you are implementing something different from what your users expect. They hint at problems in user experience or help documentation. If you observe a surge in such bugs, analysing them and finding patterns in their content might help reduce them significantly.
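As a rough sketch of how you might track this indicator, suppose each closed bug report carries a triage label (the labels and data below are entirely made up for illustration); the share of reports that turn out not to be defects is then simple to compute:

```python
from collections import Counter

# Hypothetical triage labels attached to each closed bug report.
reports = [
    {"id": 101, "label": "defect"},         # a "real" bug
    {"id": 102, "label": "by-design"},      # user expected different behaviour
    {"id": 103, "label": "documentation"},  # help docs were unclear
    {"id": 104, "label": "defect"},
]

# Here, anything triaged as "by-design" or "documentation" counts as a "fake" bug.
FAKE_LABELS = {"by-design", "documentation"}

counts = Counter(r["label"] for r in reports)
fake_ratio = sum(counts[label] for label in FAKE_LABELS) / len(reports)
print(f"fake-bug ratio: {fake_ratio:.0%}")  # prints "fake-bug ratio: 50%"
```

Plotting this ratio over time, or grouping the “fake” reports by feature area, is one way to spot the patterns mentioned above.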

Time interval between customer reporting bugs and developer committing a fix

Looking at the lifecycle of a reported bug can be an interesting exercise in software development, as well as a beneficial one. Think of all the ways a customer can report an issue. Are issues reported through one channel fixed faster than those reported through another? If so, why? How long does it take to decide that a change is actually required to address the issue? Once a change is decided on, how easy is it to assign it to anyone to fix, or is there a need to find the right set of people to do so? How much time does it take to identify them? Even if you are doing continuous deployment, you might still lose time figuring out how to get started on a solution, and that is something to keep an eye on.
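One simple way to monitor this interval, assuming you can extract a report timestamp and a fix-commit timestamp for each resolved bug (the data below is invented for illustration), is to look at the median gap between the two:

```python
from datetime import datetime
from statistics import median

# Hypothetical (reported_at, fix_committed_at) pairs for resolved bugs.
bugs = [
    (datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 1, 17, 30)),
    (datetime(2021, 3, 2, 10, 0), datetime(2021, 3, 4, 10, 0)),
    (datetime(2021, 3, 5, 8, 0), datetime(2021, 3, 5, 9, 0)),
]

# Hours from the customer report to the commit that fixes the issue.
intervals = [(fixed - reported).total_seconds() / 3600 for reported, fixed in bugs]

# prints "median report-to-fix interval: 8.5 hours"
print(f"median report-to-fix interval: {median(intervals):.1f} hours")
```

Splitting the same calculation by reporting channel would answer the question of whether one channel gets faster fixes than another.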

Time it takes to fix automated tests

You solved the customer issue in 5 minutes, ran unit and component integration tests successfully, and pushed your changes through the deployment pipeline. You are royalty, and you deserve a cup of hot chocolate while you wait for the “Thank you!” call from the customer. But alas! Three integration tests are failing, so you do the honourable thing and start fixing them. Five hours later you are done! If you have ever had a similar experience, or heard someone on your team complain about one, it is a good idea to take the time to look at your automated tests and how easy they are to maintain.

If anybody asked me for solid evidence that these indicators indeed improve the quality of a product, I would unfortunately have none to offer. Nevertheless, the time interval between a customer reporting a bug and a developer committing a fix can signal changes in your lead time. The number of “fake” bugs and the time it takes to fix automated tests may have an impact on the measures of software delivery performance described in Accelerate: Building and Scaling High Performing Technology Organizations by Forsgren, Humble and Kim.

Measuring quality is hard, if not impossible, but identifying and monitoring areas that can be improved is at least a good start.