My First Change Experiment: Bumps In The Road

“Everyone has a plan until they get punched in the mouth.” — Mike Tyson

Love that quote from the greatest boxer of my childhood. Tyson punched the first 19 plans he faced in the mouth, 12 in the first round. Those opponents were left wondering what happened to all the months of training and preparation. In the end, he left 50 men in his wake, shaking their heads at what had just happened.

I’m just going to let that hang here while you recall feeling like that with your own teams.

While it would be easy to dismiss the analogy as an over-used sports metaphor, let’s stay here for a moment. Plans are made with the best intentions. When tasks are estimated, ideal hours are used. User journeys tend to cover the happy paths first. Negative test cases aren’t the first ones written. And those are just the first examples I thought of. To a degree, we’re all masters of good intentions.

We plan very optimistically because we don’t expect to be punched in the mouth. It gets more frustrating the older we get, because that’s almost never how it goes.

When we last chatted, I laid the groundwork for change experiments through a case study I presented at this year’s Agile Alliance conference on the theme of what happens when we move way too fast. It was an amazing learning opportunity for this coach, and I use the lessons with every new team I encounter today.

We didn’t spend too much time planning. We just identified some potential outcomes and came up with a hypothesis for reaching them. True to the scientific method, we also crafted our own way to measure progress along the way.

Easy peasy, right?

It’s not you…actually it is you.

When you’re first entering a team’s space, you have to know it’s pretty awkward for them. This outsider is “here to help,” and that’s hardly a comfort to many. I was fortunate to be an internal employee, so most knew me from around the office already. Ratchet the discomfort up a few notches if the team is looking at a complete stranger.

And then there are the feedback loops. I wasn’t going to just stand there silently all day, right?

Sometimes it got in their way, which was an important learning for me. Other times I was a safe place for team members to feel heard. Cards continued to move, albeit in haphazard fashion, so we were succeeding in spite of ourselves. The whole way, everyone is waiting for you to tell them what’s wrong and how they’ll turn the ship around.

As much as you can prepare for your plan to fall apart, there’s no way to be ready for 50 people looking at you like you’re an idiot.

It can be a challenge to keep the troops focused on the goal when things go wrong. Every fumbled team event can become an excuse to scrap the entire thing. The best part of those early iterations was the teams’ enthusiasm for retrospectives. We all agreed that they were our greatest opportunity to change.

Where does this survey come into play?

My immediate desire was to delve into the long-form data the survey yielded, because there would be evidence of our troubles to dissect. I didn’t think the numbers would really help in any way other than to assist teams in quantifying how they felt and facilitate better answers in the open text fields.

I would later learn that I was wrong.

After the first five sprints, I decided to average the numbers. My survey partner encouraged me to compare the averages to each other, and not view them in a vacuum. This was because when people answer these types of surveys, they answer all the questions in relation to each other.
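As a sketch of what that comparison looks like, here is a minimal Python example with made-up question names and scores (none of these numbers come from the actual survey): average each question across sprints, then read each average relative to the overall mean instead of in a vacuum.

```python
# Hypothetical survey data: one 1-5 rating per sprint for each question.
# Question names and scores are illustrative, not the original survey.
responses = {
    "overall_satisfaction":   [3, 2, 3, 2, 3],
    "quality_of_deliverable": [3, 3, 2, 3, 3],
    "team_collaboration":     [4, 4, 3, 4, 4],
}

# Average each question across the five sprints.
averages = {q: sum(scores) / len(scores) for q, scores in responses.items()}

# Respondents answer questions in relation to each other, so compare
# each question's average to the overall mean rather than judging it alone.
overall = sum(averages.values()) / len(averages)
relative = {q: round(avg - overall, 2) for q, avg in averages.items()}
```

With these sample numbers, satisfaction sits below the overall mean while collaboration sits above it, which is the kind of relative reading my survey partner was pushing for.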

The lowest average was in overall satisfaction, which wasn’t a surprise to me because you could see it on their faces. The second lowest number was quality of deliverable, which was a surprise to me because this company prided itself on amazing quality.

Remember that the data you collect can have several types of context. Don’t just use the numbers to tell whatever story you want. Let it drive more conversation and new experiments.

So, what were we doing wrong?

Gathering the leads together again, I presented some of the survey data along with anecdotal evidence of the dip in quality. It turned out the QA team was doing too much context switching between apps on a daily basis. The team also had too much work in progress because of a desire to speed things up.

All the work was in progress at once, making daily work feel more like spinning plates. We’ve seen how that ends: broken plates.

Everyone agreed on two pivots. The first was to either staff up the QA team to match the number of app releases in progress, or limit the number of apps in progress. That allowed each release to have a subject matter expert who could drill deep on quality.

The second involved having a definition of ready. All of the teams had a documented definition of done, but hadn’t done the same for ready. As a result, work was being pushed by clients and leads without the team having any ability to push back. Tasks would be added to the sprint that everyone knew wouldn’t be completed, simply because they happened to be up next. Asking all teams to adhere to a DoR meant that only work that was actually ready could be committed to.
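A definition-of-ready gate can be sketched as a simple filter. The criteria and backlog items below are hypothetical, not the ones those teams used; the point is only that an item missing any criterion never becomes a sprint candidate.

```python
# Hypothetical DoR criteria; real teams define their own.
DEFINITION_OF_READY = ("acceptance_criteria", "estimated", "dependencies_cleared")

def is_ready(item):
    """An item is ready only if every DoR criterion is satisfied."""
    return all(item.get(criterion, False) for criterion in DEFINITION_OF_READY)

# Illustrative backlog: the second item was pushed in without an estimate.
backlog = [
    {"name": "login page", "acceptance_criteria": True,
     "estimated": True, "dependencies_cleared": True},
    {"name": "push notifications", "acceptance_criteria": True,
     "estimated": False, "dependencies_cleared": True},
]

# Only ready work can be committed to the sprint.
sprint_candidates = [item["name"] for item in backlog if is_ready(item)]
```

The unestimated item stays in the backlog, which is exactly the push-back the teams had been missing.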

Did it work?

After five more sprints, I took a look at the numbers. With just those small tweaks, every metric rose! The two greatest jumps were in satisfaction and quality. We all did the happy dance and put together a deck for leadership. It wasn’t the end of our transformation, just the beginning of things getting better.
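A quick sketch of that before-and-after comparison, again with invented numbers standing in for the real averages: compute the jump per metric across the two five-sprint windows and rank them.

```python
# Hypothetical per-metric averages for the first five sprints vs. the next five.
before = {"satisfaction": 2.6, "quality": 2.8, "collaboration": 3.8}
after = {"satisfaction": 3.6, "quality": 3.7, "collaboration": 4.1}

# Jump per metric between the two windows.
jumps = {metric: round(after[metric] - before[metric], 2) for metric in before}

# Every metric rose; the two biggest jumps tell the headline story.
assert all(delta > 0 for delta in jumps.values())
biggest = sorted(jumps, key=jumps.get, reverse=True)[:2]
```

With these sample figures, satisfaction and quality come out as the two biggest jumps, mirroring what we saw in the actual experiment.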

Here’s the final part, where I review what this entire process taught us about change.

