
Breaking Business Outcomes down into Product Outcomes with Ant Murphy

Steve Klein

Doing Continuous Discovery and using Opportunity-solution trees (OSTs) are where most teams start their journey to becoming Outcomes-focused.

Unfortunately, many teams try to use OSTs with a vague, lagging "Business Outcome" like "Increase MRR" or "Get to PMF" as the Outcome at the top of the tree.

This is a recipe for disappointment.

To use opportunity-solution trees effectively, you must break those "Business Outcomes" down into Product Outcomes. But this process can be confusing and difficult!

I sat down with Ant Murphy (coach to PMs at companies like Atlassian, Miro, Microsoft, StackOverflow) to talk about how to break Business Outcomes down into Product Outcomes.

The Problem With Starting With "Feature Ideas"

Steve Klein

This whole thing started for me when I saw a video that you had put out on your YouTube channel about breaking down product outcomes. It resonated so much with how we think about this whole process at Vistaly.

I don't want to spend a ton of time on this because I think we're probably preaching to the choir here a little bit, but you started that video by talking about how product teams often start the whole process with ideas for features and some of the downsides of that.

Can you quickly rehash that and set the table here for us a little bit?

Ant Murphy

Yeah, I think the premise of the video is something I've talked about a lot and written articles on. We break things down and work in an iterative way, which is great because agile is pretty dominant. But the problem is that we're often focused on breaking down the solution, on breaking down a really big solution.

That's a good thing to do, but what we neglect, and what I don't see happening often enough, is breaking the outcome or the problem space down enough. If you think about it, it's an order of events, not really an either-or. People hear me talk about this and get it wrong; they say, "No, you have to break down solutions," and so on.

But what I'm saying is it's not an either-or. Before we take a big outcome and then create a big solution for that big outcome, which is what we often do, we should probably look at that problem or outcome and ask if there is a way to decompose this into something smaller and more incremental. We should apply that same incremental iterative idea and first principles to a problem and an outcome. Once we make that problem space small, the work we can do to influence it becomes really small, and we can break that work down as well.

Steve Klein

Yeah, I think a lot of times this happens with good intentions. These ideas are often born out of a customer problem the team hears about. But that problem, or even the intended outcome, gets forgotten and the focus shifts to the idea itself. This comes with all those cognitive biases people have. You get super far into researching it, maybe even building part of it, and you develop sunk costs. If the focus is the idea, you have the sunk cost fallacy and confirmation bias. You're going to cling to finishing it, no matter what.

Business Outcomes Are Hard To Work With

Steve Klein

I've talked to tons of PMs and heads of product at companies that sign up for Vistaly. We'll hop on a call so they can demo their workspace, walk me through their opportunity solution tree, and show me the outcomes they have at the top of it.

I'll see things like "increase MRR" or "get to product-market fit," and it's a huge problem. Talk to me about your perspective, especially for companies that are new to trying to be more outcomes-focused.

Ant Murphy

Yeah, I see this a lot. I think it's because we haven't built up that muscle, that habit of breaking down the problem space. We think, you know, we build an opportunity solution tree or OKRs, and we want to achieve something like increasing MRR, monthly recurring revenue. And that's true; that's probably the thing we actually want. But the challenge is that it's really big and hard for any one team to influence.

If you're a small startup, that's probably different. You can probably directly influence that. But most people I work with, and most of the people here, work in slightly larger organizations where you're one of many teams. If you think about something like revenue or recurring revenue, that's something that everybody in the organization influences to some degree. You're one of many teams influencing that, so there's a distance between you and what you can influence and that outcome. It becomes hard because you're like, "What am I doing to influence that?"

What I generally advise, and what I end up workshopping and helping with, is trying to break that down. How do we contextualize it? Revenue is something that maybe at an organizational level is the goal, but I need to contextualize that for my product area. What do I own? What do I have influence over? And what's my hypothesis that if I influence this, it's going to influence the revenue?

A simple example: let's say you own the top of the funnel, like the onboarding process. You could help increase conversion. Your hypothesis, which is probably a sound one (and you should do some discovery and research around it), is that if we reduce drop-off rates and increase conversion, we help increase revenue because we get more people through the funnel. That's what you can control. You can break that down further, but you see how we're starting to contextualize it. I've gone from recurring revenue to increased conversion or reduced drop-off rate. You can get more micro than that if you want to, but it's that process.
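
To make that contextualization concrete, here's a back-of-the-envelope sketch in Python; all of the numbers (signups, conversion rate, revenue per account) are hypothetical and only illustrate how a conversion lift could flow through to revenue.

```python
# Back-of-the-envelope: how a conversion-rate lift could flow through to revenue.
# All numbers are hypothetical.
monthly_signups = 5_000
trial_to_paid = 0.08          # current trial-to-paid conversion rate
avg_revenue_per_account = 50  # dollars per month

current_new_mrr = monthly_signups * trial_to_paid * avg_revenue_per_account
improved_new_mrr = monthly_signups * (trial_to_paid + 0.02) * avg_revenue_per_account

print(f"Current new MRR:            ${current_new_mrr:,.0f}")   # $20,000
print(f"With a +2pp conversion lift: ${improved_new_mrr:,.0f}")  # $25,000
# The team can't move MRR directly, but it can move the conversion rate it owns.
```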

Steve Klein

Yep, that makes sense. I think some of the bigger problems around focusing on those bigger business metrics are that they seem too lagging. It takes too long to see if the things you're working on are having an effect on the higher-level outcome you're looking to drive. As a product team, one of the things we need to develop is a fast iteration cycle. We need to quickly learn if something is helping or not. If it's not, we need to reorient and readjust. If that feedback cycle is too long, we're kind of kneecapped a little bit.

Ant Murphy

Yeah, and just to touch on that, Steve, you made me think of something really pertinent here. Not only is it lagging, but that's why your feedback cycles are absolutely important. If I want to rapidly experiment, I'm probably coming up with a lot of proxy metrics and doing experiments to see if they work or not. But I still might not see an impact on that ultimate outcome, or at least it's lagging.

The other big challenge, especially as things get bigger, is you might be one team out of many. Let's say we have five teams. This quarter, five teams did a bunch of stuff. At the end of the quarter, let's say revenue went up. Great, everyone celebrates. However, how do I know that it was the work I did that contributed to that?

How do I know that it wasn't just one team that smashed it out of the park and drove a 20 percent increase in our recurring revenue? The other four teams might actually have made revenue drop, but each drop was so minor, 2 to 3 percent, that the overall net at the end of the quarter is still 12 percent up. We're celebrating, but four out of five teams actually had a negative impact on revenue. We don't know that because we haven't broken these things down. We don't know where to double down. As a product manager, should I double down on this stuff or do these things next quarter? I don't know. I think I'm going to just assume it was successful and double down, but we might double down on the wrong thing, which is really dangerous.
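
To make the arithmetic in Ant's example concrete, here's a minimal sketch; the per-team numbers are hypothetical, chosen to match the 20 percent win and the small drops he describes.

```python
# Hypothetical per-team contributions to the quarter's revenue change, in percentage
# points: one team drives a big win, four teams pull revenue down slightly.
team_contributions = {
    "team_a": +20.0,
    "team_b": -2.0,
    "team_c": -2.0,
    "team_d": -2.0,
    "team_e": -2.0,
}

net_change = sum(team_contributions.values())
print(f"Net revenue change: {net_change:+.0f}%")  # +12%: looks like a great quarter

# Without per-team (or per-outcome) measurement, the +12% hides the fact that
# four out of five teams actually had a negative impact.
negative = [team for team, delta in team_contributions.items() if delta < 0]
print(f"Teams with negative impact: {len(negative)} of {len(team_contributions)}")
```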

Using KPI Trees To Visualize The Inputs Into Business Metrics

Steve Klein

Yep, I feel like your ability to drive these higher-level business outcomes is like one of those horrible multivariate math problems where there are so many factors. When you have a complex problem like that, the answer for any good problem solver is to break it down into its constituent parts. We've talked about KPI trees as a tool to help visualize the lower-level inputs into higher-level business metrics. Can you talk a little bit about your experience with that and how you've worked with clients around this?

Ant Murphy

Totally. For anyone unfamiliar with KPI trees, I'd encourage you to Google it and read more about it. The idea is essentially a tree of key performance indicators or metrics. It comes from pre-OKR times. You might have a more lagging or higher-level KPI, and then you start to build this tree out of the metrics that feed into or influence it. You draw those relationships, so you end up with this one metric at the top and then this big web underneath.

What you're trying to illustrate is a few things. One, you're showing that some metrics are more lagging than others. You're also trying to show your hypotheses about the causality or correlation between different levers inside the company. For example, revenue might break down into conversion, which might break down into click-through rate, which might break down into the experience at a certain point in the journey, and so on.

While you're documenting and visualizing your hypotheses and how these things interrelate, it's important to remember that just because you increase conversion doesn't mean you will increase revenue. We work in a complex system with complex problems, but we're trying to break it down into variables we can influence. KPI trees are just a tool to help you visualize this. I like any type of tree diagram because it's a good way to visualize complex relationships and how things play off each other. It's okay that one thing might influence many things; it's not a strict one-to-one mapping, it's closer to many-to-many.

Using KPI trees, you can see how revenue is up here, what influences revenue, and where you should focus. Some things that influence revenue might be fine and there might not be anything we can do about them, but it helps us focus on what we can influence.
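
As a rough illustration (not a prescribed format), here's a minimal sketch of a KPI tree represented as a simple data structure; the metric names and parent/child relationships are hypothetical.

```python
# A minimal sketch of a KPI tree as a nested structure. Each edge encodes a
# hypothesis ("we believe this input influences the metric above it"), not a
# proven causal link.
from dataclasses import dataclass, field

@dataclass
class KpiNode:
    name: str
    children: list["KpiNode"] = field(default_factory=list)

kpi_tree = KpiNode("Monthly recurring revenue", [
    KpiNode("New revenue", [
        KpiNode("Trial-to-paid conversion rate", [
            KpiNode("Onboarding completion rate"),
            KpiNode("Time to first moment of value"),
        ]),
    ]),
    KpiNode("Retained revenue", [
        KpiNode("Churn rate"),
        KpiNode("Expansion / plan upgrades"),
    ]),
])

def print_tree(node: KpiNode, depth: int = 0) -> None:
    """Print the tree with indentation: lagging metrics sit above their inputs."""
    print("  " * depth + node.name)
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(kpi_tree)
```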

Steve Klein

Yeah, one thing that's interesting about KPI trees is that at the top, the way they break down is mostly math. A typical SaaS company has recurring revenue at the top, and one way we often talk about breaking it down is into new revenue plus retained revenue: just math. But at some point, you need to start making the jump to the customer actions or behaviors that influence these business metrics.
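
To make the "just math" part concrete, here's a small sketch of that kind of decomposition using the standard MRR movement arithmetic; all of the component values are made up.

```python
# Hypothetical MRR movement for one month (all figures made up).
starting_mrr = 100_000
new_mrr = 8_000          # new customers
expansion_mrr = 3_000    # upgrades / seat growth
contraction_mrr = 1_500  # downgrades
churned_mrr = 4_000      # cancellations

retained_mrr = starting_mrr - contraction_mrr - churned_mrr
ending_mrr = retained_mrr + expansion_mrr + new_mrr

print(f"Retained MRR: ${retained_mrr:,}")  # $94,500
print(f"Ending MRR:   ${ending_mrr:,}")    # $105,500
# The decomposition itself is pure arithmetic; the hard part is linking each
# component to the customer behaviors further down the tree.
```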

If you think about it intuitively, people don't wake up and think, "I'm going to go slap down a credit card on a B2B SaaS product today." People pay for products they get value out of. Down the tree, you can highlight or detail those moments of value that lead customers to convert from a trial to paid, be retained, or choose a higher plan. It helps you visualize the actions lower in the tree that ultimately ladder up to more business value.

Coming up with that set of things can be daunting. We've talked about user journey mapping as a tool to help get a sense of the actions and moments of value that customers get. Talk to me a little bit about your experience doing that with customers.

Ant Murphy

Yeah, journey mapping is great for that. If you're struggling to decompose revenue and break it down for your area, you can map out the user journey for your area. Then look at the user journey and ask what points in this journey have some relationship with revenue. If you own the beginning of the funnel, the point where they convert and put their credit card down is a logical point.

If you backtrack from that, you start to see the events that happened before. These are potential things you could measure and look at. They are also potential problems because any impediments in that flow to get to that point are good indicators. It doesn't need to be quantitative data; it could be qualitative. Customers might be telling you that a part of the process sucks, or maybe you're using the product yourself and find something annoying or not working. These are good indicators.

Breaking the problem down, you might need to do a little more discovery to validate these things. If you solve this problem, you improve the steps leading up to that point, which should increase the number of people who put their credit card down, because you believe there's a correlation to revenue. User journey mapping is great; observing your customers or using the product yourself is great. Anything that maps out a series of steps is valuable.
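
Here's a minimal sketch of that backtracking expressed as a funnel; the step names and user counts are hypothetical and would normally come from your analytics tool.

```python
# Working backwards from the "credit card" conversion event: hypothetical step
# names and user counts for one month.
funnel_steps = [
    ("Signed up",              10_000),
    ("Completed onboarding",    6_200),
    ("Invited a teammate",      2_900),
    ("Hit the paywall",         1_400),
    ("Entered a credit card",     450),
]

# Step-over-step conversion highlights where the biggest drop-offs (and therefore
# the most promising opportunities) sit in the journey.
for (prev_name, prev_count), (name, count) in zip(funnel_steps, funnel_steps[1:]):
    rate = count / prev_count
    print(f"{prev_name} -> {name}: {rate:.0%} ({prev_count:,} -> {count:,})")
```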

The other thing is not getting too dogmatic about metrics and outcomes. If you don't know what it is or can't put a number on it, just write down what you think it is and what you think the relationship is. Even if it's just plain English, that's fine. We can come back later and look at how to measure that and what we really mean by that.

Steve Klein

Yep, I love that. One thing that's great about user journey mapping is that you can interview some of your best customers, those who signed up, onboarded well, invited colleagues, and were retained. You come up with this narrative of the steps along the way that led them to be a successful customer. Then you can start putting numbers around those milestones in the app.

You can get more clarity on what it means to do each of those things. This helps you define the product outcomes you want to focus on, leading to more business value for us.

Gaining Confidence In Your Product Outcomes

Steve Klein

Talking a bit more about doing this in a qualitative way, there are certainly more quantitative ways of thinking about this, like regression analysis. What experience do you have around that? Any tips or thoughts?

Ant Murphy

I think you should get a mix of qualitative and quantitative data. We talked about qualitative stuff, and I want to emphasize that qualitative data is often where we lean when we don't have access to much quantitative data. It's easier to start with narratives and then work from there to measure and turn them into something like a KPI tree.

The first step is to see what data you have available. This depends on your maturity and how much instrumentation you have on your product. Some teams have a lot of rich quantitative data on their customers, making it easier for them. It becomes more of a data analysis activity: extract the numbers, look at them, and draw relationships between them.

If you have the data, the first port of call is to go through those numbers, get into your analytics software, and pull things out. If things are missing or you have other questions, tools like regression analysis can help. I don't have a huge amount of experience with regression analysis, but I've done simple linear regression analysis in the past. The idea is to get more data to inform the relationship between certain actions and outcomes, taking a more quantitative approach.

You can run surveys, do feature audits, analyze the most-used features, run jobs-to-be-done surveys, and use Kano analysis. These tools help you get quantitative data on the relationship between things in your product. You're trying to understand if certain actions cause churn, drive purchases, or whatever outcome you're interested in. These tools are great to help you build this relationship out. The question becomes what data you have available, what questions you're trying to answer, and doing it with intention.

Steve Klein

Yep, totally agree. The important thing is having that narrative around the points or moments of value driving customers to exhibit these business outcomes. Over time, you can get more scientific or rigorous around this, learning which actions done how many times in what window lead to higher-level business outcomes.

You don't have to start there, but it's a great place to get to.

Ant Murphy

Just to make it real, let's take the likelihood of churning. We're not even getting into churn yet, just early indications that people are likely to churn versus stick around. You would run regression analysis to look at the types of events and actions people are doing who stick around versus those who churn.

For example, if someone hasn't watched Netflix in a couple of weeks, the likelihood of them unsubscribing increases. Knowing these things is important because if our outcome is to reduce churn or increase revenue, understanding the correlation helps. You might run an experiment to see if sending a notification or email after a week of inactivity makes a difference.
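
As a rough sketch of the kind of analysis Ant is describing, here's a minimal example using scikit-learn on synthetic data; the feature names, data, and coefficients are all hypothetical, and a real analysis would use your own activity and churn tables.

```python
# Minimal churn-signal sketch: does days-since-last-activity predict churn?
# The data here is synthetic; in practice you'd pull per-user activity from
# your analytics warehouse.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_users = 1_000

days_inactive = rng.integers(0, 30, size=n_users)
sessions_last_month = rng.poisson(8, size=n_users)

# Synthetic ground truth: churn probability rises with inactivity, falls with usage.
churn_prob = 1 / (1 + np.exp(-(0.15 * days_inactive - 0.3 * sessions_last_month)))
churned = rng.random(n_users) < churn_prob

X = np.column_stack([days_inactive, sessions_last_month])
model = LogisticRegression().fit(X, churned)

print("Coefficients (days_inactive, sessions_last_month):", model.coef_[0])
# A clearly positive coefficient on days_inactive supports the hypothesis that
# inactivity is a leading indicator of churn: a candidate proxy metric to move,
# for example with the re-engagement email experiment mentioned above.
```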

Q&A Session Begins

Steve Klein

We'd love to talk a ton more about all this, but let's get to questions. Matt, our producer, is going to throw some questions on the screen. I'll read them out, and Ant can take a shot at them.

Understanding KPIs and OKRs

Steve Klein

Can you touch on persistent product KPIs versus time-based outcomes or OKRs?

Ant Murphy

Yeah, this is a super common question because we're always grappling with this. Here's how I frame it. KPIs are key performance indicators and can be anything. An OKR could be a KPI.

I think about it this way: imagine being a pilot on a plane. There's a whole bunch of dials and metrics in front of you. If you're only looking at one dial, like decreasing altitude, but you don't see the dial next to it that says your altitude is only 200 feet off the ground, you're in trouble. The danger of only having OKRs and not persistent KPIs is that you're only looking at one dial.

KPIs are the health of your product, like your blood pressure or heart rate. They include churn, revenue, and other health variables contextual to your product. Persistent KPIs should always be there to monitor the health of your product. OKRs, on the other hand, are the things you want to influence or change. A KPI can become an OKR if it becomes the biggest problem you need to solve, like fixing churn.

Steve Klein

Yeah, my take is that KPIs are the things we persistently measure. When you have those KPIs in a KPI tree, it's easy to visualize across your whole business and identify the biggest bottlenecks to growth. OKRs can feel like you're defining outcomes ahead of time, which might lead to pressure to stick to them even if they're not the most impactful. But for bigger companies, they can help with coordination.

Getting Executive Leadership to Agree on Key Outcomes

Steve Klein

What techniques do you suggest for getting executive leadership to agree on key outcomes?

Ant Murphy

This is more of a stakeholder management thing. First, are we clear on our strategy? Often, the disagreement on outcomes is actually a misalignment on strategy. If we're not clear on strategy, we need to create that clarity and get alignment.

If you're not empowered to own the strategy and you're trying to set outcomes without it, it becomes more about alignment around those outcomes. This involves understanding where the misalignment is. Spend time with stakeholders one-on-one, be curious, and understand why they think the outcome should be different.

A great technique is to work out all the assumptions about the different outcomes. Then determine which assumptions need more information to make a decision. Go get the data, do research, and let the data speak for itself.

Steve Klein

Yeah, a KPI tree is a great tool for this. You can map out how the company grows from the top down to the different things people do in the app. Ask your boss to tell you where this is wrong or how they see it differently. This way, you can all get on the same page and have a productive discussion about where to focus.

Ant Murphy

Exactly. Something visual and on paper is powerful. It gets everyone on the same page. People love to disagree, so if you put something down and ask them to tell you where it's wrong, you'll get valuable data and insights.

Defining Outcomes for Feature Teams

Steve Klein

When feature teams are tasked with outputs, what steps can they take to start defining outcomes?

Ant Murphy

I have a post on this, which we can include in the notes. Working backwards is what we always try to do, but it doesn't work with everyone, especially those used to thinking in solutions. Instead, try working forwards.

Ask, "Let's imagine we've done this and it's live. What is true now? What has changed for the customer and the business? What can customers now do that they couldn't before? What has happened to the business?" They will list off their assumptions and beliefs about why this idea is good.

You can then interrogate these a bit and get more data, effectively working backwards. In the post, I break down how I workshop this with a product manager, going from solution to outcomes and identifying assumptions to test.

Steve Klein

I love that. It's very "yes and." It's a way to not be confrontational but still get people thinking in outcomes. Once you agree on the outcome, you can ask if this is the best solution for getting there. If not, you've dodged a bullet.

Ant Murphy

Exactly. You'll uncover a whole heap of assumptions. Write those down and identify which ones are risky. You might need to validate these before diving into the solution.

Balancing Short-term Wins and Long-term Strategic Goals

Steve Klein

What should I consider to balance short-term wins and long-term strategic goals?

Ant Murphy

From first principles, we know we should balance these things. The balance between short-term and long-term is contextual. It's not about a percentage split but about considering your context.

Think big, work small. Always have a relationship and alignment with the bigger thing, whether it's a strategy or vision. If you're doing something small, have clarity on how it ties to the bigger picture. This way, even when you're chasing short-term wins, they are contributing to the long-term strategy.

Steve Klein

Yep, that makes sense. It depends on your stage. When you're finding product-market fit, you might do slightly larger, more exploratory things. As your company grows, a higher percentage of your time is spent optimizing.

