How to Build a 360 Degree View of Your Product Market Fit

Understanding how well your product satisfies market demand is probably the single most important job for product managers, CTOs, and software executives. Every product manager and company talks about being “customer centric”. But what exactly do you do? Which practices produce actionable feedback that makes products better? Too many product teams rely on intuition, “founder brilliance”, and sales performance, only to discover later that they have missed the mark. Sales data is the ultimate measure of product market fit, but it is a lagging indicator. Very lagging, actually. The 360 Degree Market Fit Framework is designed to help you build a proactive, comprehensive suite of leading indicators that let you tune your product to drive sales. The framework is based on four dimensions of customer feedback.

Qualitative and Quantitative: This dimension looks for both measurable indicators and in-depth narratives. Quantitative feedback tells you that 5 out of 10 users never use “feature xyz”. Qualitative feedback tells you why. Without the former you never think to ask why; without the latter you have data you can’t act on.

Subjective and Objective: This dimension captures both what users say they believe (subjective) and what they actually do (objective).

Human-Computer Interaction (HCI) and Market Fit: A failure on either side of this dimension sinks the product, which is why it is critical to deliberately pull feedback from both. HCI encompasses every aspect of the user’s experience while using the software: is it fast, easy, enjoyable, and intuitive? On the other side of the equation is the degree to which your solution scratches the intended itch. Whether your product is candy, a vitamin, or a painkiller, this side of the dimension tells you whether it is effective.

Decision Maker and User: The final dimension applies to business-to-business (B2B) software. B2B product teams need feedback from both daily users and the executives who write the checks. In business-to-consumer (B2C) software this dimension collapses, because the user is the decision maker.

Of the hundreds of methods, tools, and techniques available, we’ve focused on five tools that together hit every dimension of product feedback. No single tool or technique can do the job alone. Every product team should implement and iterate on each of these tools to build a 360 degree perspective of their customers’ experience and satisfaction. The full picture painted by this suite of tools ensures you are continuously learning and improving your product toward product market fit.

#1 User Experience (UX) Research

UX research projects are designed to get user feedback on specific features, workflows, and designs for distinct jobs-to-be-done (JTBD) in the product. These are most often used early in the development cycle but are also effective post-release. It is important that UX research projects are conducted with actual users of the product, not their managers or executives. There are numerous articles to guide you on the fundamentals of UX research, but the foundation is goal-oriented scenarios. Users are hiring your software to do a job. In our experience, leveraging JTBD scenarios as the basis for research works best. For example: “You received this email. Your goal is to pay the attached invoice.” Then let them run. We always ask users to “say what they are thinking” as they navigate, and we always record the sessions. As a bonus, UX research is highly effective when run remotely over screen sharing and video. Although you don’t need third-party tools to conduct UX research, we’ve listed some below that can help automate the process.

Strengths

  • The single best tool for improving usability. Nothing is quite as humbling as sitting quietly and watching a user not click on the call-to-action button that you think is so perfectly placed and obvious.
  • Good for culture. It is highly inspiring to hear and see direct customer feedback.

Weaknesses

  • Getting users to participate can be tough. It often takes 5–10 calls to get one yes.
  • Time consuming to conduct
  • Possibility of poor interpretation of interviews. The “why” behind what the users did and didn’t do is subjective. Having 3 or more team members participate in the sessions helps.
  • Tough to scale

Signs When It is Working

  • Product teams build interviews into their feature release plans
  • Fresh customer quotes inform product engineering
  • Cross-functional teams participate in the discovery process

Signs When It is Not Working

  • Insights informing product design are stale (older than six months)
  • Testing is used on high profile projects and skipped everywhere else
  • Testing is done but changes are overruled or not acted on

Effort: 8/10

Tool Options: Userbrain, ProductBoard, Dscout

#2 Brand Promise Interviews

Brand Promise interviews are 1:1 interviews held with clients after they have been using your product for a period of time. These are 20–40 minute sessions with open-ended questions designed to understand whether you are living up to what your customers expect your product to deliver. Where UX research projects focus exclusively on the user’s interaction with the product, brand promise interviews sit a level higher: are you meeting the user’s expectation of value? “Is our product improving your life/business in the way you expected?” Brand promise interviews are geared toward the decision makers in B2B software. You want both in the interview, but if you can only get one, get the decision maker.

If you don’t yet have a clearly defined brand promise, creating one and getting internal alignment on it is an inspiring and focusing exercise that every company should do. Your brand promise should be differentiated, relevant, credible, and irreproducible. Examples:

  • Intuit promises “More money, more time, more confidence.”
  • TreviPay promises “We help your business grow.”

We’ve found that building our question template for the interview around the customer journey is highly effective.

Using the Customer Journey to Guide Interviews

Strengths

  • Raises the conversation from tactics to purpose. You can have the perfect UX, but if the product doesn’t deliver on why the executive or user wrote you a check, it doesn’t matter.
  • Unlike surveys, where you tend to hear from the extremes (users who love you and users who have something to complain about), you can target the “silent majority”.
  • Unexpected feedback. We rarely conduct an interview without discovering something that surprises us.
  • No tools required

Weaknesses

  • Very time consuming
  • The interview team needs to be skilled and trained not to justify or explain away the feedback they hear.
  • Talking to the “leaders” when the “soldiers” actually use the product. This is both a weakness and a strength. Our most successful brand promise interviews have included a mix of participants from the client’s team.

Signs When Used Well

  • All functional areas in your company know the brand promise and are actively working to raise the bar
  • Analysis informs strategy and investment
  • Teams take pride in the promise

Signs When Not Used Well

  • Gap analysis of promise deficiencies stays in the report and is never actioned
  • Product roadmaps do not reflect the promise

Effort: 8/10

#3 In-app Surveys and Feedback Forms

In-app surveys are extremely popular and useful, but they can be difficult to get right. There are dozens of tools and hundreds of articles published on in-app survey best practices. Which users to target, when to target them, visual design, and question design all affect participation and the quality of the feedback. Surveys are much easier to scale than interviews and produce larger volumes of data. Every software product should have an in-app survey strategy.
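
Targeting logic doesn’t have to be elaborate to be effective. The TypeScript sketch below illustrates one common rule: only prompt users who have had enough time with the product to form an opinion, and never re-prompt anyone surveyed recently. The field names and thresholds are assumptions for illustration, not recommendations from any specific tool.

```typescript
// Illustrative survey-targeting rule. Field names and thresholds are
// hypothetical; adjust them to your own product and survey cadence.
type AppUser = {
  id: string;
  signupDate: Date;
  completedOnboarding: boolean;
  lastSurveyedAt?: Date;
};

function shouldShowSurvey(user: AppUser, now: Date = new Date()): boolean {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysSinceSignup = (now.getTime() - user.signupDate.getTime()) / msPerDay;
  const daysSinceLastSurvey = user.lastSurveyedAt
    ? (now.getTime() - user.lastSurveyedAt.getTime()) / msPerDay
    : Infinity;

  // Only survey users with at least 30 days of tenure, a finished onboarding,
  // and no survey prompt in the last 90 days.
  return user.completedOnboarding && daysSinceSignup >= 30 && daysSinceLastSurvey >= 90;
}
```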

Strengths

  • Quantitative feedback can be easily evaluated and trended
  • Minimal time investment for users and you
  • A lot of tools and best practices available

Weaknesses

  • Feedback lacks depth and context
  • Feedback can be skewed toward the extremes
  • Feedback reflects what users say they feel rather than what they actually do

Signs When Tool is Used Well

  • Surveys are targeted at specific users or run as timed campaigns focused on a specific stage of the journey
  • Results are reviewed consistently and categorized
  • Results are broadly published and discussed in product steering meetings

Signs When Not Used Well

  • Generic questions
  • Dull results
  • No one can remember when results were last used to influence product roadmap

Effort: 4/10

Options: Hotjar, Foresee, Typeform, UserReport, Appcues, Pendo, Qualtrics

#4 Application Analytics

Collecting, analyzing, and reporting on usage data is critical to understanding how users actually interact with your software. Knowing how users navigate your application helps you identify usability issues, unused features, and the most-used features. Leading tools in this space incorporate artificial intelligence (AI) and machine learning (ML) to derive insights us mortals would otherwise miss. The most valuable tools will also give you click-path analysis and heatmaps.
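
Under the hood, most of these tools work by emitting a small event payload each time a user completes a meaningful action. The TypeScript sketch below shows the idea with a hand-rolled tracker posting to a hypothetical endpoint; in practice you would usually call your vendor’s SDK (for example, a Segment-style track() method) instead.

```typescript
// Minimal usage-event tracker. The endpoint, event names, and properties
// are placeholders; swap in your analytics vendor's SDK where appropriate.
type UsageEvent = {
  userId: string;
  event: string;                        // e.g. "invoice_paid"
  properties?: Record<string, unknown>;
  timestamp: string;                    // ISO 8601
};

async function trackEvent(
  userId: string,
  event: string,
  properties?: Record<string, unknown>
): Promise<void> {
  const payload: UsageEvent = {
    userId,
    event,
    properties,
    timestamp: new Date().toISOString(),
  };

  // Hypothetical collection endpoint; replace with your analytics pipeline.
  await fetch("https://analytics.example.com/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Fire the event at the moment a key job-to-be-done completes.
trackEvent("user-123", "invoice_paid", { amount: 250, currency: "USD" });
```

Tracking events per job-to-be-done rather than per page view is what makes it possible to match key features to outcome measures later.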

Strengths

  • Usage data doesn’t lie
  • Easy to implement
  • Options to fit every org size and budget

Weaknesses

  • Analysis paralysis
  • While easy to implement, deriving insights is tough without good tooling

Signs When Tool is Used Well

  • Key features are matched to outcome measures
  • Product teams review data as part of weekly routine

Signs When Not Used Well

  • Data is only looked at when there is a problem or it’s time to create the quarterly report

Effort: 6/10

Options: Google Analytics, Twilio Segment, Pendo, Hotjar, Clicky, Amplitude, Mixpanel

#5 Net Promoter Scores (NPS)

While technically a subset of surveys, we’ve included NPS as a distinct tool because of its wide usage and specific implementation details. NPS became a staple of customer satisfaction measurement because it gauges the intensity of satisfaction. This ability to tell whether your product is creating organic growth is probably the most powerful element of the NPS metric. Are users spreading the word on their own? You can find many articles and tools on how to properly administer and interpret NPS results.
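
For reference, the arithmetic behind the score is simple: respondents answering 9 or 10 are promoters, 0 through 6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors, giving a value from -100 to +100. A minimal TypeScript sketch:

```typescript
// Compute NPS from 0-10 "How likely are you to recommend us?" responses.
// Promoters score 9-10, passives 7-8, detractors 0-6.
function netPromoterScore(responses: number[]): number {
  if (responses.length === 0) return 0;
  const promoters = responses.filter((score) => score >= 9).length;
  const detractors = responses.filter((score) => score <= 6).length;
  return Math.round(((promoters - detractors) / responses.length) * 100);
}

// Example: 4 promoters, 3 passives, 3 detractors out of 10 responses -> NPS of 10.
console.log(netPromoterScore([10, 9, 9, 10, 8, 7, 7, 6, 4, 2]));
```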

Strengths

  • Tested, proven, trusted. Studies have shown a correlation between NPS and revenue
  • Many implementation options
  • High participation rate due to single question

Weaknesses

  • By itself NPS lacks context. Teams have to speculate about what is driving the scores.
  • All of the weaknesses inherent in surveys

Signs When Tool is Used Well

  • Used consistently and trended
  • Data is combined with insights from other tools to give context
  • Broad understanding in the organization about the meaning of NPS

Signs When Not Used Well

  • Executives devising plans to improve NPS using intuition
  • Scores are so bad (or so good) that they are not used to inform strategy
  • Scores are lip service for recruiting or investors but do not inform strategy

Effort: 4/10

Options: Wootric, Delighted, Qualtrics, SurveyMonkey, InMoment

Maximizing Learning

The final part of implementing the framework is integrating the feedback across all of the tools. The outcome of a UX research project leads to the creation of a specific application usage dashboard. Application usage data leads to specific questions to ask in the brand promise interviews. NPS results lead to a targeted survey, which drives a new UX research project. Making it a mission of your product management team to have a 360 degree view of product market fit will speed learning and give you a clear competitive advantage.

Most product teams will already have some of these tools in place. But most will also recognize themselves in some of the “signs when not used well”, and probably aren’t using all five tools in an integrated way. To get started with the framework, start closing those gaps. If you haven’t done a deliberate brand promise interview, I highly recommend creating a set of questions based on your brand promise and getting one set up as fast as you can.

Special thanks to the following contributors: Kirby Montgomery, Danny Cates, Teresa Cain, Tapas Samantaray, Steffan Karagianis, John Kille.

Originally posted on Medium.
