You want to gather user feedback to improve your product. Great, you’ve acknowledged how important that is, and taken the first step!
But out of all your users, who should you listen to?
It would make sense to pick one problem (like making a new feature more intuitive) and stick to the most relevant cohort (like power users).
This approach seems efficient, but it can easily leave you stuck with a single perspective.
Instead, be open to interviewing a diverse set of users, because then you're more likely to learn something new and uncover blind spots you didn't know you had.
Which users, then, should be considered? It varies case by case, but you can learn from real-life examples. First, we'll list cohorts that are commonly used and show each one's unique value. Then, we'll share real-life examples of cohorts we personally picked for our user interviews and what we learned from each.
Commonly Chosen Cohorts and Their Value
While there is no one-size-fits-all answer to who you should interview, you can get ideas from common patterns. Below are four common cohorts that companies use for feedback.
1) Recently Churned Users: Spotting Roadblocks
Recently churned users have stopped using your product, but their experience is still fresh in their minds. They might have stopped engaging with your product because of usability issues, unmet expectations, or other reasons.
What they offer: Honest feedback on what didn’t work for them. This group of people can show you the barriers that prevented them from continuing to use your product, allowing you to focus on removing those obstacles for others in the future.
2) Power Users: In-Depth Insights
Power users are highly engaged individuals who interact with most, if not all, of your product’s features on a regular basis (weekly or daily). They also have the deepest understanding of the ins and outs of your product.
What they offer: Insight into your product’s strengths and its most valuable features. They help you identify the parts of your product that shouldn’t be changed drastically, ensuring you don’t alienate loyal customers. Since power users often explore more specialized functionalities than casual users, they’re also a key source of feedback on niche features.
3) New Users: Evaluating the Effectiveness of Onboarding
New users are individuals who have recently joined and are in the early stages of exploring your product. This cohort has just completed onboarding, giving them fresh perspectives on initial interactions and any friction they experienced. They’re not necessarily yet familiar with everything your product has to offer.
What they offer: Immediate, candid feedback on the onboarding process. They can identify areas where the user journey feels confusing or unintuitive, allowing you to improve the first-time user experience and increase early retention.
4) Users Who Churned but Came Back
Users who disengaged for a while but later returned to your product bring unique insights. These users may have initially stopped using the product due to unmet needs, usability issues, or a lack of perceived value but decided to return after a certain period. For example, someone might have left your product after the free trial ended but returned months later due to an improved feature or an updated pricing plan that better suited their needs.
What they offer: Valuable insights into both past friction points and current strengths. Their feedback sheds light on what previously led them to leave, while also highlighting the features or improvements that motivated them to return. This dual perspective allows you to address weaknesses that may cause churn and strengthen the aspects that reinforce your product’s core value proposition.
How We Did It: Real-Life Examples of Lessons Learned from Different Cohorts
In the summer of 2024, we changed Wudpecker (our AI Notetaker SaaS product) quite drastically. So, we needed lots of feedback for different aspects of our tool.
Below are the cohorts we ended up getting the most value from. For context, you'll also see how we defined each cohort. Note that the right cohort definitions for your situation might look very different.
Recently Churned Users
Our definition of this cohort: Users who had tried our product for at least two weeks but stopped using Wudpecker entirely for at least one week (and at maximum, a few weeks would’ve passed since the churn).
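A definition like this can be turned into an automated filter over your usage data. As a rough sketch (all names and thresholds here are hypothetical, not Wudpecker's actual code), a function like the following could flag this cohort from each user's list of active dates:

```python
from datetime import date, timedelta

def is_recently_churned(activity_dates, today, min_usage_days=14,
                        min_gap_days=7, max_gap_days=28):
    """Hypothetical filter for the 'recently churned' cohort.

    activity_dates: sorted list of dates on which the user was active.
    Qualifies if the user was active over a span of at least
    min_usage_days (tried the product for ~two weeks), then went fully
    inactive for between min_gap_days and max_gap_days (churned at
    least a week ago, but no more than a few weeks ago).
    """
    if not activity_dates:
        return False
    first, last = activity_dates[0], activity_dates[-1]
    used_long_enough = (last - first).days >= min_usage_days
    gap_days = (today - last).days
    return used_long_enough and min_gap_days <= gap_days <= max_gap_days

# Example: a user active for 20 consecutive days, then silent for 12 days
days = [date(2024, 5, 1) + timedelta(d) for d in range(20)]
print(is_recently_churned(days, today=date(2024, 6, 1)))
```

The `max_gap_days` cutoff matters: it keeps the experience fresh in the user's mind, which is exactly why this cohort gives such concrete feedback.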
This group helped us uncover both anticipated and unexpected pain points. Sometimes, we had hunches about certain feedback, but receiving concrete examples of how users experienced specific issues helped us gauge how urgently we should act on them.
At other times, churned users mentioned specific reasons for leaving that we hadn’t considered, like wanting certain integrations or workflows that our product didn’t fully support.
Lesson Learned: Listening to recently churned users gave us both expected and surprising insights. We learned to differentiate between feedback to act on and feedback to deprioritize. For example, certain features requested by churned users might only cater to highly specific workflows and weren’t relevant to our core audience, while other feedback flagged gaps we knew we needed to address.
We strengthened our understanding of important issues we should consider, as well as the most relevant demographic of users.
Power Users
Our definition of this cohort: Users who engaged with Wudpecker at least once per day for three out of the last seven days (and who, based on the interviews, used at least one optional feature regularly).
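The activity half of this definition (the feature-usage half came from the interviews themselves) is also easy to automate. A minimal sketch, with hypothetical names and assuming you have each user's active dates:

```python
from datetime import date, timedelta

def meets_activity_bar(activity_dates, today, window_days=7, min_active_days=3):
    """Hypothetical engagement filter: active on at least min_active_days
    distinct days within the last window_days days (e.g. 3 of the last 7).
    """
    recent = {d for d in activity_dates if 0 <= (today - d).days < window_days}
    return len(recent) >= min_active_days

# Example: active 1, 3, and 5 days ago -> meets the 3-of-7 bar
today = date(2024, 6, 10)
print(meets_activity_bar([today - timedelta(1),
                          today - timedelta(3),
                          today - timedelta(5)], today))
```

Note that this filter only finds candidates; whether someone is truly a power user (versus a light-engagement user with the same activity pattern) still depends on what they actually do in the product, which is why the interviews were needed to split the cohort further.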
Power users provided insights that went beyond minimal usage, revealing both core strengths and areas for improvement. Within this cohort, we identified at least two distinct subgroups, each with unique needs and feedback. Some people had completely discarded Wudpecker's automated meeting summaries and instead used Ask Wudpecker to quickly find specific answers from past meetings. Others deemed the summaries very important and wanted to heavily customize their structure and content, without necessarily using the AI chat as much.
Lesson Learned: We learned what keeps people using our product on a regular basis, which features are most important, which actions felt repetitive, and so on. Within this cohort, there were many different subgroups and use cases. Each one gave us more understanding into what can take our product to the next level.
Light Engagement Users
Our definition of this cohort: Users who engaged with Wudpecker at least once per day for three out of the last seven days (and who, based on the interviews, did not engage with any features but were instead happy merely documenting meetings and relying on the default summaries).
These users were happy with only documenting and storing meeting notes, without exploring any additional features or AI interaction. There were many different reasons for this simple usage. For instance, some users were unfamiliar with AI chat or uncomfortable using it, which suggested a possible onboarding improvement. Others used Wudpecker solely for legal reasons, recording all meetings "just in case."
Lesson Learned: This cohort helped us distinguish between users who would likely always have minimal feature engagement and those who needed more guidance to discover our product’s potential. For the latter, we improved our onboarding process to better communicate the benefits of advanced features.
Conclusion
Focusing on a single user group can give a limited perspective of your product. Instead, embracing diverse cohorts like churned users, power users, new users, and even light engagement users allows you to address different types of feedback that build a fuller picture of user needs and behaviors.
By doing this, you create a more versatile, user-driven product that not only appeals to various types of users but also grows stronger from their insights.