The Discovery Space in Vistaly is where product teams explore customer needs, generate solution ideas, and validate assumptions with testing.
Vistaly’s discovery space is heavily inspired by Teresa Torres’ Continuous Discovery Habits. While the book isn’t a prerequisite for using Vistaly, it’s a fantastic resource if you’re new to product discovery.
Opportunity Cards are at the heart of Vistaly’s discovery process. They represent the voice of your customer by capturing their needs, pain points, and desires in a structured format that helps your team prioritize effectively.
Opportunity Cards document customer problems, needs, or desires worth solving. They are created based on:
Customer conversations and interviews
User feedback collection
Whereas product usage metrics in your Value Exchange Model tell you what’s happening, Opportunities tell you why it’s not happening more (or less, or faster, etc.).
Opportunity Cards in Vistaly are typically child cards of:
KPI cards – A product metric that would improve if you addressed the Opportunity. This allows you to keep a structured backlog.
Goal cards – A time-bound Goal/Outcome/Key Result the team is actively working to achieve.
Parent Opportunities – Larger opportunities that can be broken down into more specific sub-opportunities
This hierarchical structure allows you to transform complex problems into manageable components, showing a clear relationship between business goals and customer needs. Additionally, the hierarchy provides a structure that makes prioritization more impactful. Instead of prioritizing a flat list of features, you move closer to the problems worth solving and prioritize those first.
Imagine you’re working on a mobile app that delivers content to users. Your app includes a handful of ways for users to find the content they want to watch, read, or listen to. You determine and set your primary outcome: improving the number of searches that return a successful result. Then, after some customer discovery, you:
When adding insights from customer interviews or feedback, an insight count will display on the face of each card. The insight count can be used as a proxy for frequency, reach, and impact potential.
Pro Tip 💡: The insight count will only display when it is enabled in card settings.
For more robust analysis, custom fields allow you to evaluate opportunities across multiple dimensions. Default custom fields for Opportunity Cards include:
| Sizing Criterion | Description |
| --- | --- |
| Frequency | How often do customers experience this pain/need? |
| Reach | How many customers experience this pain/need? |
| Severity | How painful will it be to customers if left unaddressed? |
| Differentiation | Is it table stakes or a differentiator? |
These criteria help teams quantify and prioritize opportunities based on customer impact. Using a consistent evaluation framework allows for more objective comparison between different opportunities.
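To make the idea of a consistent evaluation framework concrete, here is a minimal sketch of how the sizing criteria above could be rolled into a single comparable score. The 1–5 rating scale, the weights, and the example opportunity names are hypothetical assumptions for illustration; this is not a Vistaly feature.

```python
# Illustrative sketch: combining the sizing criteria from the table above
# into one weighted priority score. The 1-5 scale and the weights are
# hypothetical assumptions, not part of Vistaly itself.

def opportunity_score(frequency, reach, severity, differentiation,
                      weights=(0.3, 0.3, 0.3, 0.1)):
    """Each criterion is rated 1-5; returns a weighted score."""
    ratings = (frequency, reach, severity, differentiation)
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be on a 1-5 scale")
    return sum(r * w for r, w in zip(ratings, weights))

# Compare two hypothetical opportunities on the same rubric:
search_filters = opportunity_score(frequency=5, reach=4, severity=3, differentiation=2)
offline_mode = opportunity_score(frequency=2, reach=2, severity=4, differentiation=5)
```

Whatever rubric you choose, the point is that every opportunity is rated on the same scale, so comparisons reflect the criteria rather than whoever argued loudest.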
Pro Tip 💡: Find the right balance with custom fields. Too many can slow down your process, while too few might not provide enough context for decision-making.
Pro Tip 💡: Custom fields will only display when they are enabled in card settings. Some custom fields will only display with data.
Once opportunities have been prioritized, use statuses to communicate when and if the team will address them. Opportunities support the following statuses:
| Stage | Commitment | Description |
| --- | --- | --- |
| Identified | No/Unknown commitment | Aware that the Opportunity exists but have made no commitment to solving it. This is how all Opportunities start. |
| Not now | No commitment | Recognized as an opportunity but deliberately chosen not to address it at this time, possibly due to resource constraints or competing priorities. |
| Later | Committed (Moderate Confidence) | Committed (for now) to solving the Opportunity. Confidence is relatively high that it’s an opportunity worth addressing, but this is subject to change. |
| Next | Committed | Committed, and work will begin soon to start addressing the opportunity. The opportunity may currently be under research or solutions may be being identified. |
| Now | Committed | Committed, and work is actively underway to address it. It’s best if there is an identified solution associated with an opportunity in Now; if not, solutioning should be underway. |
Avoid solutions disguised as outcomes (Binary Outcomes)
A common pitfall is creating “binary outcomes” that are actually solutions in disguise. For example, “Launch the AI Chatbot” is a solution, not an outcome. This matters because starting with solutions bypasses discovery and prevents finding the best ways to solve customer problems.
Pro Tip 💡: When you catch yourself proposing a solution as an outcome, ask “Why do we want this?” to uncover the underlying value you want to provide.
When you’re working in a low-data environment, you may not have the data you need to set a measurable outcome. You don’t need perfect metrics to get started:
Begin with a directional outcome like “improve search discovery”
Start customer conversations and analyze available data (but timebox it)
Refine your measurement approach as you learn more
This approach works particularly well for new products with limited usage data, legacy systems without proper instrumentation, and novel customer workflows.
Top-level Opportunities sit directly under an outcome in your opportunity space. They frame your discovery work and can be broken down into more specific sub-opportunities. Teams often apply segmentation techniques to these top-level opportunities, creating a structured layer that makes it easier to organize and prioritize customer problems.

Customer Journey Step

This approach segments opportunities according to the steps in your customer’s journey when using your product or service.

How to implement:
Map out the key steps in your customer journey from beginning to end
Create a top-level opportunity for each significant step
Validate your journey map with actual customer input
Job (Jobs To Be Done)

The Jobs To Be Done (JTBD) framework focuses on understanding what “job” customers are trying to accomplish when they use your product.

How to implement:
Frame top-level opportunities around the core jobs your customers need to get done
Focus on the progress the customer is trying to make in a particular circumstance
Actor / Role

This approach organizes opportunities based on the different roles or personas that interact with your product.

How to implement:
Identify all key stakeholders who engage with your product
Create top-level opportunities for each distinct role
Consider both primary users and secondary stakeholders
Solutions are actions taken to address an Opportunity. This can be launching a new feature, improving an existing feature, improving documentation, and so on.

Popular Terms:
Once solutions have been ideated, use statuses to communicate when and if the team will implement them. Solutions support the following statuses:
| Stage | Commitment | Description |
| --- | --- | --- |
| Idea | No/Unknown commitment | Initial concept for addressing an Opportunity. No commitment to implementation has been made. |
| Not now | No commitment | Solution evaluated but intentionally deprioritized due to resource constraints, technical limitations, or strategic considerations. |
| Later | Committed (Moderate Confidence) | Planned for a future timeframe beyond the current planning horizon. Recognized as valuable but with lower immediate priority than Now/Next items. |
| Next | Committed | Solution approved with implementation details being finalized. Awaiting resource allocation or developer availability. |
| Now | Committed | Solution actively under development. Implementation work is in progress. |
Note: Using “Later” for Solutions is generally discouraged. The “Later” status is more appropriate for Opportunities (problems) where you’ve committed to solving them but haven’t yet fully defined the problem space or solution approach.
An assumption is a belief accepted as true without definitive proof. Every proposed idea inherently contains multiple assumptions that should be identified and validated. Below are common categories of assumptions that teams should consider:
| Assumption Type | Description |
| --- | --- |
| Desirability | Does this solution solve a meaningful problem for users, and will they want it? |
| Usability | Will users intuitively understand and successfully use this solution? |
| Feasibility | Can this solution be delivered with the current team, resources, and capabilities? |
| Viability | Does this solution align with strategic goals and generate measurable business value? |
Additional Considerations:
| Assumption Type | Description |
| --- | --- |
| Ethical Considerations | Does this solution align with ethical standards and avoid potential harm to users, society, or the environment? |
| Scalability | Can this solution grow to meet increasing demand without degrading quality or performance? |
| Legal | Are there legal considerations that might impact implementation or adoption? |
While the first four assumption types are foundational to most product development efforts, the additional types should be evaluated when relevant to the specific solution context.
Experiments provide evidence to de-risk high-risk assumptions. As a result, they save time, money, and energy by preventing investment in a solution that will give little to no return.

Tip: Consider running experiments on assumptions that have weak evidence AND would pose a high risk of project failure if not correctly understood.
The goal of assumption tests is to provide a lightweight way to de-risk risky assumptions. Here are some common types of tests:
| Test Type | Description |
| --- | --- |
| One Question Survey | A targeted, single-question instrument designed to validate a specific assumption with users. Provides quick, focused feedback on a narrow hypothesis. |
| Visual Prototype | Low- or medium-fidelity visual representation of a solution to test user reactions and validate assumptions about usability, desirability, or comprehension. |
| Research Effort | Structured investigation involving interviews, contextual inquiry, or usability testing to understand user behaviors, needs, and pain points related to an assumption. |
| Data Analysis Effort | Examination of existing user data, metrics, or analytics to validate assumptions based on actual behavior patterns rather than stated preferences. |
| Painted Door Test | A technique that presents users with an interface element (button, link, etc.) for a non-existent feature to gauge interest before building the actual feature. |
Note: These test types can be combined or customized based on the specific assumption being tested and the required level of confidence.
Once solution assumptions have been identified, use experiment (assumption test) statuses to communicate how tests are progressing. Experiments support the following statuses:
| Stage | Commitment | Description |
| --- | --- | --- |
| Developing | In preparation | Experiment design is being created or refined before execution. Methods and metrics are being established. |
| Pending | Ready for execution | Experiment is fully designed and waiting to be run. All prerequisites are in place. |
| Running | In progress | Experiment is currently active and collecting data. Results are not yet available. |
| Failed | Completed (invalidated) | Experiment has concluded and results indicate the assumption was incorrect or not supported by evidence. |
| Passed | Completed (validated) | Experiment has concluded and results support the original assumption, providing validation. |
Once again, there is no hard-and-fast rule, but at each level you should have more than one opportunity. If you have long, straight chains of single opportunities, they are typically one opportunity and can be consolidated. Going broad with the opportunity space can help you see a fuller picture.
Yes. The opportunity space is dynamic; customer needs are constantly changing. This is why it’s helpful to connect customer quotes and other insights to opportunities. When it’s time to revisit them, those insights will bring you back to the original context. Use them to determine relevancy. You may also find, when looking at quotes, that an opportunity is really two or more opportunities.
No. Focus on testing assumptions that are both high-risk (would cause project failure if incorrect) and have weak existing evidence. Testing every assumption would be inefficient and delay progress. Prioritize experiments for assumptions where the cost of being wrong outweighs the cost of running the test.