Hypothesis — an assumption or educated guess. In product development, it is a reasonable assumption about an improvement in product performance that the team puts forward for evaluation and verification through experimentation.
Thus, a hypothesis generates the task of running an experiment, but is not itself a task.
| Hypothesis | Task |
| --- | --- |
| Finds a new cause-and-effect relationship and refutes the null hypothesis | Not intended to confirm or refute anything |
| Has a standard "failure" outcome, which still yields new knowledge | Has only the goal of being completed |
Requirements for hypotheses
A working hypothesis is a hypothesis the team is ready to act on. For a hypothesis to become a working hypothesis, it must:
- be judged realistic by the team,
- promise significant growth or valuable new knowledge,
- meet production requirements,
- in case of failure, suggest a direction for developing a stronger hypothesis.
Formulation requirements
- The hypothesis must have a formal author and "owner".
- The author must formulate the hypothesis before proposing it.
- The hypothesis must be verifiable in principle.
- The hypothesis must refute the "null hypothesis": the default assumption that there is no relationship between the two observed events.
- The hypothesis should move you closer to the goal, not further from it.
- A hypothesis must contain a condition and a consequence. The consequence should be tied to a metric and the size of its change ("will increase conversion by 10%"). This lets you run the hypothesis through the economic model and relate it to the current strategy; the sketch after this list shows one way to structure such a record.
- The hypothesis should contain a premise or other rationale for success (e.g., "analytics show that people on large monitors scroll less than people on small monitors, so they probably don't see the content below the fold").
- It should not contradict the previously established facts it is intended to explain.
- It should be applicable to the widest possible class of phenomena. For example: "poor contrast on the phone significantly impaired calls, so poor contrast in other places is likely to cause problems as well."
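To make the formulation requirements above concrete, here is a minimal sketch in Python of a hypothesis record; the class, field names, and sample values are hypothetical illustrations, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Hypothetical record mirroring the formulation requirements above."""
    author: str           # formal author and "owner" of the hypothesis
    condition: str        # the premise / rationale for success
    consequence: str      # expected effect, formulated before proposing it
    metric: str           # the metric the consequence is tied to
    expected_lift: float  # size of the change, e.g. 0.10 for "+10% conversion"

    def is_verifiable(self) -> bool:
        # Without a metric and an expected magnitude, the hypothesis cannot be
        # run through the economic model or tested against the null hypothesis.
        return bool(self.metric) and self.expected_lift != 0.0

h = Hypothesis(
    author="Product team",
    condition="People on large monitors scroll less and miss content below the fold",
    consequence="Moving the key content above the fold will increase conversion",
    metric="conversion",
    expected_lift=0.10,
)
assert h.is_verifiable()
```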
Examples of hypothesis generation
Ways to test hypotheses
An experiment is a practical test of a hypothesis by whatever means are available. The value of an experiment is determined by the value of the knowledge gained, so aim to extract the maximum knowledge from every experiment.
Ways to test hypotheses, prioritized from the most economical to the most expensive:
- ROI modeling
- Data analysis
- Talk with users (User Research, Customer Development)
- MVP
- Usability research
- A/B test
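As an illustration of the last (and most expensive) item on this list, here is a minimal stdlib-only sketch of checking an A/B test result against the null hypothesis, the default assumption that both variants convert at the same rate; the function name and sample numbers are hypothetical:

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: variants A and B share one conversion rate."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Standard normal CDF via the error function; two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical outcome: 500/10,000 conversions on A vs 580/10,000 on B.
p = two_proportion_z_test(500, 10_000, 580, 10_000)
print(f"p-value = {p:.3f}")  # ~0.012: below 0.05, so the null hypothesis is rejected
```

A small p-value is what lets the team reject the "no relationship" default and treat the result as confirmed knowledge.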
Levels of testing and verification
- Specific implementation of the feature (UI version)
- The relationship between the mechanism and key metrics (push → retention)
- The direction of product development (for example, a test of the impact of speed on retention)
Unconfirmed hypotheses
Any hypothesis should carry knowledge. Even an unconfirmed hypothesis yields new knowledge, for example: "segment X is not sensitive to price changes within 20%" or "all segments can handle the card entry interface".
Useful information
Useful techniques for working with hypotheses
- To get tangible results, increase the number of working hypotheses to 5 per week. By various estimates, even in strong teams only 20% of hypotheses succeed. With about 40 productive work weeks in a year, that means roughly 40 successful hypotheses per year. If each successful hypothesis generates a 5% increase in revenue, you can expect a 200% increase in revenue per year (the arithmetic is spelled out in the sketch after this list).
- How do you distinguish a hypothesis from a task? If an idea allows for failure and a rollback to the previous state, it is a hypothesis. If it's "done and forgotten," it's a task.
- **Occam's Razor:** if a phenomenon can be explained in two ways, the first using entities (terms, factors, facts, etc.) A, B, and C, and the second using entities A, B, C, and D, and both give the same result, the first explanation should be preferred. In this example, entity D is redundant and invoking it is excessive.
- **Hanlon's Razor:** the presumption that, when looking for the causes of unpleasant events, human error should be assumed first and deliberate malice only second. It is usually expressed by the phrase: "Never attribute to malice what can be explained by stupidity."
- **Hitchens' Razor:** the burden of proof for a statement lies with the person who claims it is true. If the claimant provides no evidence, the statement is considered unfounded and opponents need not argue further.
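The throughput arithmetic from the first technique above, spelled out in a few lines; the figures come from the text, while the compounded variant is an added assumption about how per-hypothesis lifts might combine:

```python
# Numbers from the "useful techniques" list above.
hypotheses_per_week = 5
work_weeks = 40
success_rate = 0.20       # even strong teams confirm only ~20% of hypotheses
lift_per_success = 0.05   # each confirmed hypothesis adds ~5% revenue

successes = hypotheses_per_week * work_weeks * success_rate  # 40 per year
additive_growth = successes * lift_per_success               # 2.00, i.e. +200%
compounded_growth = (1 + lift_per_success) ** successes - 1  # ~6.04 if lifts multiply

print(f"{successes:.0f} successes/year -> +{additive_growth:.0%} additive, "
      f"+{compounded_growth:.0%} compounded")
```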
What contributes to the adoption of growth methods in a company
- Google-style OKRs and goal synchronization; otherwise there is a conflict of interest
- A flat organizational structure, so ideas don't have to break through a hierarchy
- Ambitious goals; otherwise there is no motive to change the approach
- A stage of steady growth; a company that is already growing fast has little incentive to change
- An agile methodology, to quickly change what was planned
Useful articles
Experiment Design: Success Metric vs. Fail Condition
Last week I wrote about the difference between an assumption and a hypothesis and Steven Diebold wrote a great response. Steve schooled me on a few topics and pointed out my lack of clarity in some areas. One thing he brought up deserves more debate: The Success Metric.
grasshopperherder.com
Good Experiment, Bad Experiment - Reforge
Fareed Mosavat is a former Dir of Product at Slack where he focused on growth, adoption, retention, and monetization of Slack's self-service business. Prior to Slack he worked on product and growth at Instacart, RunKeeper, and Zynga. Fareed recently joined Reforge as an EIR (Executive in Residence).
www.reforge.com
The one line split-test, or how to A/B all the time
Split-testing is a core lean startup discipline, and it's one of those rare topics that comes up just as often in a technical context as in a business-oriented one when I'm talking to startups. In this post I hope to talk about how to do it well, in terms appropriate for both audiences.
www.startuplessonslearned.com
Hypothesis Checklist
read.realstartupbook.com
Experimentation Hub: free tools & advice for A/B Testing
Including: hypothesis kit, power analysis calculator, two-tailed p-value excel calculator and more. A free collection of resources to help you run better A/B tests, be more confident in your results, and put you on the road to becoming an experimentation expert!
experimentationhub.com

Great One-Pagers
I have always been a fan of one-pagers - short, space-constrained, descriptions of a proposed product bet. A single page is something that you can put up on a wall for everyone to see. It takes between 3-6 minutes to read. If you use GDocs, you can invite commenting and suggestions.
medium.com
23 UX Case Studies To Improve Your Product Skills
See exactly how companies like Tinder, Airbnb, Trello, Uber and Tesla design products that people love. One new user experience case study every month.
growth.design
