How to Validate a Product Hypothesis

Карьерник is a Telegram quiz trainer with 1500+ analyst interview questions. SQL, Python, A/B testing, metrics. Free.

Why this matters

An A/B test is the gold standard, but running one for every idea is overkill. Often it's simpler and faster to validate with other methods.

In PM and analyst interviews you may be asked: "How would you check this idea without a full A/B test?"

Formulating a hypothesis

Framework

"If we [do X], then [Y will happen], because [Z]."

Example

"If we add one-click checkout, then conversion rate will increase by 5%, because it reduces friction."

Elements

  • Specific change
  • Predicted outcome (measurable)
  • Mechanism (why it will work)

Types of validation

1. Data analysis

Does the existing data support the hypothesis?

Example: "adding product filter X will help." Check:

  • How many users use the current filters?
  • Do they drop off when they can't find what they need?

If yes, the hypothesis is plausible.
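As a sketch, the same check over a raw event log could look like this; the field names and toy events are assumptions, not a real schema:

```python
# Of the users who applied a filter, how many found nothing and then dropped off?
events = [
    {"user": 1, "used_filter": True,  "found_item": False, "purchased": False},
    {"user": 2, "used_filter": True,  "found_item": True,  "purchased": True},
    {"user": 3, "used_filter": False, "found_item": True,  "purchased": True},
    {"user": 4, "used_filter": True,  "found_item": False, "purchased": False},
]

filter_users = [e for e in events if e["used_filter"]]
not_found = [e for e in filter_users if not e["found_item"]]
drop_off = sum(not e["purchased"] for e in not_found) / len(not_found)
print(f"{len(filter_users)} filter users; drop-off after no result: {drop_off:.0%}")
```

If most filter users who hit an empty result leave without purchasing, the hypothesis is worth pursuing.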

2. User research

  • Interviews
  • Surveys
  • Usability testing

These give qualitative insights.

3. MVP / prototype

Build the minimum viable version of the feature and test it with real users.

Example: "users want notifications." Build a basic email version and watch engagement.

4. A/B test

The formal approach. Best when you need a precise estimate of measurable impact.

5. Beta / dogfood

Use it internally first to discover issues.

6. Cohort comparison

Without randomization: compare users who have the feature vs. those who don't.

Beware of selection bias.

Choosing a method

Quick validation

Data analysis + small user test.

Confidence needed

A/B test.

Qualitative

Interviews.

Risky / irreversible

An A/B test is mandatory.

Practical workflow

Step 1: Refine

Question: "Is it true that X correlates with retention?"

Refined: "Is it true that users who used feature F in their first 7 days have >20% higher D30 retention?"

Now it's measurable.

Step 2: Pre-existing data

Query the data:

SELECT
    used_feature_f_in_7d,
    AVG(retained_d30) AS retention_rate
FROM users
GROUP BY 1;

Observational result: feature users retain at 40% vs. 20% for non-users, a 2× lift.

Step 3: Skepticism

This is correlational. Is it causal?

Maybe engaged users both use the feature and retain; the feature itself may not cause retention.

Observational data alone can't distinguish these explanations.
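A toy simulation (all numbers invented) makes the trap concrete: a hidden "engagement" trait drives both feature use and retention, so the naive comparison shows a large gap even though the feature has zero causal effect:

```python
# Hidden confounder: engagement causes BOTH feature use and retention.
# The feature itself has no effect, yet the naive comparison shows a big lift.
import random
random.seed(0)

users = []
for _ in range(100_000):
    engaged = random.random() < 0.3                           # unobserved trait
    used_feature = random.random() < (0.7 if engaged else 0.1)
    retained = random.random() < (0.5 if engaged else 0.1)    # independent of feature
    users.append((used_feature, retained))

def retention(group):
    return sum(r for _, r in group) / len(group)

with_f = [u for u in users if u[0]]
without_f = [u for u in users if not u[0]]
print(f"{retention(with_f):.0%} vs {retention(without_f):.0%}")  # large gap, no causation
```

The gap comes entirely from who selects into using the feature, which is exactly what randomization removes.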

Step 4: Confirm via experiment

A/B: randomly recommend feature F to a subset of users and compare D30 retention.

If the treatment group's retention is significantly higher, the effect is causal.
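A minimal significance check for such a readout is a two-proportion z-test; the counts below (2000 users per arm, 20% vs. 23% D30 retention) are hypothetical:

```python
# Two-proportion z-test (normal approximation) for a binary metric like D30 retention.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))                # standard normal CDF
    return z, 2 * (1 - phi)                                # two-sided p-value

z, p = z_test(conv_a=400, n_a=2000, conv_b=460, n_b=2000)  # hypothetical counts
print(f"z = {z:.2f}, p = {p:.4f}")
```

If p < 0.05, the lift is unlikely to be noise under the usual assumptions.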

Step 5: Scale

Once validated, roll out and keep measuring the ongoing impact.

Without A/B: observational methods

Diff-in-diff

Compare the before/after change in a treatment group with the same change in a control group; the difference of those differences estimates the effect.
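On toy numbers (all hypothetical) the estimate is just a difference of two differences:

```python
# Difference-in-differences: treatment's before/after change minus control's.
treated_before, treated_after = 0.30, 0.38   # e.g. weekly retention, treatment group
control_before, control_after = 0.30, 0.33   # control group over the same period

did_estimate = (treated_after - treated_before) - (control_after - control_before)
print(f"estimated effect: {did_estimate:.2f}")  # control's trend is subtracted out
```

The control group's trend absorbs seasonality and other shared shocks, so what remains is attributed to the treatment.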

Propensity matching

Match users on similar characteristics, then compare outcomes.
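A minimal sketch of 1:1 nearest-neighbor matching on a single score; in practice the propensity score comes from a model such as logistic regression, and every number here is made up:

```python
# Match each treated user to the control user with the closest score,
# then average the outcome differences across matched pairs.
treated = [(0.31, 1), (0.62, 1), (0.80, 0)]            # (score, retained)
control = [(0.30, 0), (0.58, 1), (0.83, 0), (0.10, 0)]

diffs = []
for score, outcome in treated:
    match = min(control, key=lambda c: abs(c[0] - score))  # nearest neighbor
    diffs.append(outcome - match[1])
avg_effect = sum(diffs) / len(diffs)
print(avg_effect)
```

The validity of the estimate rests on the matched users actually being comparable on everything that matters.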

Instrumental variables

Exploit a natural experiment: a variable that shifts the treatment but doesn't affect the outcome directly.

Regression with controls

Control for confounders in a regression model.

All of these rely on assumptions. They are not the A/B gold standard, but often the best available option.

Pitfalls

Correlation vs causation

The classic trap. Stay skeptical.

Confirmation bias

"I want to ship this feature" leads to cherry-picking data that supports it.

Challenge yourself: actively search for disconfirming evidence.

Small sample

Interviewing 20 users is a small sample; a survey of 1000 gives more confidence in quantitative claims.

Cherry-picking metrics

"Metric A supports it, metric B contradicts it, metric C is neutral. Ship based on A."

Be honest about the full picture.

Post-hoc rationalization

After seeing the results, the story changes to fit them.

Pre-register the hypothesis before looking at the data.
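One lightweight way to pre-register is to write the hypothesis down as a dated record before any results exist; the structure below is an illustration, not a standard format:

```python
# A dated hypothesis record, written down before any results are collected.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Hypothesis:
    change: str              # specific change
    predicted_outcome: str   # measurable prediction
    mechanism: str           # why it should work
    registered_on: date = field(default_factory=date.today)

h = Hypothesis(
    change="add one-click checkout",
    predicted_outcome="conversion rate +5%",
    mechanism="reduced friction",
)
print(h)
```

Anything written after the results arrive is interpretation, not prediction.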

Rapid validation

Fake door

A landing page promising the feature. Measure interest by clicks.

Don't build the feature yet.

Wizard of Oz

Manually deliver a service that will later be automated.

This tests demand without the engineering investment.

Concierge

Personally serve a few customers.

Learn before you build.

Methodology

Decide on the question

Is it actually important? Is there value in answering it?

Choose validation depth

  • Trivial question: gut feel
  • Normal: data check
  • High-stakes: A/B
  • Risky: multiple validations

Document

Hypothesis, method, result, decision.

Learn

Run a post-mortem: what would you do differently?

When to skip A/B

Not measurable

For qualitative goals ("more friendly"), user research works better.

Cannot randomize

Geography, legal, network effects.

Low traffic

With fewer than 1000 users per day, reaching statistical significance is hard.
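A rough sample-size estimate shows why: to detect a lift from a 10% baseline to 11% at alpha = 0.05 and 80% power, the normal-approximation formula asks for roughly 15,000 users per group (the baseline and lift are assumed numbers):

```python
# Per-group sample size for a two-proportion test (normal approximation).
from math import ceil

def sample_size_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    variance = p1 * (1 - p1) + p2 * (1 - p2)   # sum of per-group variances
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = sample_size_per_group(0.10, 0.11)
print(n)  # roughly 15k per group: many weeks of traffic at < 1000 users/day
```

At under 1000 eligible users per day, filling two such groups takes a month or more, before accounting for partial exposure.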

Clear direction

"Fix the crash" is obviously the right move; no A/B needed.

Ethical

Sometimes the test itself is problematic (e.g., pricing discrimination).

How to convince a PM or the team

Data first

«Here's what data shows...»

Framework

"I'd validate this via [method] because [reason]."

Alternatives

If you're skipping the A/B test: "the alternative is X; the trade-off is that we lose Y."

Expected result

"If the hypothesis is true, we'd see Z; if false, we'd see W."

A clear prediction.

Communicating results

Short

For executives: 3 bullets.

Evidence

Charts, numbers.

Recommendation

"Ship / don't ship / iterate."

Caveats

Be honest about limitations.

In the interview

"How would you validate a product idea without an A/B test?"

Walk through:

  1. Existing data analysis
  2. User research
  3. Small-scale MVP
  4. Beta testing
  5. Quasi-experiments

Show flexibility.

"When is A/B required?"

  • Measurable impact
  • Sufficient traffic
  • Can randomize
  • Decision depends on precise estimate

Частые ошибки

A/B everything

Overkill. Some changes are fine to ship without a test.

Nothing validated

Shipping on gut feel. It's often wrong.

Mix methods poorly

An A/B test plus post-hoc segmentation leads to biased conclusions.

One-sided validation

Only checking for positive outcomes and missing the negatives.

FAQ

How big should an MVP be?

Smaller is better: just enough to test the hypothesis.

How many user interviews?

5-10 surface the main patterns; 20+ gives confidence.

Quantitative or qualitative?

Both; they complement each other.


Practice: open the trainer with 1500+ interview questions.