How to choose the right metric for a feature

Карьерник is a quiz trainer in Telegram with 1500+ questions for analyst interviews: SQL, Python, A/B testing, metrics. Free.

Why this matters

The wrong metric leads to wrong decisions: you ship a feature because clicks are up, while retention is quietly falling.

Metric selection is a core skill for a product analyst, and interviewers ask about it regularly.

Framework

  1. Business goal: why does the feature exist?
  2. User value: what does the user get?
  3. Primary metric: captures that value
  4. Secondary metrics: add context
  5. Guardrails: make sure nothing important breaks

Example: improving search

Business goal

The user finds a product faster.

User value

Less time spent searching → more time browsing and buying.

Primary

"Average number of searches before a purchase."

Secondary

  • Search CTR
  • Time-to-purchase
  • Revenue per search

Guardrails

  • Total revenue has not dropped
  • Cart abandonment has not increased
  • Error rate is not worse

Criteria for a good metric

Measurable

Can we actually get the data, and is it accurate?

Sensitive

Does it detect the effect of the change?

Predictive

Does it lead to the business outcome we care about?

Actionable

Can the team actually influence it?

Aligned

Does it reflect both user value and business goals?

Not gameable

Is it hard to game trivially?

Common traps

Vanity metrics

Total signups keep growing, but the users are not actually active.

Better: active users, retention.

Confusing a proxy with the goal

CTR ≠ revenue. A click is not a purchase.

Short-term bias

Engagement is up today, but is retention down?

Aggregation hides problems

Overall the metric is +5%, but the premium segment is at -20%. A small high-value segment can degrade badly while the blended average still looks healthy.

Gameable

"Reduce support tickets" → agents start closing tickets prematurely.

A counter-metric is needed (for example, ticket reopen rate).

Categories

Engagement

DAU, session length, actions per user. Answers: are people using the product more?

Adoption

Share of users who use the feature. Answers: are people finding it?

Conversion

Conversion rate at a funnel step. Answers: are people moving through the funnel?

Retention

Are users coming back, or churning?

Revenue

ARPU, MRR, purchase rate.

Satisfaction

NPS, CSAT, ratings.

Each category answers a different question.

Examples for specific features

Checkout redesign

Primary: checkout CR. Secondary: cart abandonment, time-to-purchase. Guardrails: revenue, refund rate.

Recommendation system

Primary: CTR on recommendations. Secondary: conversion after a recommendation, watch or usage time. Guardrails: recommendation diversity, user satisfaction.

Onboarding

Primary: activation rate (key action completed). Secondary: time-to-activate, drop-off points. Guardrails: D7 retention.

New feature launch

Primary: adoption rate (% of users who tried it). Secondary: engagement (repeat usage), retention impact. Guardrails: other product metrics have not degraded.

Pricing change

Primary: revenue per user. Secondary: conversion rate, churn. Guardrails: NPS, support volume.

Leading vs lagging

Leading

Early indicators; the team can act on them.

Example: feature adoption, which predicts retention.

Lagging

Outcomes; the final truth.

Examples: revenue, Q4 retention.

Track both, but optimize the leading ones.
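
One way to sanity-check a leading metric is to confirm that it actually moves together with the lagging outcome. A minimal sketch in Python, assuming a cohort-level table with hypothetical columns week1_adoption and m3_retention:

import pandas as pd

def leading_vs_lagging_corr(cohorts: pd.DataFrame) -> float:
    # Correlation across cohorts between a leading metric (week-1 adoption)
    # and a lagging one (month-3 retention). Column names are assumptions.
    return cohorts["week1_adoption"].corr(cohorts["m3_retention"])

A weak correlation is a warning that the leading metric may not be worth optimizing.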

North Star alignment

The feature metric should feed the team metric, which in turn feeds the company NSM.

Example:

  • Company NSM: paying customers
  • Team metric: activation rate
  • Feature metric: onboarding completion rate

The hierarchy must stay consistent.

Why guardrails matter

Call out guardrail metrics explicitly.

"Don't optimize one metric while breaking another."

The classic failure: X improves while Y quietly degrades.

Pre-register

Before an experiment or launch, write down:

  • Primary metric
  • Success threshold
  • Guardrails + thresholds
  • Analysis plan

This prevents post-hoc rationalization.
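
A minimal sketch of such a pre-registered plan, written as a Python dict before launch (the feature name, metrics, and thresholds below are illustrative, not a prescribed schema):

# Pre-registration sketch: fix the plan before looking at experiment data.
experiment_plan = {
    "feature": "search ranking v2",                  # hypothetical feature
    "primary_metric": "avg_searches_before_purchase",
    "success_threshold": "-5% relative, p < 0.05",
    "secondary_metrics": ["search_ctr", "time_to_purchase", "revenue_per_search"],
    "guardrails": {
        "total_revenue": "no statistically significant drop",
        "cart_abandonment": "increase below 1 p.p.",
        "search_error_rate": "no increase",
    },
    "analysis_plan": "two-sided test, 2 weeks, breakdown by platform",
}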

Long-term vs short-term

Beware of features that boost short-term metrics while hurting long-term ones.

Example: aggressive notifications → engagement goes up in the short term, churn goes up in the long term.

Solution: a holdout group and long-term tracking.
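
One common way to run a long-term holdout is deterministic, hash-based assignment, so the same user always stays in the same group. A minimal sketch in Python (the salt and the 5% share are illustrative):

import hashlib

def in_holdout(user_id: str, salt: str = "notif_holdout_2024", share: float = 0.05) -> bool:
    # Deterministic assignment: hash the user id with a salt and keep the
    # bottom `share` of buckets out of the feature for long-term comparison.
    digest = hashlib.md5(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000
    return bucket < share * 10_000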

User segments

Always check

  • Power users vs casual
  • Paid vs free
  • New vs returning
  • Geography

Heterogeneous effects are common.

"The feature helps premium users but hurts free users" is a significant finding in itself.
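
A minimal sketch of a per-segment readout in pandas, assuming an experiment table with hypothetical columns plan (the segment), group ("control"/"treatment"), and a binary converted flag:

import pandas as pd

def lift_by_segment(df: pd.DataFrame, segment_col: str = "plan") -> pd.DataFrame:
    # Conversion rate per segment and experiment group, plus the difference.
    rates = (
        df.groupby([segment_col, "group"])["converted"]
          .mean()
          .unstack("group")        # columns: control, treatment
    )
    rates["lift"] = rates["treatment"] - rates["control"]
    return rates

An overall positive lift with a negative row for one segment is exactly the case described above.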

Anti-goals

Some teams define anti-goals:

"We will NOT track metric X, because it creates the wrong incentives."

Example: not tracking session length, because it could encourage engagement farming.

Consensus

Metric selection is often contentious:

  • The PM wants an engagement metric
  • The business wants revenue
  • Engineering wants reliability

Discuss, agree, and pre-commit.

Evolution

The metric may change over the product lifecycle:

Early

Activation, user acquisition.

Growth

Retention, engagement.

Mature

Monetization, efficiency.

Adapt.

Specific decisions

Continuous vs binary

"Revenue" is continuous and precise. "Converted" is binary and simpler.

The trade-off is precision vs sensitivity.
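
The sensitivity side of that trade-off can be checked with a power calculation. A rough sketch using statsmodels, with made-up baseline values (conversion 10%, revenue per user with mean 5 and standard deviation 25, both getting a +10% relative lift):

from statsmodels.stats.power import NormalIndPower, TTestIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Binary metric: conversion 10% -> 11%
h = proportion_effectsize(0.11, 0.10)
n_binary = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.8)

# Continuous metric: revenue per user, +0.5 on a standard deviation of 25
d = 0.5 / 25.0
n_continuous = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8)

print(f"users per group, binary metric: {n_binary:.0f}")
print(f"users per group, continuous metric: {n_continuous:.0f}")

With noisy, heavy-tailed revenue the binary metric can actually need fewer users, so "more precise" does not automatically mean "more sensitive".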

Count vs percentage

"Total purchases" grows with traffic; "conversion rate" is normalized.

Use rates for comparisons.

Sum vs per user

"Total revenue" is dominated by outliers; "ARPU" normalizes per user.

Per-user metrics are usually more stable.

Mean vs median

The mean is affected by outliers; the median is robust.

The choice depends on the distribution.
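
A quick illustration of that robustness, using made-up order values and one whale purchase:

import numpy as np

orders = np.array([10, 12, 9, 11, 10, 13, 10, 9, 11, 10])
orders_with_whale = np.append(orders, 5000)   # a single outlier order

print(np.mean(orders), np.median(orders))                        # 10.5, 10.0
print(np.mean(orders_with_whale), np.median(orders_with_whale))  # ~464, 10.0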

Metric definition

Document each metric formally:

Metric: D7 retention
Definition: unique users with a "session" event on day 7 after signup
Filters: exclude internal users and test accounts
Source: events table
Refresh: daily
Owner: @analyst

This becomes the single source of truth.
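
A sketch of that definition in pandas, assuming two hypothetical tables: signups(user_id, signup_date, is_internal) and events(user_id, event, event_date), both with datetime columns:

import pandas as pd

def d7_retention(signups: pd.DataFrame, events: pd.DataFrame) -> float:
    # Cohort: signed-up users, excluding internal and test accounts.
    cohort = signups[~signups["is_internal"]]
    sessions = events[events["event"] == "session"][["user_id", "event_date"]]
    merged = cohort.merge(sessions, on="user_id", how="left")
    merged["day_offset"] = (merged["event_date"] - merged["signup_date"]).dt.days
    # Retained: had a "session" event exactly 7 days after signup.
    retained = merged.loc[merged["day_offset"] == 7, "user_id"].nunique()
    return retained / cohort["user_id"].nunique()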

In an interview

"What metric would you pick for feature X?"

Walk through:

  1. Clarify feature
  2. Business goal
  3. User value
  4. Propose primary
  5. Secondary
  6. Guardrails

Stay structured.

"Why this metric?"

Justify:

  • Measurable
  • Sensitive
  • Predictive
  • Actionable

Show rigor.

«Alternatives?»

Consider several options and discuss the trade-offs.


FAQ

Is a single metric enough?

For a feature, yes: one primary metric. But always add guardrails.

Are proxy metrics OK?

Sometimes they are necessary (revenue is a lagging metric). Check that the proxy correlates with the real outcome.

How do you settle disagreements?

Use the framework, bring data, and pre-commit. A senior stakeholder arbitrates.


Practice: open the trainer with 1500+ questions for analyst interviews.