Analytics Fundamentals · 16 min read
Product analytics is the practice of collecting, measuring, and interpreting user behaviour data to improve how a product acquires, activates, retains, and monetises users. It is not the same as business intelligence (which typically measures financial and operational metrics), not the same as A/B testing (which is a decision-making method that uses analytics data), and not the same as market research (which measures potential users, not actual behaviour).
The goal of product analytics is to answer three types of questions: diagnostic ("why did our Day 7 retention drop 5 points last month?"), predictive ("which users are at risk of churning in the next 30 days?"), and prescriptive ("what should we change to improve activation rate?"). Most teams are good at descriptive questions ("how many users did X yesterday?") but struggle to progress to diagnostic and prescriptive analysis — which is where the actual product decisions come from.
- **DAU/MAU ratio:** Daily Active Users ÷ Monthly Active Users. Measures "stickiness" — how often retained users actually use the product. Above 0.2 is healthy for most apps; above 0.5 for daily-use products like payments.
- **Activation rate:** % of new users who complete the defined "activation event" — the first action that predicts long-term retention. The most misunderstood metric in product analytics.
- **Day 1/7/30 retention:** % of a user cohort still active 1, 7, and 30 days after their first visit. The most important set of product health metrics. Flat curves beat declining ones regardless of absolute level.
- **Time to activation:** Median minutes from signup to first activation event. Shorter is almost always better. Reducing time-to-activation from 20 min to 5 min typically improves activation rate significantly.
- **Feature adoption:** % of active users who use a specific feature. Low adoption of an important feature signals discovery or UX problems. Removing unused features reduces complexity.
- **Funnel drop-off:** % of users who leave at each step of a key flow (onboarding, checkout, KYC). Identifies the exact step where friction is highest and where to focus optimisation.
- **Free-to-paid conversion:** % of free/trial users who convert to a paid plan. Most relevant for SaaS and freemium models. 2-5% is typical for B2C; 10-25% for well-qualified SaaS trials.
- **ARPU and LTV:** Average Revenue Per User and Lifetime Value. LTV = ARPU × average months retained. The LTV:CAC ratio must be >3 for a sustainable business model.
- **Net Revenue Retention (NRR):** Revenue from existing customers, including expansions, net of churn. Above 100% means existing customers are growing revenue without new acquisition. The SaaS health metric.
- **Viral coefficient (K-factor):** Average new users each existing user brings through referral. K = (invites/user) × (invite conversion %). K>1 is viral growth. Even K=0.3 meaningfully reduces effective CAC.
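The formula-based metrics above translate directly into code. A minimal Python sketch, with purely illustrative input numbers (not benchmarks):

```python
# Sketch: the core metric formulas described above, with illustrative inputs.

def stickiness(dau: int, mau: int) -> float:
    """DAU / MAU ratio; above ~0.2 is healthy for most apps."""
    return dau / mau

def ltv(arpu: float, avg_months_retained: float) -> float:
    """LTV = ARPU x average months retained."""
    return arpu * avg_months_retained

def k_factor(invites_per_user: float, invite_conversion: float) -> float:
    """K = (invites/user) x (invite conversion rate); K > 1 is viral growth."""
    return invites_per_user * invite_conversion

print(stickiness(dau=42_000, mau=150_000))        # 0.28
print(ltv(arpu=120.0, avg_months_retained=9.0))   # 1080.0
print(k_factor(2.5, 0.12))                        # ~0.3: sub-viral, but still lowers effective CAC
```

With LTV in hand, checking the LTV:CAC > 3 rule of thumb is a one-line comparison.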
The right analytics stack depends on your stage, team size, and product type. At the earliest stage (pre-product-market fit), use the simplest tool that gives you funnel and retention data. Complexity is the enemy — you want answers, not dashboards.
| Tool | Best For | Pricing (2026) | Indian teams use it for |
|---|---|---|---|
| Mixpanel | Event-based analytics, funnels, cohorts | Free to 1M events/mo | Retention curves, funnel analysis |
| Amplitude | Behavioural analytics, pathways | Free to 10M events/mo | User path analysis, growth charts |
| PostHog | Self-hosted, open source, all-in-one | Free self-hosted | Data privacy, full control |
| CleverTap / MoEngage | Analytics + engagement (India-built) | Paid, India pricing | Combined analytics + push/email |
| Firebase | Mobile apps, lightweight | Free | Mobile event tracking, A/B testing |
Recommended stack by stage:

- **0-10K MAU:** Firebase + Google Analytics (free, sufficient).
- **10K-500K MAU:** Mixpanel or PostHog (event-level analytics essential).
- **500K+ MAU:** Amplitude or a data warehouse solution (Segment → BigQuery/Snowflake → Looker) for cross-team analytics.
Event tracking is the foundation of product analytics — logging specific user actions as events with properties that let you analyse them. Good event tracking makes everything else in analytics possible; bad tracking creates months of unreliable data that leads to wrong decisions.
Events to always track:

- User signed up
- User completed onboarding
- User completed the activation event
- User returned after N days (retention event)
- User triggered any key flow (checkout started, payment initiated, KYC started)
- User paid (with revenue amount)
- User referred another user
- User churned / cancelled
Event naming conventions matter more than you think: Use a consistent verb-noun format: payment_initiated, kyc_completed, course_module_viewed — not a mix of KYCDone, viewed course module, payment. Inconsistent naming creates unmergeable data across platforms and teams. Establish the naming convention in a tracking plan document before you start, and enforce it in code reviews.
Properties to always add to events: user_id, user_segment (new/returning/paid), platform (iOS/Android/web), feature_version (so you can tie events to A/B test variants), and timestamp. For financial events, also add amount, currency, and method.
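A tracking plan is easier to enforce if the rules live in code. A minimal sketch of a validating wrapper, where `track` is a hypothetical stand-in for whatever SDK call you actually use (Mixpanel, Amplitude, PostHog, etc.):

```python
# Sketch: enforce the verb_noun naming convention and required event
# properties before anything reaches the analytics SDK. The validation
# rules mirror the tracking-plan guidance above; names are illustrative.
import re
import time

EVENT_NAME = re.compile(r"^[a-z]+(_[a-z]+)+$")  # snake_case verb_noun, e.g. payment_initiated
REQUIRED = {"user_id", "user_segment", "platform", "feature_version"}

def track(event: str, props: dict) -> dict:
    """Validate and timestamp an event; raise early on tracking-plan violations."""
    if not EVENT_NAME.match(event):
        raise ValueError(f"event name '{event}' violates the verb_noun convention")
    missing = REQUIRED - props.keys()
    if missing:
        raise ValueError(f"event '{event}' missing required properties: {missing}")
    if event.startswith("payment_") and not {"amount", "currency", "method"} <= props.keys():
        raise ValueError("financial events need amount, currency and method")
    return {"event": event, "timestamp": time.time(), **props}

track("payment_initiated", {
    "user_id": "u_123", "user_segment": "returning", "platform": "android",
    "feature_version": "2.3", "amount": 499.0, "currency": "INR", "method": "upi",
})
```

Raising a `ValueError` in development (and logging it in production) catches inconsistent names like `KYCDone` at the code-review stage rather than months later in the data.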
Funnel analysis shows the conversion rate at each step of a sequential flow. The onboarding funnel (signup → profile → activation) tells you where users drop off. The payment funnel (cart → checkout → payment confirmed) tells you where revenue is lost. Set up funnels for every critical flow in your product and review them weekly.
The most useful funnel insight is not the overall conversion rate but the step with the biggest drop. If 80% of users move from step 1 to step 2, but only 30% from step 2 to step 3, step 2→3 is your priority regardless of what's happening elsewhere in the funnel.
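Finding the worst transition is a simple pairwise scan over step counts. A sketch with illustrative numbers:

```python
# Sketch: locate the funnel step with the biggest drop-off, as described
# above. Step names and counts are illustrative.
funnel = [("signup", 10_000), ("profile", 8_000), ("activation", 2_400)]

def biggest_drop(steps):
    """Return (from_step, to_step, conversion) for the worst transition."""
    transitions = [
        (a, b, nb / na)
        for (a, na), (b, nb) in zip(steps, steps[1:])
    ]
    return min(transitions, key=lambda t: t[2])

print(biggest_drop(funnel))  # ('profile', 'activation', 0.3)
```

Here signup→profile converts at 80%, but profile→activation at only 30%, so that step is the optimisation priority.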
Cohort analysis groups users by when they first used the product (acquisition cohort) and tracks what % are still active over time. Cohort curves reveal whether your product is improving: if the June cohort's Day 30 retention is higher than the March cohort's, your product is getting better at retaining users. If it's declining, something got worse — a product change, a shift in acquisition channel quality, or a seasonality effect.
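The same comparison can be computed from raw activity records. A minimal sketch using monthly acquisition cohorts and invented data:

```python
# Sketch: a tiny cohort retention calculation from (user, cohort_month,
# active_month) records. The data is illustrative.
from collections import defaultdict

activity = [
    ("u1", "2026-03", "2026-03"), ("u1", "2026-03", "2026-04"),
    ("u2", "2026-03", "2026-03"),
    ("u3", "2026-06", "2026-06"), ("u3", "2026-06", "2026-07"),
    ("u4", "2026-06", "2026-06"), ("u4", "2026-06", "2026-07"),
]

cohort_users = defaultdict(set)   # cohort -> users acquired that month
active = defaultdict(set)         # (cohort, month) -> users active that month

for user, cohort, month in activity:
    cohort_users[cohort].add(user)
    active[(cohort, month)].add(user)

def retention(cohort: str, month: str) -> float:
    """Share of a cohort still active in a given month."""
    return len(active[(cohort, month)]) / len(cohort_users[cohort])

print(retention("2026-03", "2026-04"))  # 0.5: month-1 retention, March cohort
print(retention("2026-06", "2026-07"))  # 1.0: month-1 retention, June cohort
```

Comparing the same relative period across cohorts (month 1 vs month 1) is what reveals whether the product is improving.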
1. Tracking everything, analysing nothing. Teams implement comprehensive event tracking then never build the dashboards or run the queries to use the data. Track the events tied to the 5-6 metrics you'll review weekly. Add more as you have questions.
2. Confusing DAU with retention. DAU going up doesn't mean retention is healthy — if you're running acquisition campaigns, DAU rises even as cohort retention declines. Always separate new user activity from returning user activity in your dashboards.
3. Measuring activation wrong. Defining activation as "completed onboarding" rather than "completed the action that predicts retention" means optimising for the wrong event. Run the correlation analysis: which Day 1 actions correlate with Day 30 retention? That's your activation event.
4. Not segmenting by acquisition channel. Blended retention and activation metrics hide huge differences in user quality across channels. Organic users might have 3x the activation rate of paid users. Without segmentation, you're making decisions on averages that don't describe any actual user cohort.
5. Celebrating averages. Average session time, average ARPU, average activation rate — averages obscure the bimodal distributions that tell the real story. Your top 20% of users might account for 80% of revenue. Understanding the segment composition is more useful than the average.
6. No data quality checks. Duplicate events, missing user IDs, timezone mismatches, and SDK sampling errors are endemic in production analytics pipelines. Build data quality checks (event count alerts, duplicate detection, property validation) before your data reaches dashboards or you'll make decisions on wrong numbers.
The test of a good analytics setup is not the quality of the dashboards but whether your team makes better product decisions because of them. The decision-making process with analytics data:

1. Identify the metric you want to improve (e.g., Day 7 retention).
2. Diagnose why it's at its current level using funnel analysis, cohort comparison, and user path analysis.
3. Form a hypothesis about what change would improve it ("if we send a Day 2 re-engagement push with a personalised continuation prompt, Day 7 retention will improve by 3 points").
4. Test the hypothesis in an A/B test if you have enough volume (>500 users per variant); otherwise run it as a sequential experiment.
5. Measure the result against the specific metric, not proxies.
6. Ship if positive; learn if neutral or negative.
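Measuring an A/B result against the pre-committed metric typically means comparing two conversion rates. A sketch using a two-proportion z-test (normal approximation, standard library only); the counts are illustrative, and a real analysis would also pre-register the sample size:

```python
# Sketch: two-proportion z-test for comparing a retention/conversion rate
# between control (A) and variant (B). Normal approximation; assumes
# reasonably large counts in both variants.
from math import erf, sqrt

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Return (absolute lift of B over A, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Illustrative: Day 7 retention 18.0% (control) vs 22.5% (variant), 1000 users each.
lift, p = two_proportion_z(180, 1000, 225, 1000)
print(f"lift = {lift:.3f}, p = {p:.3f}")
```

If p is below the threshold you committed to before the test (e.g. 0.05) and the lift is on the pre-registered metric, ship; otherwise treat the result as learning, not as an invitation to search for a metric that happens to look good.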
The most important discipline: pre-commit to the metric and the decision threshold before you run the test. Teams that look at results and decide post-hoc what to measure (p-hacking) consistently make bad product decisions even when they have excellent analytics infrastructure.
We help product teams build analytics infrastructure from event taxonomy to dashboards to decision processes. Book a free strategy session.