Statistics & Experimentation Interview Questions

Review this list of 73 statistics & experimentation interview questions and answers verified by hiring managers and candidates.
  • Abhinav J. - "Use Normalization when: feeding pixel values (0–255) into a neural network ➔ normalize the data to [0, 1] to avoid huge input values that could slow down training; when using k-Nearest Neighbors (kNN) or K-Means clustering ➔ because distance metrics like Euclidean distance are highly sensitive to magnitude differences; when building a recommender system using cosine similarity ➔ cosine similarity needs data to be unit norm. Use **Sta…"

    Statistics & Experimentation
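The min-max rescaling the answer describes can be sketched in a few lines; the pixel values below are illustrative, not from the original answer.

```python
# Min-max normalization: rescale values into [0, 1], as suggested for
# pixel inputs. Illustrative data.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

pixels = [0, 64, 128, 255]
scaled = min_max_normalize(pixels)
print(scaled)  # every value now lies in [0, 1]
```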
  • Lucas G. - "Null hypothesis (H0): the coin is fair (unbiased), meaning the probability of flipping a head is 0.5. Alternative (H1): the coin is unfair (biased), meaning the probability of flipping a head is not 0.5. To test this hypothesis, I would calculate a p-value, which is the probability of observing a result as extreme as, or more extreme than, what I see in my sample, assuming the null hypothesis is true. I could use the probability mass function of a binomial random variable to model the coin toss b…"

    Statistics & Experimentation
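The binomial-PMF approach the answer outlines can be sketched as an exact two-sided test; the observed result (60 heads in 100 flips) is an illustrative assumption.

```python
import math

# Exact two-sided p-value for a coin-fairness test built from the binomial PMF.
def binom_pmf(k, n, p=0.5):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p_value(heads, n, p=0.5):
    observed = binom_pmf(heads, n, p)
    # Sum the probability of every outcome at least as unlikely as the observed one.
    return sum(binom_pmf(k, n, p) for k in range(n + 1)
               if binom_pmf(k, n, p) <= observed + 1e-12)

p = two_sided_p_value(60, 100)
print(p)  # ~0.057: not quite enough evidence at alpha = 0.05 to call the coin biased
```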
  • Video answer for 'What is a p-value?'

    Farza S. - "It is the smallest level of significance at which the null hypothesis gets rejected."

    Statistics & Experimentation

  • Sangeeta P. - "A/B testing is used when one wishes to test only minor front-end changes on a website. Consider a scenario where an organization wishes to make significant changes to its existing page, such as creating an entirely new version of an existing web page URL, and wants to analyze which one performs better. Obviously, the organization will not be willing to touch the existing web page design for comparison purposes. In that scenario, performing Split URL testing would be beneficial. T…"

    Statistics & Experimentation
  • Lucas G. - "Type I error (typically denoted by alpha) is the probability of mistakenly rejecting a true null hypothesis (i.e., we conclude that something significant is happening when there's nothing going on). Type II error (typically denoted by beta) is the probability of failing to reject a false null hypothesis (i.e., we conclude that there's nothing going on when something significant is happening). The difference is that a Type I error is a false positive and a Type II error is a false negative. T…"

    Statistics & Experimentation
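A Type I error rate can be made concrete with a quick simulation: under a true null (a fair coin), a rejection rule should fire only at roughly its designed alpha. The flip count and rejection threshold below are illustrative assumptions.

```python
import random

# Simulating the Type I error rate: reject H0 ("coin is fair") whenever
# |heads - 50| >= 10 in 100 flips. Under a truly fair coin, this is a
# false positive, and it should happen only rarely.
random.seed(0)

def experiment(n_flips=100):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return abs(heads - n_flips / 2) >= 10  # reject H0?

n_sims = 10_000
false_positive_rate = sum(experiment() for _ in range(n_sims)) / n_sims
print(false_positive_rate)  # close to the exact two-sided tail probability (~0.057)
```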
  • Emiliano I. - "Look for the main variables and see if there are differences in their distributions across the buckets. Run a linear regression where the independent variables are binary indicators for each bucket (excluding one) and the dependent variable is the main KPI you want to measure; if one of those coefficients is significant, you made a mistake."

    Statistics & Experimentation
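With just two buckets, the regression check described above reduces to a two-sample t-test on the bucket means, which is easy to sketch. The data below is synthetic (both buckets drawn from the same distribution, i.e., a correct randomization), and the sample sizes are assumptions.

```python
import random, statistics as st

# A/A-style randomization check: regressing a metric on a single bucket
# dummy is equivalent to comparing the two bucket means with a t-test.
random.seed(42)
bucket_a = [random.gauss(10, 2) for _ in range(1000)]
bucket_b = [random.gauss(10, 2) for _ in range(1000)]

def welch_t(x, y):
    vx, vy = st.variance(x), st.variance(y)
    return (st.mean(x) - st.mean(y)) / ((vx / len(x) + vy / len(y)) ** 0.5)

t = welch_t(bucket_a, bucket_b)
print(t)  # |t| well above ~2 would flag a suspect assignment
```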
  • Lucas G. - "A confidence interval gives you a range of values where you can be reasonably sure the true value of something lies. It helps us understand the uncertainty around an estimate we've measured from a sample of data. Typically, confidence intervals are set at the 95% confidence level. For example, if A/B test results show that variant B has a CTR of 10.5% and its 95% confidence interval is [9.8%, 11.2%], this means that, based on our sampled data, we are 95% confident that the true avg CTR for variant B a…"

    Statistics & Experimentation
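The interval in the answer's example can be reproduced with the usual normal approximation; the sample size of 10,000 impressions is an assumption chosen to roughly match the quoted bounds.

```python
import math

# Normal-approximation 95% CI for a CTR, mirroring the answer's example.
clicks, n = 1050, 10_000
p_hat = clicks / n                       # 0.105, i.e. a 10.5% CTR
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(ci)  # roughly (0.099, 0.111), close to the [9.8%, 11.2%] in the answer
```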
  • Lucas G. - "The central limit theorem tells us that as we repeat the sampling process of a statistic (n > 30), the sampling distribution of that statistic approximates the normal distribution, regardless of the original population's distribution. This theorem is useful because it allows us to apply inference tools that assume normality, such as t-tests, ANOVA, p-values for hypothesis testing, regression analysis, and confidence intervals."

    Statistics & Experimentation
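A quick simulation makes the theorem tangible: sample means from a heavily skewed population (exponential, population mean 1.0) still cluster tightly around the population mean with the predicted spread. Sample sizes are illustrative.

```python
import random, statistics as st

# CLT sketch: means of samples from a skewed exponential population
# concentrate around the population mean, with spread sigma / sqrt(n).
random.seed(7)
sample_means = [st.mean(random.expovariate(1.0) for _ in range(50))
                for _ in range(2000)]
print(st.mean(sample_means))   # close to the population mean, 1.0
print(st.stdev(sample_means))  # close to 1 / sqrt(50) ≈ 0.141
```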
  • Lucas G. - "To speed up A/B test results with limited sample sizes, we can apply advanced techniques like CUPED to reduce variance for faster statistical significance; interleaving to gather more comparative data per user (e.g., for ranking); multi-armed bandits (MAB) to dynamically allocate traffic to winning variations for quicker optimization (e.g., campaigns); and Bayesian A/B testing, which offers probabilistic conclusions that can be reached earlier. Each method, when appropriately applied, allows you to gain m…"

    Statistics & Experimentation
  • Mark S. - "E[Var(X)] = Var(X); Var(X) = E[(X − E[X])²] = E[X²] − (E[X])²"

    Statistics & Experimentation
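The identity Var(X) = E[X²] − (E[X])² can be checked numerically on a small uniform sample (population variance, illustrative values):

```python
# Both sides of Var(X) = E[X²] - (E[X])² computed directly.
data = [1, 2, 3, 4]
n = len(data)
mean = sum(data) / n
e_x2 = sum(v * v for v in data) / n
var_definition = sum((v - mean) ** 2 for v in data) / n  # E[(X - E[X])²]
var_identity = e_x2 - mean ** 2                          # E[X²] - (E[X])²
print(var_definition, var_identity)  # both 1.25
```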
  • Mark S. - "Range captures the difference between the highest and lowest values in a data set, while standard deviation measures the variation of elements from the mean. Range is extremely sensitive to outliers; it tells us almost nothing about the distribution of the data and does not extrapolate to new data (a new value outside the range would invalidate the calculation). Standard deviation, on the other hand, offers insight into how closely data is distributed around the mean, and gives us some pr…"

    Statistics & Experimentation
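The outlier sensitivity described above is easy to demonstrate on a small illustrative data set with one extreme value:

```python
import statistics as st

# One outlier (100) sets the range almost entirely by itself; the
# standard deviation is inflated too, but far less dominated by it.
data = [10, 11, 12, 13, 14, 100]
rng = max(data) - min(data)
sd = st.pstdev(data)
print(rng)  # 90: determined by the single outlier
print(sd)
```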
  • Lucas G. - "I'd recommend adjusting p-values because of the increased chance of Type I errors when testing a large number of hypotheses. My recommended adjustment would be Benjamini-Hochberg (BH) over Bonferroni, because BH strikes a balance between controlling false positives and maintaining statistical power, whereas Bonferroni is overly conservative: while it still controls false positives, it leads to a higher chance of missing true effects (high Type II error)."

    Statistics & Experimentation
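The Benjamini-Hochberg procedure itself is short enough to sketch: sort the p-values, find the largest rank k with p_(k) ≤ (k/m)·alpha, and reject the k hypotheses with the smallest p-values. The p-values below are illustrative.

```python
# Minimal Benjamini-Hochberg (FDR) procedure.
def benjamini_hochberg(p_values, alpha=0.05):
    m = len(p_values)
    ranked = sorted(enumerate(p_values), key=lambda kv: kv[1])
    k = 0
    for rank, (_, p) in enumerate(ranked, start=1):
        if p <= rank / m * alpha:  # step-up threshold (k / m) * alpha
            k = rank
    return {idx for idx, _ in ranked[:k]}  # indices of rejected hypotheses

rejected = benjamini_hochberg([0.001, 0.01, 0.02, 0.04, 0.2])
print(sorted(rejected))  # [0, 1, 2, 3]: four discoveries at FDR 0.05
```

Note that 0.04 is rejected here even though a Bonferroni cutoff (0.05 / 5 = 0.01) would keep only the first two, which is the power difference the answer describes.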
  • Lucas G. - "Because testing many engagement metrics at once increases the risk of finding effects that aren't real (the 'multiple comparisons problem'), you must adjust your criteria for statistical significance. For social media data, the Benjamini-Hochberg procedure is often a practical choice as it controls the rate of false discoveries (FDR) while still allowing you to detect genuine changes; however, the ideal adjustment method will vary depending on your specific number of metrics (e.g., use Bonferron…"

    Statistics & Experimentation
  • Connor W. - "This video is a duplicate of the other video in this lesson, "Design A/B test for New Campaign""

    Statistics & Experimentation
  • Saurabh K. - "Ask a follow-up question: what is the primary goal of expanding into a new vertical? A food-vertical company may want to expand into a new vertical (say, grocery) for the following reasons: attract new customers interested in grocery delivery instead of food delivery; increase usage/order frequency from existing customers; increase revenue and LTV of existing as well as potentially new customers; benefit from synergies with the existing delivery engine by improving utilization of their network…"

    Statistics & Experimentation
  • Asked at McKinsey

    Himani E. - "The cases where data is under heavy outlier influence. Since the mean fluctuates due to the presence of an outlier, the median might be a better measure."

    Data Scientist
    Statistics & Experimentation
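The outlier effect behind this answer shows up immediately on a small illustrative data set:

```python
import statistics as st

# One extreme value drags the mean far from the bulk of the data,
# while the median stays with the typical values.
data = [1, 2, 3, 4, 100]
print(st.mean(data))    # 22.0
print(st.median(data))  # 3
```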