How to Ace A/B Testing Interview Questions

Product Management
Anthony Pellegrino

A/B testing is one of the most important user experience research methods, and most companies rely on it.

For many product managers and data scientists, it is a crucial tool for experimenting with product changes and deciding which ones to move forward on.

If you have upcoming product manager or data science interviews, you should expect to be asked several questions on A/B testing and be prepared to answer them fully.

Analytical Interview Questions Prep Course

Analytical questions involve reasoning about metrics and quantitative analysis to improve and ship excellent products. In this section, we'll break down the major types of analytical questions and show you detailed examples of how to ace the analytical PM interview, straight from the PM interviewers who ask the questions.

Start Learning

Quite frankly, if you do your homework, there's no reason you can't ace the A/B testing questions in your next PM interview.

We even have a lesson dedicated to A/B testing in our PM interview course; check it out here!

But this article will help you with that, too. Here's a guide to what A/B testing is and how you can ace your A/B testing interview questions.

Table of Contents:

  • Examples of A/B Testing Interview Questions
  • What Is A/B Testing?
  • Why Is A/B Testing Important?
  • What Is The Goal of A/B Testing?
  • What Every PM Should Know About A/B Testing
  • The Different Types of A/B Test Design
  • How to Properly Run A/B Tests
  • A/B Test Example - From Beginning to End

Examples of A/B Testing Interview Questions

Nobody can tell you precisely what A/B testing interview questions will be thrown your way during your upcoming PM or data science interviews.

However, here are some common ones we’ve compiled from questions previously asked at Big Tech companies like Google, Microsoft, and Amazon.

So long as you get a good handle on the fundamentals of A/B testing (which we’ll discuss in greater detail later in this article), you should have the tools to answer any unforeseen or off-the-cuff A/B questions.

📊
Devise A/B tests to reduce user frustration with Google Maps.

First and foremost, during a product manager or data scientist interview, A/B testing questions may be structured around something specific, such as one of the company’s products or features.

Here, if you were interviewing at Google, a hiring manager could ask you this question about Google Maps.

Our co-founder Stephen (who used to work as a product manager at Google for several years) answers this question in great detail here.
📊
How would you design an A/B test for an email marketing campaign?

A/B testing is a commonly used technique in email marketing, as it allows marketers to compare the performance of two different versions of an email and determine which one is more effective. This can help marketers improve the effectiveness of their email campaigns and better engage with their audience.

A/B testing is often used to test different aspects of an email, such as the subject line, the content of the email, the layout or design, or the call to action. By comparing the results of the two versions, marketers can determine which elements of the email are most effective and use that information to optimize future campaigns.

Here’s a sample answer to this question:

We can start out with a control group and a test group for the email campaigns. Variables about the users (demographics, location, etc.) should be controlled for to ensure there’s no systematic difference between the two sets of users.

After ensuring that the two groups are balanced with respect to those variables, we can run an A/B test as follows. Our variable of interest will be the conversion rate (in this example, UberEats signups driven by the email) within each group. Since a signup is a binary event per user, we can model our variable of interest per user as a Bernoulli random variable (a coin flip, with probability p of a signup). Therefore, the total number of signups in each group, assuming a group size of n users, follows a Binomial distribution with parameters n and p. By the Central Limit Theorem, the overall conversion rate will tend toward a normal distribution with enough users.
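To make this concrete, here's a minimal sketch of the two-proportion z-test this reasoning leads to. Under the normal approximation above, the difference in conversion rates divided by its standard error is roughly standard normal, so we can compute a z-statistic and p-value directly. All counts here are hypothetical, purely for illustration:

```python
# Two-proportion z-test for the email-campaign example above.
# All counts are hypothetical, purely for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for the difference in
    conversion rates between groups A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided test
    return z, p_value

# Control email: 200 signups out of 5,000 users; variant: 260 out of 5,000.
z, p = two_proportion_z_test(200, 5_000, 260, 5_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```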

Check out the Exponent Data Science Interview Course for the full solution here.

What Is A/B Testing?

Source: Seobility

Unsurprisingly, the best way to ace the A/B portion of your PM interview is to understand what A/B testing is.

Nobody can tell you precisely what A/B questions you'll face on interview day. Nor is that actually necessary.

A comprehensive understanding of the methodology and its use will be the best way to adequately prepare for any random questions you may have to answer on the fly.

So what is A/B testing anyway?

Well, quite simply, A/B testing (also called split testing or bucket testing) is the process of testing two versions, A and B, of a web page, product design, layout, etc., to compare their performance on a chosen metric.

This is done by randomly showing each user one of the versions and measuring which performs better on the selected metric. Of course, the metric can vary: page views, conversions, bounce rate, etc.

Whether it's the language on a landing page, the style of a button, or the color of some element, A/B testing can help PMs make data-driven decisions resulting in objective improvements.

For example, imagine you have a call-to-action at the bottom of a landing page asking visitors to sign up for an email list.

If a PM wanted to choose between two phrases that would best push visitors to subscribe to that list, they could run an A/B test.

They would formulate two variants.

For example, let's say the variants are "Sign up today!" and "Join our newsletter!" They would run both simultaneously and then measure which produces more sign-ups.

That's A/B testing in a nutshell—the measuring and comparing of two variants to see which performs better under a specific metric.
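In practice, the "randomly showing" part is often implemented with deterministic bucketing, so each visitor consistently sees the same variant across visits. Here's a minimal sketch; the experiment name and the mapping to the two CTA phrases above are hypothetical:

```python
# Deterministic variant assignment: hashing the user ID gives each user a
# stable bucket, so the same visitor always sees the same variant.
# The experiment name and variant copy here are hypothetical.
import hashlib

def assign_variant(user_id: str, experiment: str = "newsletter_cta") -> str:
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 2          # roughly a 50/50 split
    return "Sign up today!" if bucket == 0 else "Join our newsletter!"

print(assign_variant("user-42"))  # the same user ID always maps to one variant
```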

Why Is A/B Testing Important?


While that's all well and good, why are A/B tests necessary, really?

Ultimately, this form of experimental design is one of the many ways to objectively compare and contrast aspects of product choices that may otherwise be difficult to quantify.

Many elements of a product may traditionally be thought to be subjective. For instance, whether a button is red or blue could be chalked up to a preferential choice.

Data Science Interview Course

Worried about data science interview questions? Practice common data science questions in this course.

Start Learning

Thus, differentiating which would be better from a conversion perspective may be difficult without a data-driven method like an A/B test.

In this way, A/B testing is essential because it provides objective and quantifiable data regarding elements of a product that may otherwise be qualitative.

What Is The Goal of A/B Testing?


As such, the goal of A/B testing is to objectively measure which version of a product performs better.

The purpose of A/B testing would be to provide actionable data so the most prudent decisions can be made.

For example, should this button have rounded or square corners?

Should it be flat or have a drop-down shadow?

Should it be this color or that?

A/B testing can be used to help objectively answer such questions.

While everything we've covered is relatively straightforward, aspiring PMs should be aware of some more complexity behind the scenes.

What Every PM Should Know About A/B Testing


Here are some fundamental things every PM should know about A/B testing to prepare for their upcoming PM interview.

The Different Types of A/B Test Design

First and foremost, a product manager or data scientist should understand the difference between the related testing methodologies to determine if an A/B test is the best course of action.

There are at least two other related UX test designs that PMs should know.

These are:

A/B/N

Another similar kind of test is the A/B/N test.

In this kind of test (usually done for web pages), more than two versions are tested, whereas an A/B test only tests two versions against each other.

The N in A/B/N stands for "number," meaning the number of tested versions.

A/B/N tests are similar to multivariate tests, except multivariate tests involve testing all possible combinations of the different variables at once, whereas A/B/N does not.

Instead, A/B/N is used to try several different versions against each other.

These tests are best used for major layout or design decisions rather than testing the individual differences between specific elements.

Multivariate

Each of these experimental methods suits a different situation. More often than not, your interviewer will ask you when to use each and why.

A/B tests, as the name suggests, compare just two versions, ideally differing in a single variable. If too many variables are included in an A/B test, it becomes difficult to discern why one version outperformed the other.

When multiple variables need to be tested, multivariate testing is the way to go.

Multivariate testing is when all possible combinations of versions and all their variables are tested simultaneously.

This test design is best used when several product changes need to be decided at once.

Rather than running dozens of A/B tests on every design change, a multivariate test can be performed in which every possible combination is tested against each other.
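To see why multivariate tests demand so much more traffic, here's a small sketch (with hypothetical variables) that enumerates the full grid of combinations. Eight cells means your traffic is split eight ways instead of two:

```python
# A multivariate test covers every combination of variables simultaneously.
# itertools.product enumerates the full factorial grid; the variable values
# below are hypothetical.
from itertools import product

headlines = ["Sign up today!", "Join our newsletter!"]
button_colors = ["red", "blue"]
layouts = ["minimal", "detailed"]

variants = list(product(headlines, button_colors, layouts))
print(len(variants))  # 2 x 2 x 2 = 8 versions to split your traffic across
```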

How to Properly Run A/B Tests

Source: Wikimedia

Aspiring PMs must also know how to successfully conduct an A/B test.

Not every A/B test is created equal, for a few reasons.

Make sure you understand:

How to Evaluate Worthwhile Metrics

Before starting your A/B test, you must choose valuable and relevant metrics.

Well, what are the usual suspects in the metric department?

Generally speaking, A/B tests experiment with one of the following:

  • Impression count
  • Click-through rate
  • Button hover time
  • Time spent on page
  • Bounce rate on the button's clickthrough link (assuming the button leads to a new webpage)
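Most of these are simple ratios over raw event counts. As a quick illustration, with made-up numbers:

```python
# Computing a few of the metrics above from raw event counts.
# All numbers are made up for illustration.
impressions, clicks = 12_000, 540
sessions, bounces = 540, 230      # sessions on the click-through page
total_seconds_on_page = 40_500    # summed across those sessions

ctr = clicks / impressions                     # click-through rate
bounce_rate = bounces / sessions               # bounce rate on the landing page
avg_time_on_page = total_seconds_on_page / sessions

print(f"CTR: {ctr:.1%}, bounce rate: {bounce_rate:.1%}, "
      f"avg time on page: {avg_time_on_page:.0f}s")
```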

Ask yourself beforehand: which of these conveys the most valuable and relevant information for the product and engineering teams?

Of course, it'll depend on what's being tested.

If you're testing a landing page with valuable information on your product, time spent on the page and bounce rate may be the wisest choices.

If you're testing a CTA, click-through rate is probably the way to go.

The impression count will be the most valuable if you compare two versions of a social media ad.

In the worst-case scenario, you spend all this time and effort conducting an A/B test, and your chosen metrics aren't that insightful. As such, the results would ultimately be useless.

How to Interpret Your Results

By and large, the software you'll use for A/B testing will come with many tools and features to help you understand and measure the results of your tests.

For example, after the conclusion of your tests, the software will show you the difference in conversion rates, or whatever other metrics you've chosen to measure, between the two versions, along with a margin of error.

More likely than not, the A/B testing software will make the results relatively easy to interpret.

You'll see the total number of test users, the measure of your chosen metric, possibly the device type of the tested users, and the particular uplift of the versions.

Deducing the winning version is as easy as comparing the conversion rates: whichever is highest is the better-performing version, provided the difference is statistically significant.

Understand your P-Value

Because A/B testing falls into the category of statistical analysis, PMs must understand the importance of the p-value and the null hypothesis.

The p-value is a number between 0 and 1 that indicates whether your test results are statistically significant rather than just a product of randomness.

This is very important, as a successful A/B test must actually demonstrate which version performs better, not which randomly performs better during the tests.

The number itself indicates the probability of seeing results at least as extreme as yours purely by chance, assuming there is no real difference between the versions.

So, for instance, if version A converts more website visitors than version B, you may think that it's because version A had a more straightforward layout, for example.

However, the p-value will tell you whether the data support that conclusion or whether version A may simply have performed better by chance.

In other words, p-values measure how consistent your results are with the null hypothesis, i.e., the assumption that there is no real difference between the versions. A p-value above .05 is conventionally taken to mean you cannot rule out chance, so the null hypothesis should not be rejected.

Read more about p-values and how to calculate them in A/B testing here.
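If the definition feels abstract, a simulation makes it concrete: shuffle the A/B labels so any real effect is destroyed, then count how often a difference as large as the observed one shows up by pure chance. This sketch reuses the hypothetical counts from the email example earlier:

```python
# Approximating a p-value by simulation: destroy any real effect by
# shuffling the group labels, then count how often a difference at least
# as large as the observed one appears by chance. Counts are hypothetical
# (200/5,000 conversions in A vs. 260/5,000 in B).
import random

n_a = 5_000
outcomes = [1] * 200 + [0] * 4_800 + [1] * 260 + [0] * 4_740  # A first, then B
total_conversions = sum(outcomes)
observed_diff = abs(200 / n_a - 260 / n_a)

random.seed(0)
trials, extreme = 2_000, 0
for _ in range(trials):
    random.shuffle(outcomes)
    conv_a = sum(outcomes[:n_a])
    diff = abs(conv_a / n_a - (total_conversions - conv_a) / n_a)
    if diff >= observed_diff:
        extreme += 1

print(f"approximate p-value: {extreme / trials:.4f}")  # close to the z-test's ~0.004
```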

Each of these aspects can determine the success and effectiveness of an A/B test. Therefore, some of the A/B testing interview questions you encounter will likely test your understanding of them.

Not only that, but UX research and experimental design can be time-consuming and expensive.

Time and money are a business's most valuable assets, neither of which can be wasted.

Therefore, your interviewer's questions will most likely be focused on whether you know how to successfully and efficiently conduct A/B tests.

A/B Test Example - From Beginning to End


Now that we've covered how to properly run an A/B test, let's look at an example from start to finish.

Let's imagine we have a banner for a software product featuring a call-to-action. Now, let's say we have two versions we'd like to test: version A, with a very minimal amount of words, and version B, with a little more information.

First of all, we must design our A/B test for these banners.

Given that the design in question features a CTA, we can figure that the most insightful metric in this case is click-through rate.

The version that pushes more people to click is the clear winner.

Next, we must decide on an adequate sample size for our experiment.

The sample size of your tests has a lot to do with the p-value in your results.

Ultimately, you need a large enough number of users to demonstrate statistical significance after your experiment. Otherwise, the results may not be that trustworthy or accurate.

Remember that the p-value is the measure of this significance. If the sample size is too small, chances are your p-value will be high, meaning your results are not statistically significant and may simply be a byproduct of randomness.

However, if the difference between the variants is large, you can be confident in your results even with a smaller sample size.

In other words, if our CTA version A converts 60% more users than version B after testing a sample of 1,000 users, we can be confident that version A performs better.

This is because a sample size of 1,000 users is adequate for a difference of that magnitude.

However, suppose version A showed only a 10% difference. In that case, we'd probably need to increase the sample size substantially to be confident in the significance of our results.

Again, the sample size necessary has everything to do with the p-value of your results.

The size must be large enough to demonstrate that your results are statistically significant, not simply random.

As we mentioned earlier, you want a p-value of .05 or less, indicating a 5% or smaller chance that the outcomes are a product of randomness.

Ultimately, you should increase the sample size and extend the run time of your A/B tests until your results demonstrate this 95% confidence level.

Determining the size of your sample is the most statistically heavy part of the A/B testing process. We understand it can be a little confusing, especially if you don't have much experience in statistics. The article below can help you determine what sample size you'd need for your A/B experiments.

Calculate Sample Size and Run Times for A/B Test | Apptimize
While the basic concept of planning an A/B test is pretty straightforward, it's important to understand how to calculate sample size and run times.
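If you just need a ballpark figure, the standard two-proportion sample-size formula is easy to code up yourself. Here's a minimal sketch, assuming a hypothetical 10% baseline conversion rate, a 12% target, 95% confidence, and 80% power:

```python
# Rough per-group sample size for a two-proportion test: detect a lift
# from a (hypothetical) 10% baseline to 12% at 95% confidence, 80% power.
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(sample_size_per_group(0.10, 0.12))  # roughly 3,800+ users per group
```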

After we've run our test on these CTA banners, we must interpret our results.

Generally speaking, the A/B software of your choice will show you the difference in measured metrics and a margin of error.

For example, let's imagine that version A converted 18% of its visitors and version B 10% with a margin of error of 2.3%.

Therefore, implementing version A as the chosen banner gives an 8-percentage-point "lift" over version B.

It should be no surprise that version A is the winner in this A/B test.
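To sanity-check a result like this, you can compute a confidence interval for the lift yourself. Here's a minimal sketch, assuming a hypothetical 1,000 visitors per version; if the interval excludes zero, the win is statistically significant:

```python
# 95% confidence interval for the lift in the banner example
# (A: 18% conversion, B: 10%), assuming a hypothetical 1,000
# visitors in each group.
from math import sqrt
from statistics import NormalDist

p_a, n_a = 0.18, 1_000
p_b, n_b = 0.10, 1_000

lift = p_a - p_b
se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
z = NormalDist().inv_cdf(0.975)  # ~1.96 for a 95% interval
low, high = lift - z * se, lift + z * se

print(f"lift: {lift:.1%} (95% CI: {low:.1%} to {high:.1%})")
# The interval excludes zero, so version A's win is statistically significant.
```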

More A/B Testing Interview Questions


Here are a few more common questions compiled from past product manager and data science interviews. Again, if you have a good handle on everything we've covered so far, you should have the tools to answer any unforeseen or off-the-cuff A/B questions.

Why shouldn't you test several variables or metrics in a single A/B test?

As the name suggests, A/B tests compare just two versions, ideally differing in a single variable. If too many variables change at once, it becomes difficult to discern why one version outperformed the other.

Remember, when multiple variables need to be tested at once, multivariate testing is the way to go.


The primary issue is not that such a test couldn't tell you which version outperforms the other.

Instead, with too many factors involved, it becomes impossible to attribute the result to any particular change.

For an A/B test to be successful and insightful, it needs to be as focused as possible: one winning metric with one variable.

Obviously, you could still measure multiple metrics simultaneously, but how would you then choose which version wins?

Which metric should you look at to determine the winner?

This is the primary issue with measuring several different metrics during an A/B test.

How do you properly form an A/B testing hypothesis?

A successful A/B testing hypothesis follows a simple yet essential formula with three crucial components:

  1. The variable you're changing.
  2. The result you expect.
  3. The rationale for why the variable produces that result.

Typically speaking, you can structure your A/B testing hypotheses in this way:

If <variable>, then <result> because of <rationale>.

So, how would this look in reality? Let's take our previous example with the CTA banners.

One version, A, has a CTA with fewer words, whereas the other, B, has more detailed information. Before your experiments, you can postulate the following hypothesis:

If the CTA contains fewer words (variable), then more visitors will convert (result) because the CTA is easier to digest and requires less reading on the customer's part (rationale).

Using this formula, you can reliably create solid and successful hypotheses for your A/B tests.

Construct three A/B tests to reduce user frustration with the Google Maps blue dot GPS icon

A detailed video answer can be found in our PM interview course, here.

Complete PM Interview Prep Course

Our product management interview course teaches you the essential skills you need to ace your PM interview, with hours of example questions, videos, and interview tips.

Start Learning

Why do some A/B tests fail to provide insights, outcomes, or value? When do A/B tests provide the most value to a business?

An A/B test can fail for many reasons.

Some common ones are choosing the wrong metrics, misinterpreting a statistically insignificant p-value, using too small a sample of users, or forming a misguided hypothesis.

A/B tests provide the most value when:

  • The sample size is large enough to produce statistical significance,
  • Relevant metrics are chosen,
  • A single variable is measured,
  • The test results in a clear winner.

Why is A/B Testing Important for Businesses?

Ultimately, A/B testing is critical for businesses to learn how to improve their operations, products, websites, and, most importantly, their bottom lines.

A/B testing provides actionable insights for companies so that they may reduce risks, improve customer engagement, convert more customers, and increase sales.

In addition, A/B tests demonstrate precisely how businesses can use their resources most efficiently, which, in turn, improves their ROI.

Given that the most successful companies are the ones that maximize their profits, A/B testing is a crucial piece of any business's overall strategy to do just that.

Consult with An Exponent Coach

Here at Exponent, we know better than anyone that preparing for a PM or data science interview can be both exhilarating and nerve-wracking.

So, to help you boost your chances, we've designed several Interview Prep Courses for Product Management, Software Engineering, Data Science, Product Marketing Management, Technical Program Management, and Product Design.

Not only that, but we also offer industry-leading interviewing coaching to help you seal the deal. Book a session with an Exponent coach to:

  • Get an insider's look from someone who's interviewed, gotten the offer, and worked at the companies you're applying to.
  • Receive an objective evaluation of where you stand as a job candidate.
  • Obtain personalized feedback and coaching to help improve and get more job offers.

We've partnered with dozens of industry insiders and career experts in product management, program management, product design, software engineering, and data science fields who can help you ace your interviews and nail your dream job.

Check out our list here and book a session today!

💬 Review more commonly asked sample PM interview questions.

📖 Read through our company-specific Product Manager interview guides.

👯‍♂️ Practice your behavioral and leadership skills with our mock interview practice tool.

👨‍🎓 Take our complete Product Management interview course.

Learn everything you need to ace your product management interviews.

Exponent is the fastest-growing tech interview prep platform. Get free interview guides, insider tips, and courses.

Create your free account