Test Questions Flashcards
From third-party study guides and Facebook's MarSci practice exam
[Image-based card — image not included] A: Marketing Mix Modeling
[Seven image-based cards omitted — their answer keys (A, D, C, D, B, C, A & C) are not recoverable without the images]
[Image-based card — image not included] A: Location
An advertiser is running a Conversion Lift test on a new media platform. In discussing how the campaign will be measured, the media platform’s team says that it will compare the conversion rates for the test group exposed to ads vs. a pre-selected control group. The platform does have the ability to run a public service announcement instead, but feels that its methodology is valid.
What concern should the analyst have with the test design?
Randomization is required for valid experimental design
Pre and post measurements will provide a more accurate assessment
There may be contamination between the test and control groups
An A/B test will provide a more accurate assessment
Randomization is required for valid experimental design
An advertiser’s primary product offering is a series of subscription boxes. The advertiser wants to increase user retention. Approximately 25% of customers who buy a three-month subscription do NOT buy another subscription the following year.
A key objective is to reduce the churn rate by 5%. A data scientist develops a model to identify users who have a high probability of churning and to create exclusive offers designed to entice these users to buy another subscription.
Which type of model should the data scientist use?
Logistic regression
Linear regression
Multinomial regression
Support vector regression
Logistic regression
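Why logistic regression fits here: it outputs a probability for a binary outcome (churn / no churn), which is exactly what the data scientist needs to rank users by churn risk. A minimal sketch on synthetic data — the feature ("months inactive"), the data, and the learning rate are all invented for illustration:

```python
import math
import random

random.seed(0)

# Synthetic training data (invented): churners tend to have been
# inactive longer. 1 = churned, 0 = retained.
X = [random.uniform(0, 12) for _ in range(200)]
y = [1 if x > 6 + random.gauss(0, 1.5) else 0 for x in X]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight w and intercept b by plain gradient descent on the log-loss.
w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    gw = gb = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(w * xi + b)
        gw += (p - yi) * xi
        gb += p - yi
    w -= lr * gw / len(X)
    b -= lr * gb / len(X)

def churn_probability(months_inactive):
    # The output is a probability in (0, 1) -- the reason a logistic
    # (not linear) model fits a binary churn outcome.
    return sigmoid(w * months_inactive + b)

print(round(churn_probability(1.0), 3), round(churn_probability(11.0), 3))
```

Users with the highest predicted probability would be the ones to receive the exclusive retention offers. A linear regression could predict values outside (0, 1), which is why it is the wrong choice for a binary outcome.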
An online shoe brand ran two Conversion Lift studies. In Test 1, the control group showed a 20% conversion rate and the test group showed a 25% conversion rate; the results were significant at the 60% confidence level. In Test 2, the control group showed a 20% conversion rate and the test group showed a 30% conversion rate; the results were significant at the 95% confidence level.
What can the analyst conclude?
Test 2 results are more reliable than Test 1
Test 1 results are more reliable than Test 2
Both tests are equally reliable
A comparison cannot be made between Test 1 and Test 2
Test 2 results are more reliable than Test 1
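Significance here reflects how unlikely the observed lift would be if there were no true effect. The question gives no sample sizes, so the sketch below assumes 150 users per group purely for illustration; under that assumption, only Test 2's larger lift clears the conventional 5% p-value bar:

```python
import math

def two_proportion_z(p_control, p_test, n_control, n_test):
    # z statistic for the difference between two conversion rates.
    pooled = (p_control * n_control + p_test * n_test) / (n_control + n_test)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_control + 1 / n_test))
    return (p_test - p_control) / se

def two_sided_p(z):
    # Normal tail probability via the error function (no SciPy needed).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n = 150  # ASSUMED group size; the question does not state sample sizes
p1 = two_sided_p(two_proportion_z(0.20, 0.25, n, n))  # Test 1: 20% vs 25%
p2 = two_sided_p(two_proportion_z(0.20, 0.30, n, n))  # Test 2: 20% vs 30%
print(f"Test 1 p = {p1:.3f}, Test 2 p = {p2:.3f}")
```

Higher confidence (95% vs. 60%) means a lower probability that the observed lift is due to chance, which is why Test 2's results are more reliable.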
A brand runs a multi-cell experiment to confirm whether its campaign is generating sales lift. The results were as follows.
- Cell A
- sales lift = 4.0%
- 90% confidence interval = (0.010,0.070)
- Cell B
- sales lift = 4.5%
- 90% confidence interval = (-0.010, 0.084)
- Difference between Cell A and Cell B
- sales lift = -0.5%
- 90% confidence interval = (-0.023, 0.001)
What is the correct interpretation of the results?
Cell A performed positively, Cell B did not perform positively, Cell A performed better than Cell B
Cell A performed positively, Cell B performed positively, Cell B performed better than Cell A
Cell A performed positively, Cell B performed positively, and it is not possible to tell which was better
Cell A performed positively, Cell B did not perform positively, and it is not possible to tell which was better
Cell A performed positively, Cell B did not perform positively, and it is not possible to tell which was better
The correct answer was “Cell A performed positively, Cell B did not perform positively, and it is not possible to tell which was better”.
A confidence interval that contains zero means we cannot rule out a zero effect: the true lift could be positive or negative. As a result, we cannot conclude that Cell B produced a positive outcome, nor that there was any real difference between Cell A and Cell B.
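The decision rule in this explanation can be written out directly (interval values taken from the question):

```python
# A lift estimate is only conclusive when its confidence interval
# lies entirely on one side of zero.

def is_conclusive(ci_low, ci_high):
    return ci_low > 0 or ci_high < 0

cells = {
    "Cell A": (0.010, 0.070),   # excludes zero -> positive lift
    "Cell B": (-0.010, 0.084),  # contains zero -> inconclusive
    "A vs B": (-0.023, 0.001),  # contains zero -> no detectable difference
}

for name, (lo, hi) in cells.items():
    print(name, "conclusive" if is_conclusive(lo, hi) else "inconclusive")
```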
Refer to the chart.
An analyst at a technology company aggregates 42 Brand Lift studies that contained favorability questions. The analyst plots each study's favorability lift against the engagement rate that its ads received. The engagement rate is the sum of likes, comments, and shares divided by the campaign's total reach.
Each dot in the chart is a separate campaign.
What is the correct interpretation from this chart regarding the correlation between the engagement rates and favorability lift?
No correlation
Weak positive correlation
Weak negative correlation
Strong positive correlation
No correlation
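The chart itself is not reproduced here, but the relationship in such a scatterplot is quantified by the Pearson correlation coefficient. A sketch with fabricated engagement/lift points chosen to show essentially zero correlation:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance divided by the product of
    # the two standard deviations. Result lies in [-1, 1].
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated example points (NOT the 42 studies from the question):
# engagement varies, but favorability lift shows no systematic pattern.
engagement = [0.01, 0.05, 0.02, 0.08, 0.03, 0.07, 0.04, 0.06]
favorability_lift = [2.1, 2.2, 1.9, 2.1, 1.8, 1.9, 2.2, 1.8]
print(round(pearson_r(engagement, favorability_lift), 3))
```

An r near zero corresponds to "no correlation"; values near ±1 would indicate strong positive or negative correlation.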
An automotive manufacturer launches a campaign to sell its newest model. The manufacturer uses a last-click attribution model with a 90-day conversion window to evaluate its media investment.
The attribution results show lower-than-expected online conversions on Facebook. The manufacturer needs to validate these results.
In order to validate the results, what should they be compared to?
The results from last-click attribution model with a conversion window of 30 days
The number of offline conversions
The results from Facebook Data-Driven Attribution
The results from Facebook Brand Lift tests that were run during the campaign
The results from Facebook Data-Driven Attribution
An ecommerce advertiser sells activewear. The advertiser needs to assess which proportion of the advertising budget should be spent on re-targeting previous buyers versus targeting consumers who visited its website but did NOT make a purchase. To make this decision, the advertiser needs to measure the proportion of sales generated by each targeting strategy to learn which strategy generates the largest proportion of additional sales.
The advertiser pauses its usual campaigns and sets up an A/B test using two ad sets. Both ad sets are optimized for purchases. The only difference is the targeting strategy used. The advertiser runs the ad sets for seven days. The A/B test results are based on attributed conversions using a last-touch attribution model to identify the winning strategy.
This measurement approach does NOT accurately measure causality.
What is causing this measurement issue?
The test should be set up to use an even credit attribution model
The test should be set up to prevent contamination to the test group
The test should run for a minimum of three weeks to collect adequate data
The test needs to run randomized test and control groups
The test needs to run randomized test and control groups
The correct answer was “The test needs to run randomized test and control groups”.
An A/B test alone does not measure causality; an RCT (lift test) with randomized test and control groups should be used to accurately measure the causal impact of each strategy.
An online fashion retailer wants to test the Video Views objective against the Brand Awareness objective. The goal is to see which objective generates better purchase intent among the target audience.
Which method should the retailer use to achieve this?
Multi-cell Conversion Lift test to test Video Views versus Brand Awareness
A/B test to measure Video Views versus Brand Awareness
Multi-cell Brand Lift test to test Purchase Intent with Brand Awareness versus Video Views
A/B test to measure estimated ad recall for each objective
Multi-cell Brand Lift test to test Purchase Intent with Brand Awareness versus Video Views
An ecommerce company wants to understand the impact of reach on its incremental ROAS results when running randomized control trial experiments with its ads.
The company gathers its historical campaign performance and creates a scatterplot comparing reach and ROAS for each campaign run in the past year. Its marketing strategy has changed frequently over that year, with adjustments to targeting, bidding strategies, and optimization metrics. The marketing team decides to increase reach as much as possible to increase incremental ROAS.
Refer to the chart.
What conclusion should the analytics team make in respect to these findings?
These results are correlative, not causal
There is not a discernible relationship between reach and ROAS
Reach should be optimized for 2.5 million unique users because that drove the highest incremental ROAS
Higher reach causes higher ROAS
These results are correlative, not causal
An advertiser is planning to run a campaign that includes TV, Facebook, and search, with the goal of increasing sales. The attribution results from that campaign will determine how they run future campaigns.
What should the advertiser do based on the attribution results?
Reallocate budget to the channel driving the highest number of conversions
Stop spend entirely on the channel with the lowest number of conversions
Allocate budget evenly across all three channels
Conduct a multi-cell Brand Lift test to validate the attribution results
Reallocate budget to the channel driving the highest number of conversions
A cruise brand advertises on Facebook, Instagram and YouTube. The brand is evaluating performance of these channels in Ads Manager and Google Analytics to determine how to allocate its increased budget for the upcoming season. The brand suspects the impact of Facebook platforms is over-represented in Ads Manager and under-represented in Google Analytics.
How should the company measure the causal contribution of the Facebook platforms accurately?
Adopt multi-touch attribution models for all platforms
Use Ads Manager results to evaluate Facebook platforms
Use Google Analytics to evaluate Facebook platforms
Run an ad-account level randomized control trial on Facebook
Run an ad-account level randomized control trial on Facebook
The correct answer was “Run an ad-account level randomized control trial on Facebook”.
Google Analytics can only show correlational measures, whereas a randomized control trial (RCT) can demonstrate causal impact.
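What an RCT buys you, in sketch form: users are randomly assigned to test and control, so the difference in conversion rates can be attributed to ad exposure. All numbers below are simulated — the 5% base conversion rate and the 2-point true ad effect are invented for illustration:

```python
import random

random.seed(42)

# Randomly assign simulated users to test and control groups.
users = list(range(100_000))
random.shuffle(users)
test, control = users[:50_000], users[50_000:]

def converted(exposed):
    # Simulated behavior: 5% base conversion rate, plus a 2-point
    # true ad effect for exposed users (both values invented).
    base = 0.05
    effect = 0.02 if exposed else 0.0
    return random.random() < base + effect

test_rate = sum(converted(True) for _ in test) / len(test)
control_rate = sum(converted(False) for _ in control) / len(control)
lift = test_rate - control_rate
print(f"test {test_rate:.3f}  control {control_rate:.3f}  lift {lift:.3f}")
```

Because assignment is randomized, the measured lift recovers the true incremental effect; attribution tools that merely count conversions touching an ad cannot separate the ad's effect from users who would have converted anyway.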
A TV advertiser needs to attract more sports fans to tune into their upcoming live events. According to a global index survey, 90% of sports fans online report that they use another device while watching TV. More than 50% of sports fans reported that their top activities were using social media and chatting with friends.
To maximize ad reach, which hypothesis should the advertiser test?
Placing ads on popular social media platforms is the most cost efficient way to generate conversions
Placing ads on top sports channels is the most cost efficient way to generate reach
Messaging apps provide incremental conversions
Social media platforms provide incremental reach to TV
Social media platforms provide incremental reach to TV
A large gaming company wants to know if its Facebook campaign is increasing purchase consideration due to ad exposure on Facebook.
What measurement solution should be used?
Facebook Conversion Lift
Facebook Brand Lift
Facebook A/B test
Facebook Analytics
Facebook Brand Lift
A financial services company is launching a month-long Facebook campaign to promote its new credit card. The audience strategy is to reach people who have previously applied for one of the company’s current credit cards but did not get approved.
The campaign has been set up with a single-cell Conversion Lift test to determine how effective the campaign is in getting people to apply. The company would like a post-campaign report that highlights the total budget spent and number of approved applications.
What should the analyst recommend for the campaign?
The schedule of the campaign should be longer than one month
A multi-cell Conversion Lift test should be used instead
The KPI should be the number of applications submitted
The KPI should be the number of people reached
The KPI should be the number of applications submitted
The correct answer was “The KPI should be the number of applications submitted”.
Since the goal of the campaign is for people to apply for the new credit card, the metric the company should focus on is the number of applications submitted.
An online food brand that ships high-quality meat has a loyal audience of people, ages 35+. The brand conducts market research and learns that younger consumers are starting to shop for food online more frequently, but are extremely health conscious. The brand is worried they will struggle to be top-of-mind among this new prospective young audience.
The brand runs a multi-cell Brand Lift test for eight weeks to compare how its current creative performs against people, ages 35+ versus people, ages 18-34. Both cells had the same budget and used the same optimization.
The results are in the image.
What action should the client take to improve performance with the younger audience?
Develop new creative
Keep serving the existing creative
Rerun the test
Increase the spend
Develop new creative