Special Topics
The Fabric Academy has tailored this best practices content to an ad agency in the pitch process. The agency could be at an early stage (e.g., it made the long list and is getting ready for a chemistry meeting with the client), a mid stage (e.g., in the midst of strategy and/or creative development) or the final stage (e.g., preparing the final presentation).
If you’re in a creative pitch for an advertising account, here are two sets of question types, depending on what you are trying to figure out:
In this type of study, the researcher is looking for diagnostic-style feedback from consumers, where the emphasis is on the work itself (and less on the brand).
Start with broad questions, then work towards the more specific:
In this approach, the researcher is less concerned with consumers’ reactions to the ads themselves, and more focused on the impact of the advertising on the brand. Bookend the study with pre/post brand perceptions.
The ultimate goal for any agency in the pitch process is to land recurring, project-based revenue. While there are a lot of variables in play (competitors, pricing, chemistry, conflicts, people), in our experience, clients usually buy a team of people whom they trust and can envision working with.
An appealing aspect of the team dynamic for clients is the agency’s openness to research. Clients look for agency talent who will take responsibility and make intelligent judgments, but who also welcome the POV of the consumer in the process.
In new business pitches, showing how the agency approaches strategic, creative and even media problem-solving using consumer input is a potent recipe for success.
This content is designed to help provide researchers with the knowledge and confidence to best leverage Fabric in all stages of the advertising pitch process, including:
Preparing for your study
Developing study content
Analyzing & presenting study results
Working with Fabric
One of the most important aspects of any market research project is setting the right research objective. Researchers often mistake the questions they want answered (which are of course important) for the research objective itself.
Establishing a research objective can be done in various ways. We recommend articulating the research objective in a single sentence. This will provide a clear focus for your team.
If you list a number of questions, neither you nor the team will know exactly what it is the study is aiming to achieve.
Below are some examples of clearly articulated research objectives:
Because qualitative market research is more of an art than a science, it’s important to think through study design—or as we call it, the research strategy.
Broadly, the study preparation logic should follow this arc: 1) Research objective; 2) Research strategy; and 3) Research tactics.
1. Research objective
Determine the research objective. Ask, what is the singular, overall objective of the study? Usually, your answer will take the form of a single sentence.
2. Research strategy
Articulate your research strategy. Broadly speaking, given the objective, what is the strategy for the study’s architecture? For example, if the study objective is to understand how a new product concept resonates, the research strategy could be “Compare the ideal to the actual.”
3. Research tactics
What are the research tactics? Given the objective and strategy, determine which specific tactics should be employed.
For example: we wanted to understand how people felt about the fact that when they are in public, chances are very good they are being recorded on video. We asked several questions about how they felt about the proliferation of video cameras in public, to tap into the left brain, where the speech center is located. Then we had them answer one question using only gestures and body language (no words) to tap into the more emotional right brain.
By way of example, imagine this scenario:
The #1 question we get asked by researchers is this: How many people should I include in my study? We’ll answer that in two ways:
What constitutes an adequate sample size has been debated extensively in the market research industry for many years, in part because quantitative sample sizes are easier to justify statistically, with measurable variance.
Often, quantitative research mindsets get applied to qualitative research because quant norms are concrete, whereas qualitative sample sizes are difficult to justify mathematically.
Studies have been published over the years on the topic of adequate sample sizes for qualitative research. Here are a few:
Broadly, the academic research suggests that a sample size in the 30–50 participants range achieves what experts call “the point of saturation” where adding another participant doesn’t add materially to the insights generated. We agree with that analysis.
Qualitative research is more art than science.
Typically, clients turn to qualitative methods to understand deeper meaning: beliefs, attitudes, perceptions, feelings, emotion and more—deep intangibles that help add up to the “why.” So while we believe that minimum sample sizes should be employed in qualitative research, getting to the “why” requires more nuance. For this we recommend taking 10 critical variables into consideration.
We have found that there are 10 critical variables to consider for determining the right sample size for a qualitative research study.
1. Where you are in the process
Earlier stage projects (e.g. exploratory or generative studies) can generally employ smaller sample sizes, because more iteration and development will be conducted as the project progresses.
2. Business impact
The higher the business impact of the research, the greater the sample size should be. For high impact research, we would likely recommend a hybrid quant/qual approach so you’re not relying solely on qualitative research.
3. Geography
One essential variable we consider is the geographic diversity of the target audience. It can be as simple as needing both domestic and international respondents, or it may be more complex. In the US, most national brands have very broad distribution, so making sure to include the coasts as well as the interior is not only good practice, it also signals to clients that you don’t favor one type of demographic audience over another.
4. Research design
How the study is designed can have a huge impact on results. Which questions to ask in what order changes how respondents answer, affecting a study’s insights and conclusions.
5. Research platform
When you leverage a qualitative market research platform, you are buying a tech stack. One important variable is how well equipped that resource is technologically to identify the right recruits, field the study in a relevant way, and interpret the results. In the case of Fabric, our proprietary AI is an industry first, employing sentiment and emotion analysis to help researchers rapidly make sense of video responses.
6. Quality of recruits
It’s very important that the people in the study be the right people, regardless of how many respondents are included in the study size. Beyond the issue of whether they technically qualify, the people included in the study need to “feel” like the right consumer.
Using Fabric’s video-centric platform is particularly useful in this regard. Seeing the target in motion, on screen, telegraphs a great deal of information, versus seeing numbers in a graph or having a faceless/voiceless respondent using text replies.
7. Analysis
Who is interpreting the data your study will generate? All insight is qualitative, so how well equipped you or your research partner are to interpret it has a tremendous influence on both sample size and study design.
8. Methodology
A variety of methodologies are available to the researcher, and each will suggest its own approach to sample size. Qualitative techniques to choose from include IDIs (in-depth interviews), focus groups, in-homes, friendship pairs, small group interviews, intercepts, observational research, ethnographies and/or digital (online, mobile).
In Fabric’s case, the methodology is unique in that it’s effectively “1-on-none”: it’s asynchronous, and there is no moderator present. Much has been written about the effects of group-think within focus groups, where an ‘alpha’ respondent will influence others; Fabric has none of that. Fabric also removes the moderator from the study, so there is no moderator bias.
Instead, the methodology and technology employed by Fabric free up respondents to behave in a more open manner. Fabric’s confessional style enables what researchers have called the “online disinhibition effect” where respondents are more open to express themselves because there is no fear of disagreement or conflict with a moderator or fellow panelists.
9. Company culture
Some organizations are more comfortable with small sample sizes, whereas others look for larger samples because distribution is wide and/or global. For example, we have worked with Nike’s Innovation Kitchen on early exploratory studies using a small sample size. When we work with Xbox across 10 markets with 10 respondents each, the sample size can easily drift to 100+.
10. Segments
Often we’ll see that a client has a number of segments to understand. In that case, determining sample size depends on whether or not you’re looking for a rollup of all segments. If you’re going for a rollup, the 30–50 number is fine.
However, if you’re seeking to understand similarities and differences amongst segments, we would recommend 15 respondents per segment. This enables your study to take advantage of our AI, which kicks in at a minimum of 15 respondents.
Conclusion: What is the right sample size for your qualitative study?
If you’re looking for a short answer on the recommended sample size, it’s 30–50. But keep an open mind to key variables that may influence higher or lower numbers.
There are two ways to bring recruits into your Fabric study: provide us with the right information to write the screener for you, or bring your own screener.
Since respondents aren’t paid to answer screeners, try to limit the number of screening questions to ten data points (including age, gender and geography).
1. Let us build the screener for you
When building your DIY study, select “Let Us Recruit”, and then on the following page define the recruits you want responses from. This may include:
2. Provide us with your screener
The content of your screener can be cut and pasted into the “Define Your Recruit” field within the “Let Us Recruit” option, or you can paste a link to a document containing your screener for our team to program.
3. Fabric will accept or reject your criteria within 24 hours at most
Once you have submitted your recruitment request through the Fabric platform, the team will review it for feasibility within 24 hours. Most requests are approved within 1–2 hours. If there are any questions or potential sticking points, we will reach out via email.
4. Once the screener is approved, finish building your study and launch
Once your screener request is approved, you will receive a notification. In your dashboard, the study status will now read “Approved – Awaiting Payment”. From there, the next step is to do a final review of your study questions.
Then proceed with launching your study. Your recruits will populate the study dashboard, with the first respondents usually coming in within 24 hours of launch.
Fabric studies include a total of up to 10 questions per respondent. Respondents have 60 seconds to answer each question.
Below are guidelines to help you think through how to ask questions on the Fabric platform to yield the richest, most captivating and emotional responses.
Start with the broadest possible context. You may want to start by asking respondents about their relationship to the culture that your product or service exists within. Then drill down into the brand, product and/or ad landscape.
Example:
How would you describe the culture of home furnishings in Madison, Wisconsin?
If you give consumers something to cling to, they will cling to it. Instead, let consumers create the story for you by asking questions that are specific to your area of interest, but that don’t lead the witness.
Example:
Show us any object in your home that defines luxury for you; explain in detail why you consider it luxury.
A great way to understand habits is to ask about how <blank> is changing for them. Be sure to be specific about the time frame, though.
Example:
Is Adidas a brand on the rise—or the decline—over the past two years? Why?
Sometimes, asking a very foundational question about a definition of something can really open up avenues for consumers. Marketers or product designers might think they know how consumers think, but hearing how they define something can be transformative.
Example:
How do you define competition within your athletic life?
Respondents will gravitate to gray areas; don’t let them. Ask them what they love or what they hate. Force them to choose A or B, and explain why. If they struggle to answer, that can be telling too. If you want them to answer on a number scale (and elaborate on the score), force them to choose 0, 5 or 10 out of 10. 3s or 7s won’t tell you much.
Example:
What do you love most about your hair? What do you hate?
One of the simplest and most often overlooked questions is “why?”. That can be about their motivation, their reward, their product use, their behavior, or it can even serve as a projective technique.
Example:
Why do you use FaceTime?
Have them bring you to the environment that makes the most sense for your mobile video survey. Beyond the actual response, you get a glimpse into their brand and product assortment.
Example:
Please show us all the audio, video and other A/V devices that are part of your home entertainment ‘ecosystem.’
Imagine we toss you a single tennis ball. Easy to catch, right? But what if we toss you three? Or five? Not so easy. Stick to one question, or respondents will focus on only one of the questions, and that one might not be the most important one for your study.
Example:
Instead of “How do you feel when you wear high heels? When do you wear flats or sandals?” zero in on a single question: “How do you feel when you wear high heels?”
The best insight comes when people talk about things that they really care about, whether it is something that they love or a secret pet peeve of theirs. Deprivation works. Creating tension can help.
Example:
How do you feel emotionally when you feed your baby something super healthy?
Your data will be much richer if you can see the respondent interact with the product on their video. Have them capture a living example of what works well and what frustrates them.
Example:
Show us your cat, and introduce them on camera to us.
Show us your favorite sports bra for racing a 10k, and tell us how it feels different from the one you typically wear to the gym.
Use language that the respondents are comfortable with, and would use if they were talking to a friend. For instance, a respondent might not know what an “asset” is.
Example:
What is the difference between online content that is sponsored versus online content that is not sponsored?
Giving respondents the freedom to share open-ended thoughts can lead to even more novel insights.
Example:
[Company Name] is listening: how can they make your buying experience better?
Put your respondents in hypothetical situations, use similes and metaphors, or ask a question that is completely “out there.” The more creative your question is, the more creative (and interesting) your responses will be.
Example:
Write a love letter to IKEA and read it on camera.
Whether it’s a new product design, the development of a new ad campaign, or the iteration of new UX or features, we often get asked about how to test different types of stimulus materials.
You can add a link to any individual question or multiple questions. Just highlight the word you want to link, then (as with Google Drive) paste in the link.
Users will click on that link and be directed to the destination. We generally recommend using Google Drive because most people are familiar with it, but links can take users to:
Researchers use Fabric to test a broad range of assets. Some examples:
The kinds of stimulus that have been tested include:
Since each respondent is served up 10 questions, there are a number of ways to leverage the Fabric platform for testing stimulus. As a general rule of thumb, if you have four different pieces of stimulus to test, here’s how the arc of the study might look:
*Note: for Q10 in the above example, it’s a good idea to include a rollup of all the stimulus to remind them of everything they have already seen. Otherwise they might have trouble recalling the first few concepts.
When testing product descriptions or positioning statements which can run longer in text form, do your best to keep the concepts highly differentiated. Present 3–5 concepts max. If there is significant overlap in the concepts and/or the statements are long, consumers will have trouble distinguishing one from the others. In that case, we recommend that your wrapup include a rollup PDF of all of the statements/concepts. The rollup will refresh the respondent’s memory after they’ve seen each individually.
To avoid having your entire sample see the stimulus in the same order, thereby biasing reactions depending on the sequence, break your study down into smaller sample groups and switch up the order.
For example, with a sample size of 15 people (n=15) and three pieces of stimulus, a suggested approach would be to structure it like this:
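Split the 15 respondents into three groups of five and rotate which stimulus each group sees first. The sketch below illustrates that kind of rotation (a minimal Python example; the stimulus labels, group size and rotation pattern are illustrative assumptions, not a prescribed Fabric scheme):

```python
# Minimal sketch: rotate stimulus order across respondent groups so that no
# single presentation sequence dominates the sample. Labels are hypothetical.
from collections import deque

stimuli = ["Concept A", "Concept B", "Concept C"]  # hypothetical stimulus labels
respondents_per_group = 5                          # n=15 split into 3 groups

def rotated_orders(items):
    """Yield one rotated presentation order per group (e.g., A-B-C, B-C-A, C-A-B)."""
    order = deque(items)
    for _ in range(len(items)):
        yield list(order)
        order.rotate(-1)  # move the first stimulus to the end for the next group

for group_number, order in enumerate(rotated_orders(stimuli), start=1):
    print(f"Group {group_number} ({respondents_per_group} respondents): {' -> '.join(order)}")
```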
As with everything in an online environment, confidentiality can be compromised. A few notes on how to protect your ideas:
By far the hardest part of the qualitative research study process is analyzing the incoming data because it’s unstructured compared to quantitative data. Researchers appreciate our platform’s unique tools, which speed up the process and give researchers a head start. Leverage our six key features and learn how our proprietary Fabric AI can help.
1. Video responses are organized by question
2. Enhanced transcript experience
3. Comments can be added to individual study responses
4. Respondents can be rated
5. Respondent names can be made anonymous
6. Unlimited tags can be created
One of the top frustrations of qualitative data analysis is the time it takes to digest the information, make sense of it, and bubble up key insights.
So we developed our own proprietary sentiment and emotion-based AI that provides three primary data analytic sources that free up researchers to spend more time developing insights that link to their brand, message, product or design.
Fabric AI includes:
The paragraph summary
The paragraph summary encapsulates the responses to any individual question in a study. To be clear: this does not summarize metadata across the entire study, only question by question.
The paragraph is written in plain English and it identifies:
Sentiment, sources of the sentiment, themes and top emotions are all hyperlinked. Each hyperlink takes you to the videos and verbatims our AI has surfaced for that topic.
Total mentions & emotional mentions
Fabric AI shows a count of total mentions, as well as mentions with emotional intent behind them (either positive or negative).
Additional features
Researchers also benefit from our AI’s ability to surface videos and verbatims with the highest degree of emotion, and there is a search field to quickly direct researchers to specific terms of interest.
Video highlight reels can be incredibly powerful tools in a debrief session. When you stand in front of a group of people and tell them what consumers said, it’s not uncommon for attendees to get defensive or dismiss insights they don’t agree with.
Here’s where your highlight reel comes in. Press “play” and let consumers say it in their own words. Not only does the room go quiet for a few minutes, it puts any naysayers on their heels.
In our years of market research experience, we’ve learned how to create videos that pack a punch.
Here’s our method:
1. Keep it short
Videos should be kept under 2–3 minutes. Anything longer won’t hold viewers’ attention in a culture dominated by TikTok-length videos. Longer videos also turn into mini documentary films, requiring too much production effort.
2. Tell a singular story
How do you go from a dataset of dozens or hundreds of individual video responses to a 2–3 minute video?
The most important part of the process is telling a singular story. This means distilling down the ideas and statements found in your survey respondents’ videos. Sum up the singular story of your video highlight reel in one definitive sentence.
Our rule of thumb: if you can’t articulate the story of the video in one sentence, it’s probably not a story. Viewers are accustomed to video highlights that tell a single, coherent story, rather than a disconnected set of facts.
Asking the video to do the work of a slideshow presentation generally fails, too. We know for a fact that trying to do PowerPoint in video format does not work.
The video highlight reel and the presentation are two different narrative forms. Your reel should have its own internal narrative that tells a singular story.
3. Bring the story to life
Now that you have a singular story sentence, bring the story to life.
As a starting point, try structuring the story using three chapters: beginning, middle and end.
As you go through the process, keep the end in mind; what is the key message you want viewers to take away when they have finished watching the story?
Within those three chapters, consider using an effective storytelling device called a pivot point. Is there a moment where you are leading viewers down a path, and the unexpected gets revealed? For example, is there a surprising attitudinal, perceptual or behavioral insight that challenges convention? A new way to see the world? We think about it as the equivalent of the needle scratching across the record on a vinyl LP.
In classic ad agency pitch theater, the arc of a story might look something like:
4. Make a paper edit
A paper edit is a written outline of your video. It provides structure, tightens your focus, and helps you keep your video the right length. Think of it as a kind of script, storyboard or shot list, but with time codes. The time codes show exactly which interview clips you plan to use.
To make your life easier, use our paper edit tracking sheet where you can enter the key quotes you want to include, and it will automatically calculate the run time length. Note that it’s based on averages, so there may be some variance between the estimated and actual video length.
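If you want to gut-check the estimate yourself, the arithmetic is simply each quote’s word count divided by an average speaking rate. Here is a minimal sketch of that calculation (the 150-words-per-minute rate and the sample quotes are assumptions for illustration, not Fabric’s actual figures):

```python
# Minimal sketch: estimate a highlight reel's run time from the word counts of
# the quotes in a paper edit. The speaking rate is an assumed average, so the
# estimate will differ somewhat from the final edited length.
WORDS_PER_MINUTE = 150  # assumed average conversational speaking rate

def estimated_seconds(quote: str) -> float:
    """Return the approximate spoken duration of one quote, in seconds."""
    return len(quote.split()) / WORDS_PER_MINUTE * 60

paper_edit = [  # hypothetical quotes pulled from respondent transcripts
    "I never really thought about how often I am on camera until you asked.",
    "Honestly, the brand feels like it is talking to my parents, not to me.",
]

total = sum(estimated_seconds(q) for q in paper_edit)
print(f"Estimated run time: {total:.0f} seconds across {len(paper_edit)} quotes")
```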
5. Edit video & post-production
With a paper edit prepared, the video editor can now download the relevant clips. Transitions, title cards, music and even B roll can be added.
Some of the more compelling video highlight reel stories we have seen or been part of:
Including individual clips in your debriefs can be powerful. But creating a 2–3 minute video highlight reel with a clear underlying story will help put your final presentation over the top. Our five steps will help get you there.
Below is a short list of cases where the Fabric platform has been used successfully in the advertising pitch process:
Researchers from across categories use the Fabric platform to develop and launch meaningful studies. We bring specific value to advertising agencies in the ad pitch process:
Discover the speed and power of Fabric’s AI and easy-to-use platform below, featuring a real study conducted by 3 PhDs at MSFT in 2021!