The Ad Pitch
The Fabric Academy has tailored this best practices content to an ad agency in the pitch process. The agency could be at the early stage (e.g., it has made the long list and is preparing for a chemistry meeting with the client), the mid stage (e.g., strategy and/or creative development is underway) or the final stage (e.g., preparing the final presentation).
If you’re in a creative pitch for an advertising account, here are two sets of question types, depending on what you are trying to figure out:
- Reactions to your concept(s)
- The impact of your concept(s) on the brand
Reactions to creative concepts
In this type of study, the researcher is looking for diagnostic-style feedback from consumers, where the emphasis is on the work itself rather than on the brand.
Start with broad questions, then work towards the more specific:
- Platform questions: “What is the overall campaign message YOU take away?”
- Key message questions: “What is the main message YOU take away from this particular concept?” Wording is important: avoid asking what they think the main message takeaway is because respondents will tend to put on their marketer hat and project what others may say or what they think the company is trying to say. You want them to be clear that you are interested in what message they personally get out of it.
- Specific element questions: These can be about tone, personality, music, characters, style, animation, endorsers, voice talent and more. Pick one specific element level per question.
Impact of creative concepts on the brand
In this approach, the researcher is less concerned with consumers’ reactions to the ads themselves, and more focused on the impact of the advertising on the brand. Bookend the study with pre/post brand perceptions.
- Start with: Baseline perception(s) of brand
- Move to: Exposure to stimulus; could be big ideas, manifestos, campaign platforms, individual executions
- End on brand impact: Resulting perception changes in the brand
The ultimate goal for any agency in the pitch process is to land recurring, project-based revenue. While there are a lot of variables in play (competitors, pricing, chemistry, conflicts, people), in our experience, clients usually buy a team of people whom they trust and can envision working with.
An appealing aspect of the team dynamic for clients is the agency’s openness to research. Clients look for agency talent who will take responsibility and make intelligent judgments, but who also welcome the POV of the consumer in the process.
In new business pitches, showing how the agency approaches strategic, creative and even media problem-solving using consumer input is a potent recipe for success.
This content is designed to help provide researchers with the knowledge and confidence to best leverage Fabric in all stages of the advertising pitch process, including:
Preparing for your study
- Setting the right research objective
- Developing the right study design
- Determining the right sample size
- Writing the recruitment screener
Developing study content
- Writing the best study questions
- Testing different kinds of creative stimulus materials
- Timing the study
Analyzing & presenting study results
- Analyzing the incoming responses, a.k.a. qualitative data analysis
- Working with our proprietary Fabric AI
- Creating a video highlight reel, including how to create a paper edit for the video
Working with Fabric
- Key benefits
- Case studies
Preparing for your study
Setting the right research objective
One of the most important aspects of any market research project is setting the right research objective. Researchers often mistake the questions they want answered (which are of course important) for the research objective itself.
Establishing a research objective can be done in various ways. We recommend articulating the research objective in one single sentence. This will provide a clear focus for your team.
If you list a number of questions, neither you nor the team will know exactly what it is the study is aiming to achieve.
Below are some examples of clearly articulated research objectives:
- Understand how the current messaging campaign resonates in different countries
- Understand if/how existing users understand our key point of differentiation
- Bring to life the muse target audience for the brand
- Understand which concept resonates most with consumers and why
- Identify drivers of churn
Determining the right study design
Because qualitative market research is more of an art than a science, it’s important to think through study design—or as we call it, the research strategy.
Study preparation arc
Broadly, the study preparation logic should follow this arc: 1) Research objective; 2) Research strategy; and 3) Research tactics.
1. Research objective
Determine the research objective. Ask, what is the singular, overall objective of the study? Usually, your answer will take the form of a single sentence.
2. Research strategy
Articulate your research strategy. Broadly speaking, given the objective, what is the strategy for the study’s architecture? For example, if the study objective is to understand how a new product concept resonates, the research strategy could be “Compare the ideal to the actual.”
3. Research tactics
What are the research tactics? Given the objective and strategy, determine which specific tactics should be employed.
For example: we wanted to understand how people felt about the fact that, when they are in public, chances are very good they are being recorded on video. We asked several questions about how they felt about the proliferation of video cameras in public, to tap into the left brain, where the speech center is located. Then we had them answer one question using only gestures and body language (no words) to tap into the more emotional right brain.
Study design example
By way of example, imagine this scenario:
- The research objective is to understand core consumer perceptions of the client’s brand.
- For the research strategy, we recommend that the study deconstruct all of the different elements that make up the core brand perceptions.
- Hence, the research tactics would involve breaking down the most important associations people might have with the brand, and focusing each question on one aspect: what they sell, who buys it, whom people associate with the brand, what imagery they connect with it, who works at the company, what the brand says about the people who wear or use it, and, if one word could describe the brand, what that word would be and why.
Determining the best sample size
The #1 question we get asked by researchers is this: How many people should I include in my study? We’ll answer that in two ways:
- What the academics say
- What our experience has taught us
The academic POV
What constitutes an adequate sample size has been debated extensively in the market research industry for many years, in part because quantitative sample sizes lend themselves to concrete statistical measures such as variance.
Often, quantitative research mindsets get applied to qualitative research because quant norms are concrete, whereas qualitative sample sizes are difficult to prove mathematically.
Studies have been published over the years on the topic of adequate sample sizes for qualitative research. Here are a few:
- Creswell, Glaser, Morse (the recommendation: 30–50 participants)
- Springer (Springer puts forth the argument that anywhere from 5–50 is adequate, but that 25–30 is considered to be the right number)
- InterQ (recommends 20–30)
Broadly, the academic research suggests that a sample size in the 30–50 participants range achieves what experts call “the point of saturation” where adding another participant doesn’t add materially to the insights generated. We agree with that analysis.
What our experience has taught us
Qualitative research is more art than science.
Typically, clients turn to qualitative methods to understand deeper meaning: beliefs, attitudes, perceptions, feelings, emotion and more—deep intangibles that help add up to the “why.” So while we believe that minimum sample sizes should be employed in qualitative research, getting to the “why” requires more nuance. For this we recommend taking 10 critical variables into consideration.
The 10 critical variables
We have found that there are 10 critical variables to consider for determining the right sample size for a qualitative research study.
1. Where you are in the process
Earlier stage projects (e.g. exploratory or generative studies) can generally employ smaller sample sizes, because more iteration and development will be conducted as the project progresses.
2. Business impact
The higher the business impact of the research, the greater the sample size should be. For high impact research, we would likely recommend a hybrid quant/qual approach so you’re not relying solely on qualitative research.
3. Geographic diversity
One essential variable we consider is the geographic diversity of the target audience. It can be as simple as needing both domestic and international respondents, or it may be more complex. In the US, most national brands have very broad distribution, so including the coasts as well as the interior is not only good practice, it also signals to clients that you don’t favor one type of demographic audience over another.
4. Research design
How the study is designed can have a huge impact on results. Which questions to ask in what order changes how respondents answer, affecting a study’s insights and conclusions.
5. Research platform
When you leverage a qualitative market research platform, you are buying a tech stack. One important variable is how well equipped that resource is technologically to identify the right recruits, field the study in a relevant way, and interpret the results. In the case of Fabric, our proprietary AI is an industry first, employing sentiment and emotion analysis to help researchers rapidly make sense of video responses.
6. Quality of recruits
It’s very important that the people in the study be the right people, regardless of how many respondents are included in the study size. Beyond the issue of whether they technically qualify, the people included in the study need to “feel” like the right consumer.
Using Fabric’s video-centric platform is particularly useful in this regard. Seeing the target in motion, on screen, telegraphs a great deal of information, versus seeing numbers in a graph or having a faceless/voiceless respondent using text replies.
7. Data interpretation
Who is interpreting the data your study will generate? All insight is qualitative, so how well equipped you or your research partner is to interpret it has a tremendous influence on both sample size and study design.
8. Research methodology
A variety of methodologies are available to the researcher, and each will suggest its own approach to sample size. Qualitative techniques to choose from include IDIs (in-depth interviews), focus groups, in-homes, friendship pairs, small group interviews, intercepts, observational research, ethnographies and/or digital (online, mobile).
In Fabric’s case, the methodology is unique in that it’s effectively “1-on-none”—meaning it’s asynchronous, and there is no moderator present. Much has been written about the effects of group-think within focus groups, where an ‘alpha’ respondent will influence others; Fabric has none of that. Fabric also removes the moderator from the study; therefore there is no moderator bias.
Instead, the methodology and technology employed by Fabric free up respondents to behave in a more open manner. Fabric’s confessional style enables what researchers have called the “online disinhibition effect” where respondents are more open to express themselves because there is no fear of disagreement or conflict with a moderator or fellow panelists.
9. Company culture
Some organizations are comfortable with small sample sizes, whereas others look for larger samples because their distribution is wide and/or global. For example, we have worked with Nike’s Innovation Kitchen on early exploratory studies using a small sample size. When we work with Xbox across 10 markets with 10 respondents each, the sample size can easily reach 100+.
10. Number of segments
Often we’ll see that a client has a number of segments to understand. In that case, determining sample size depends on whether or not you’re looking for a rollup of all segments. If you’re going for a rollup, the 30–50 number is fine.
However, if you’re seeking to understand similarities and differences amongst segments, we would recommend 15 respondents per segment. This enables your study to take advantage of our AI, which kicks in at a minimum of 15 respondents.
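The arithmetic above can be sketched in a few lines. The 30–50 rollup range and the 15-per-segment minimum come from this section; treat them as rules of thumb, not statistical guarantees, and the function name as illustrative:

```python
# Back-of-envelope sample-size check for segmented qualitative studies.
# Rules of thumb from this guide, not statistical guarantees.

ROLLUP_RANGE = (30, 50)  # single blended read across all segments
PER_SEGMENT_MIN = 15     # minimum for per-segment comparisons (and the AI)

def recommended_sample(n_segments, compare_segments):
    """Return a total headcount, or the rollup range, per the rules above."""
    if compare_segments:
        # Comparing similarities/differences: 15 respondents per segment.
        return n_segments * PER_SEGMENT_MIN
    # A rollup across segments: the standard 30–50 range applies.
    return ROLLUP_RANGE

print(recommended_sample(3, compare_segments=True))   # 45
print(recommended_sample(3, compare_segments=False))  # (30, 50)
```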
Conclusion: What is the right sample size for your qualitative study?
If you’re looking for a short answer on the recommended sample size, it’s 30–50. But keep an open mind to key variables that may influence higher or lower numbers.
Writing recruitment screeners
There are two ways to bring recruits into your Fabric study: provide us with the right information to write the screener for you, or bring your own screener.
Since respondents aren’t paid to answer screeners, try to limit the number of screening questions to ten data points (including age, gender and geography).
1. Let us build the screener for you
When building your DIY study, select “Let Us Recruit”, and then on the following page define the recruits you’re looking for responses from. This may include:
- Demographic information like age, gender, geographic location, household income and education
- Behavioral criteria like product usage, recency of purchase, competitive products owned, brand affinity, amount spent, and frequency
- Psychographic criteria such as personality questions, agreement to attitudinal statements, general preferences, or other intangibles
- Our team recommends keeping recruit criteria simple to allow for the quickest possible responses and the highest likelihood of approval from the recruiting team.
2. Provide us with your screener
The content of your screener can be cut and pasted into the “Define Your Recruit” field within the “Let Us Recruit” option, or you can paste a link to a document containing your screener for our team to program.
3. Fabric will accept/reject your criteria within 24 hours
Once you have submitted your recruitment request through the Fabric platform, the team will review it for feasibility within 24 hours. Most requests are approved within 1–2 hours. If there are any questions or potential sticking points, we will reach out via email.
4. Once the screener is approved, finish building your study and launch
Once your screener request is approved, you will receive a notification. In your dashboard, the study status will now read “Approved – Awaiting Payment”. From there, the next step is to do a final review on your study questions.
Then proceed with launching your study. Your recruits will populate the study dashboard, with the first respondents usually coming in within 24 hours of launch.
Developing study content
Writing study questions
Fabric studies include up to 10 questions per respondent. Respondents have 60 seconds to answer each question.
Below are guidelines to help you think through how to ask questions on the Fabric platform to yield the richest, most captivating and emotional responses.
Start broad, then get specific.
Start with the broadest possible context. You may want to start by asking respondents about their relationship to the culture that your product or service exists within. Then drill down into the brand, product and/or ad landscape.
How would you describe the culture of home furnishings in Madison, Wisconsin?
Ask specifically vague questions.
If you give consumers something to cling to, they will cling to it. Instead, let consumers create the story for you by asking questions that are specific to your area of interest, but that don’t lead the witness.
Show us any object in your home that defines luxury for you; explain in detail why you consider it luxury.
Ask about shifts in behavior.
A great way to understand habits is to ask about how <blank> is changing for them. Be sure to be specific about the time frame, though.
Is Adidas a brand on the rise—or the decline—over the past two years? Why?
Ask them to define something.
Sometimes, asking a very foundational question about a definition of something can really open up avenues for consumers. Marketers or product designers might think they know how consumers think, but hearing how they define something can be transformative.
How do you define competition within your athletic life?
Use polarizing questions.
Respondents will gravitate to gray areas; don’t let them. Ask them what they love or what they hate. Force them to choose A or B, and explain why. If they struggle to answer, that can be telling too. If you want them to answer a number scale (and elaborate on the score), force them to choose 0, 5 or 10 out of 10. 3’s or 7’s won’t tell you much.
What do you love most about your hair? What do you hate?
Ask “why?”.
One of the simplest and most often overlooked questions is “why?”. It can be about their motivation, their reward, their product use, their behavior, or even serve as a projective technique.
Why do you use FaceTime?
Get respondents in the relevant space.
Have them bring you to the environment that makes the most sense for your mobile video survey. Beyond the actual response, you get a glimpse into their brand and product assortment.
Please show us all the audio, video and other A/V devices that are part of your home entertainment ‘ecosystem.’
Don’t cram three questions into one.
Imagine we toss you a single tennis ball. Easy to catch, right? But what if we toss you three? Or five? Not so easy. Stick to one question; otherwise respondents will focus on only one of your questions, and it might not be the most important one for your study.
Instead of “How do you feel when you wear high heels? When do you wear flats or sandals?” zero in on a single question: “How do you feel when you wear high heels?”
Tug at the respondent’s emotions.
The best insight comes when people talk about things that they really care about, whether it is something that they love or a secret pet peeve of theirs. Deprivation works. Creating tension can help.
How do you feel emotionally when you feed your baby something super healthy?
Leverage “Show and Tell”.
Your data will be much richer if you can see the respondent interact with the product on their video. Have them capture a living example of what works well and what frustrates them.
Show us your cat, and introduce them on camera to us.
Show us your favorite sports bra for racing a 10k, and tell us how it feels different from the one you typically wear to the gym.
Use their language (not your client’s).
Use language that the respondents are comfortable with, and would use if they were talking to a friend. For instance, a respondent might not know what an “asset” is.
What is the difference between online content that is sponsored versus online content that is not sponsored?
Optional: Keep the last question open-ended.
Giving respondents the freedom to share open-ended thoughts can lead to even more novel insights.
[Company Name] is listening: how can they make your buying experience better?
Get creative.
Put your respondents in hypothetical situations, use similes and metaphors, or ask a question that is completely “out there.” The more creative your question is, the more creative (and interesting) your responses will be.
Write a love letter to IKEA and read it on camera.
Testing Creative Stimulus Materials
Whether it’s a new product design, the development of a new ad campaign, or the iteration of new UX or features, we often get asked about how to test different types of stimulus materials.
How to attach stimulus
You can add a link to any individual question or multiple questions. Just highlight the word you want to link, then (as with Google Drive) paste in the link.
Users will click on that link and be directed to the destination. We generally recommend using Google Drive because most people are familiar with it, but links can take users to:
- Videos (e.g., YouTube, Vimeo, etc.)
- Cloud storage locations (Google Drive, Dropbox, Box, etc.)
What kind of stimulus can be attached
Researchers use Fabric to test a broad range of creative assets, including:
- The platform (e.g., Just Do It, Think Different, I’m Lovin’ It)
- Key message takeaway(s)
- Tag line
- Individual elements (tone, style, personality, actors, music)
- Pitch manifesto
- Video stimulus (e.g., existing spots, animatics, scripts with audio)
- Static stimulus (e.g., print, outdoor)
- Audio stimulus (e.g., radio, podcast)
- Interactive (e.g. UI/UX, websites, apps)
The kinds of stimulus that have been tested include:
- Sketches of product ideas from a designer’s notebook
- Descriptions of a new product
- Proposed layouts
- Beta products/apps and finished products
- Tag lines
- Campaign elements
- Positioning statements
- Value propositions
Since each respondent is served up to 10 questions, there are a number of ways to leverage the Fabric platform for testing stimulus. As a general rule of thumb, if you have four different pieces of stimulus to test, here’s how the arc of the study might look:
- Q1: baseline perceptions of X
- Q2: first reactions to Stim 1
- Q3: what resonates with Stim 1
- Q4–Q9: repeat Q2 + Q3 for the other stimuli
- Q10: compare and contrast or pick fave*
*Note: for Q10 in the above example, it’s a good idea to include a rollup of all the stimulus to remind them of everything they have already seen. Otherwise they might have trouble recalling the first few concepts.
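The arc above can be sketched as a small planning helper. This is an illustrative sketch, not a Fabric feature, and the question wording is hypothetical; it simply enforces the baseline + two-questions-per-stimulus + wrap-up structure against the 10-question cap:

```python
# Sketch: lay out the question arc for a stimulus-testing study.
# Baseline (1) + two questions per stimulus + wrap-up (1), capped at 10.

MAX_QUESTIONS = 10

def build_study_arc(topic, n_stimuli):
    """Return the list of planned questions; raise if the cap is exceeded."""
    questions = [f"Q1: baseline perceptions of {topic}"]
    for i in range(1, n_stimuli + 1):
        questions.append(f"Q{len(questions) + 1}: first reactions to Stim {i}")
        questions.append(f"Q{len(questions) + 1}: what resonates with Stim {i}")
    questions.append(f"Q{len(questions) + 1}: compare all stimuli and pick a favorite")
    if len(questions) > MAX_QUESTIONS:
        raise ValueError(
            f"{len(questions)} questions exceeds the {MAX_QUESTIONS}-question cap; "
            "test fewer stimuli per study"
        )
    return questions

arc = build_study_arc("the brand", 4)  # 1 + 4*2 + 1 = 10 questions exactly
```

Note that five or more stimuli overflow the cap, which is one reason to keep concepts to a handful per study.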
Testing statements or paragraphs
When testing product descriptions or positioning statements which can run longer in text form, do your best to keep the concepts highly differentiated. Present 3–5 concepts max. If there is significant overlap in the concepts and/or the statements are long, consumers will have trouble distinguishing one from the others. In that case, we recommend that your wrapup include a rollup PDF of all of the statements/concepts. The rollup will refresh the respondent’s memory after they’ve seen each individually.
Avoiding order bias
To avoid your entire sample seeing the stimulus in the same order (which biases reactions depending on sequence), break your study into smaller cohorts and rotate the order.
For example, with a sample size of 15 people (n=15) and three pieces of stimulus, a suggested approach would be to structure it like this:
- Cohort 1 (n=5): Stimulus A,B,C
- Cohort 2 (n=5): Stimulus B,C,A
- Cohort 3 (n=5): Stimulus C,A,B
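The rotation above can be generated programmatically for any number of stimuli. A minimal sketch using cyclic rotation; the function names are illustrative and not part of the Fabric platform:

```python
# Sketch: cyclically rotate stimulus order across cohorts to avoid order bias.

def rotated_orders(stimuli):
    """One presentation order per cohort: A,B,C -> B,C,A -> C,A,B."""
    return [stimuli[i:] + stimuli[:i] for i in range(len(stimuli))]

def assign_cohorts(sample_size, stimuli):
    """Split the sample evenly across one cohort per rotated order."""
    orders = rotated_orders(stimuli)
    cohort_size = sample_size // len(orders)
    return [(f"Cohort {i + 1}", cohort_size, order)
            for i, order in enumerate(orders)]

for name, size, order in assign_cohorts(15, ["A", "B", "C"]):
    print(f"{name} (n={size}): {','.join(order)}")
# Prints:
# Cohort 1 (n=5): A,B,C
# Cohort 2 (n=5): B,C,A
# Cohort 3 (n=5): C,A,B
```

With more stimuli than cohorts you can afford, a full Latin square or randomized orders per cohort would serve the same purpose.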
Protecting confidentiality
As with everything in an online environment, confidentiality can be compromised. A few notes on how to protect your ideas:
- Our user Terms and Conditions have built-in confidentiality; but as you know, a lot of folks don’t read them all.
- Serving your concepts up without a logo or brand makes them brand-blind, which not only reduces security concerns but may also give you a purer read on relevance and resonance.
- Serving up the same concept with multiple logos on it can help head-fake consumers, and also give you a read on the influence of the brand associated with it.
- Lastly, if the risk of the idea leaking is high, we recommend you NOT use Fabric to test your concepts. You have to do the risk/reward calculus. If a 17-year-old can hack into the Pentagon, taking a screen grab of a concept is not beyond the realm of what consumers may do.
Analyzing & presenting study results
Analyzing Video Responses: Qualitative Data Analysis
By far the hardest part of the qualitative research study process is analyzing the incoming data because it’s unstructured compared to quantitative data. Researchers appreciate our platform’s unique tools, which speed up the process and give researchers a head start. Leverage our six key features and learn how our proprietary Fabric AI can help.
Six Fabric features for qualitative data analysis
1. Video responses are organized by question
- You can hyper-target your analysis by reviewing how all respondents answered any one question.
- Unlike focus groups or moderated one-on-ones, the same questions get asked of everyone in the same order, providing consistency.
- Sometimes a specific question is THE question you’re trying to understand, and the other questions lead up to it. In that case, you can skip right to a specific column of responses, to get straight to the heart of the issue.
2. Enhanced transcript experience
- If you upgraded to human + machine transcripts (from machine only), then those transcripts will populate within 24 hours of each respondent’s completion.
- Google Speech transcripts are available to all researchers almost immediately. Note that while machines are good and getting better all the time, they aren’t perfect. We make the transcript field editable so that you can correct any Google Speech errors.
- Transcripts can be downloaded to Google Sheets or Excel.
3. Comments can be added to individual study responses
- As on any social media platform, there is a comment field where collaborators can share feedback, ideas and insights. The field can also be used for note-taking.
4. Respondents can be rated
- Our rating system makes it easy to remember who the best respondents are. It’s also a way to keep track of which respondents the researcher has reviewed.
5. Respondent names can be made anonymous
- In an era of increased sensitivity to PII, our default view is to make users’ names or email addresses anonymous. However, if you bring your own recruits, the default is to show respondents’ email addresses because we presume they gave you access to email them. That field is editable if you would like to make respondent names anonymous yourself.
6. Unlimited tags can be created
- There is no limit on the number of tags that can be created in a study.
- After tags are created, researchers can sort by tags individually or in combination.
Working with our proprietary Fabric AI
One of the top frustrations of qualitative data analysis is the time it takes to digest the information, make sense of it, and bubble up key insights.
So we developed our own proprietary sentiment and emotion-based AI that provides three primary data analytic sources that free up researchers to spend more time developing insights that link to their brand, message, product or design.
Fabric AI includes:
The paragraph summary
The paragraph summary encapsulates the responses to any individual question in a study. To be clear: this does not summarize metadata across the entire study, only question by question.
The paragraph is written in plain English and it identifies:
- Whether responses were positive, neutral or negative
- Up to three sources of the sentiment
- Key themes
- Top emotions
Sentiment, sources of the sentiment, themes and top emotions are all hyperlinked. Each hyperlink surfaces the videos and verbatims that correspond to that topic.
Total mentions & emotional mentions
Fabric AI shows a count of total mentions, as well as mentions with emotional intent behind them (either positive or negative).
- Total mentions is a count of literally how many times a word was mentioned.
- Emotional mentions is a count of how many times a word was mentioned with emotional intent behind it.
- Strength bar indicator shows whether the sentiment was strong or weak.
- Mentions are also correlated to one of 8 primary emotions Fabric AI tracks.
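To make the total-versus-emotional distinction concrete, here is a toy illustration of counting both from transcripts. This is not Fabric’s proprietary AI; the cue-word heuristic below is a crude stand-in for real sentiment and emotion analysis, and the cue list is invented for the example:

```python
# Toy illustration: "total mentions" vs "emotional mentions" of a term.
# A mention counts as emotional if its sentence contains an emotion cue word.
import re

EMOTION_CUES = {"love", "hate", "amazing", "awful", "frustrating", "excited"}

def mention_counts(transcripts, term):
    """Return (total mentions, mentions in emotionally charged sentences)."""
    pattern = rf"\b{re.escape(term.lower())}\b"
    total, emotional = 0, 0
    for text in transcripts:
        for sentence in re.split(r"[.!?]+", text.lower()):
            hits = len(re.findall(pattern, sentence))
            total += hits
            words = set(re.findall(r"[a-z']+", sentence))
            if hits and words & EMOTION_CUES:
                emotional += hits
    return total, emotional

transcripts = [
    "I love the new design. The design feels premium.",
    "The design is fine, I guess.",
]
print(mention_counts(transcripts, "design"))  # (3, 1)
```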
Researchers also benefit from our AI’s ability to surface videos and verbatims with the highest degree of emotion, and there is a search field to quickly direct researchers to specific terms of interest.
Creating a Video Highlight Reel
Why use video?
Video highlight reels can be incredibly powerful tools in a debrief session. When you stand in front of a group of people and tell them what consumers said, it’s not uncommon for attendees to get defensive or to dismiss insights they don’t agree with.
Here’s where your highlight reel comes in. Press “play” and let consumers say it in their own words. Not only does the room go quiet for a few minutes, it puts any naysayers on their heels.
How to create a powerful video highlight reel
In our years of market research experience, we’ve learned how to create videos that pack a punch.
Here’s our method:
- Keep it short
- Tell a singular story
- Bring the story to life
- Make a paper edit
- Edit video & post-production
1. Keep it short
Videos should be kept under 2–3 minutes. Anything longer won’t hold viewers’ attention in a culture dominated by TikTok-length videos. Longer videos also become mini documentary films that require too much production effort.
2. Tell a singular story
How do you go from a dataset of dozens or hundreds of individual video responses to a 2–3 minute video?
The most important part of the process is telling a singular story. This means distilling down the ideas and statements found in your survey respondents’ videos. Sum up the singular story of your video highlight reel in one definitive sentence.
Our rule of thumb: if you can’t articulate the story of the video in one sentence, it’s probably not a story. Viewers are accustomed to video highlights that tell a single, coherent story, rather than a disconnected set of facts.
Asking the video to do the work of a slideshow presentation generally fails, too. We know for a fact that trying to do PowerPoint in video format does not work.
The video highlight reel and the presentation are two different narrative forms. Your reel should have its own internal narrative that tells a singular story.
3. Bring the story to life
Now that you have a singular story sentence, bring the story to life.
As a starting point, try structuring the story using three chapters: beginning, middle and end.
As you go through the process, keep the end in mind; what is the key message you want viewers to take away when they have finished watching the story?
Within those three chapters, consider using an effective storytelling device called a pivot point. Is there a moment where you are leading viewers down a path, and the unexpected gets revealed? For example, is there a surprising attitudinal, perceptual or behavioral insight that challenges convention? A new way to see the world? We think about it as the equivalent of the needle scratching across the record on a vinyl LP.
In classic ad agency pitch theater, the arc of a story might look something like:
- Things were great, everybody loved us
- Then something unexpected or bad happened and we lost them
- But here’s what they love about us or we are overlooking that could bring them back
4. Make a paper edit
A paper edit is a written outline of your video. It provides structure, tightens your focus, and helps you keep your video the right length. Think of it as a kind of script, storyboard or shot list, but with time codes. The time codes show exactly which interview clips you plan to use.
To make your life easier, use our paper edit tracking sheet where you can enter the key quotes you want to include, and it will automatically calculate the run time length. Note that it’s based on averages, so there may be some variance between the estimated and actual video length.
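The run-time math behind a paper edit can be approximated as follows. This sketch assumes an average speaking rate of roughly 150 words per minute, which is an assumption for illustration; the actual tracking sheet uses its own averages, so expect some variance from the final cut:

```python
# Sketch: estimate a highlight reel's run time from the quotes in a paper edit.
# Assumes ~150 spoken words per minute (an average; real clips vary).

WORDS_PER_MINUTE = 150

def estimate_runtime_seconds(quotes):
    """Estimate total run time, in seconds, for a list of quote strings."""
    total_words = sum(len(q.split()) for q in quotes)
    return round(total_words / WORDS_PER_MINUTE * 60)

quotes = [
    "I would give anything for a tenth of a second improvement.",
    "Bread connects me to my grandmother's kitchen.",
]
print(estimate_runtime_seconds(quotes))  # 18 words -> about 7 seconds
```

Running totals like this against the 2–3 minute target tells you early whether the paper edit needs trimming.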
5. Edit video & post-production
With a paper edit prepared, the video editor can now download the relevant clips. Transitions, title cards, music and even B roll can be added.
Some of the more compelling video highlight reel stories we have seen or been part of:
- For a study about kids attending a football combine, for a major footwear brand working on a new shoe: These athletes would give their eye teeth for one tenth of a second improvement.
- In a study for a major national fast casual restaurant: People’s connection to bread goes way beyond taste and texture to near-spiritual associations with ethnicity, ancestral origins and even religion.
- For a study on the redesign of an ultrasound machine: Despite the anxiety physicians experience as part of their jobs, they have personal superhero moments using ultrasound that provide a massive surge of confidence.
Including individual clips in your debriefs can be powerful. But creating a 2–3 minute video highlight reel with a clear underlying story will help put your final presentation over the top. Our five steps will help get you there.
Working with Fabric
Below is a short list of cases where the Fabric platform has been used successfully in the advertising pitch process:
- Early chemistry check-in when clients are short-listing finalists (Deutsch/Green Giant)
- Fueling collaborative strategy development sessions (Ogilvy/IKEA)
- Fleshing out motivators for key target demographics (McGarryBowen)
- Accessing diverse national and international audiences (72andSunny/Instagram)
- Assessing equities of existing campaigns (Wieden + Kennedy/TurboTax)
- Testing creative work in development (BBDO)
- Demonstrating the agency’s initiative (North)
Researchers from across categories use the Fabric platform to develop and launch meaningful studies. We bring specific value to advertising agencies in the ad pitch process:
- The Fabric process matches the frenetic pace of new business pitches; studies can be conducted rapidly.
- After a study is conducted on Fabric, the actual voice—and face—of the consumer will be embedded into the strategic and creative development process and client conversations.
- Stimulus in the form of video responses and quotes helps engage clients in key strategic and creative alternatives.
- The agency’s willingness to go the extra mile on its own initiative is a strong signal to the client that the agency truly values that client.
- The client sees clearly that consumer insights are important to the agency.
- Video highlight reels become fantastic meeting theater in final presentations, helping underpin a strategy and/or specific campaign.