How W+K used Fabric to understand sports fans, quickly

Wieden+Kennedy Opening Day ad

Problem

W+K was invited to pitch MLB in part for its longtime credentials working with Nike, in part for its past experience with ESPN, and in part for its ability to leverage brands in a way that imbues them with meaningful cultural connections.

Like most pitch situations, the agency needed quick-turn qual with a limited budget:

  • A national sample
  • Gen Z – including a subset of Latinx and Spanish-speaking respondents
  • Sports fans (who play and watch sports generally) whom the League was trying to bring into the fold
  • Baseball fans (who play and watch baseball regularly) who would provide the core baseball audience POV

 

The agency had some initial strategic themes they wanted feedback on, with the goal being:

  • To understand how the themes resonate with consumers
  • To see if any had an adverse impact
  • To see if any new, unexpected themes surfaced

Solution

The account planner – Anthony Holton – turned to Fabric’s mobile video ethnography platform to get the job done in days rather than the weeks traditional qual takes, and leveraged it in a number of ways:

 

Pinpoint-accurate recruitment: The platform can recruit respondents from its proprietary database, and to make things simple, all he had to do was type the specs into an open field; Fabric created the screener, then added qualified participants to his study

 

Emotional AI automation: He opted for a sample size of n=15 because that’s the minimum threshold for Fabric AI to serve as an automated research assistant (more on that below)

 

SaaS Platform: Anthony crafted his own study questions (up to 10 Qs/respondent) and typed them into the study builder

 

Simple stimulus uploading: He added stimuli (links to PDFs…as simple as adding links to a Google Doc) to a few of the study questions to get participant feedback on the strategic territories being explored

 

Intuitive Online Dashboard: As respondents completed the study, he used the study dashboard to review incoming responses:

 

1. Each response is a self-recorded video up to 60 seconds long

 

2. Each response comes with transcripts

 

3. Fabric AI, which is optimized for sentiment and emotion, offers a number of different ways to parse the data at the press of a button:

  • It counts Mentions and Mentions with emotional intent, and identifies which Mentions align with the 8 primary emotions Fabric AI tracks
  • It identifies themes and patterns to drill into
  • Using a combination of voice intonation and the transcripts, it generates a view of how strong each of the 8 primary emotions is, per question
  • It serves up the most emotionally engaged video responses – with a toggle to view verbatims instead

 

4. Each video can be shared (which he did with fellow remote team members to get their comments), tagged, rated and commented on

 

5. The agency also put together a short highlight reel to bring the insights to life, adding title cards, B-roll and music, ultimately bringing additional theater to the final pitch presentation

 

Results

 

W+K was awarded the MLB business, helping it not only drive new top-line revenue but also add a prestigious and iconic brand to the agency roster.

 

The launch campaign for Opening Day used the newly created tagline “Baseball is something else,” a creative articulation of one of the strategic themes tested on Fabric.

 

The mass media coverage of the campaign and the social media buzz helped elevate MLB’s place in the cultural conversation, and Opening Day shattered the previous one-day viewership record by a whopping 42%!

 

Looking back on the pitch, Anthony says:

“Fabric has become a go-to platform for our pitches now.”

Eight Principles for Storytelling in Innovation with Lisa Shufro

In a conversation with Fabric CEO Tom Bassett, Lisa Shufro (Chief Storyteller for What Matters) shares eight of her principles for storytelling, based on her work with John Doerr’s ongoing Measure What Matters movement and her experience curating conference speakers for innovation forums around the world. She is the former Managing Editor of TEDMED.

#1: Identify the human problem—not the process problem.

Shufro’s title is no accident. “What is significant about the term Chief Storyteller is the emphasis on the importance of the human connection,” she explains. Measure What Matters is more than instructions on how to use Objectives and Key Results, and so are the stories on whatmatters.com. “I ask what human problems leadership is solving for, rather than what processes keep them all feeling like they’re checking all the boxes. I think that’s a different lens.”

#2: Most innovation stories follow one of three archetypes:

Discovery
“In the innovation space—where I spend most of my curatorial time—I would say there are usually stories of discovery or ongoing inquiry. For example, the discovery of coffee: Why did it take 250 years for Sweden not to arrest Swedes who drink coffee? Or what’s going on with quantum computing and how we got there? That’s another ongoing inquiry/discovery kind of story.”

Challenge

“Florence Nightingale, or the discovery of germs, or turning the telescope around and making it the microscope….that’s challenging a widely held belief. Being among the first to say, ‘Hey, I don’t think that we catch cholera because people smell bad. There’s probably a thing called germs.’ That is equivalent to ‘Hey, the earth is not the center of the universe.’ We did nasty things to people who said that; that’s a challenging a commonly held belief story.”

Call to action

“The call to action can kind of take one of two forms: ‘Hey, come with me, there’s good fortune this way.’ Or there’s, ‘Hey, everybody, we’re heading in a scary direction. Get it together, folks, we’ve got to go over here.’”

The clearest talks tend to focus on only one of these archetypes.

#3: Determine the scale that creates the most relevance.

“The thing that varies the most in my opinion is something I call scale. What a lot of people miss is: when I’m giving you data, it’s at human scale or it’s at business scale or it’s at societal scale or national scale. And then I need to come up with a story that brings it to a scale that’s relevant to the listener.

“So if it’s a story about Fatoumata, the farmer in Mali who needs access to seed and tractors in order to go from subsistence farming to a small agri-food business, that’s an individual-level story. But maybe what the organization that I’m working with wants to do is change systems, and create entirely new agrarian markets. So the story of Fatoumata doesn’t reinforce their desire to change systems.”

#4: Establish shared context with audiences.

“A story works if you either establish or reveal shared context. If the story lands, you were successful in building or revealing relevance. 

“So if you tell the wrong audience the wrong joke…let me give you an example of a joke I’ve always liked: ‘How do you eat an elephant? One bite at a time.’ Now, I think this is cute, but I once told it in Africa and their response was, ‘We don’t eat elephants.’ And so the joke failed because it wasn’t relevant to them, there was no shared context.” Stories work the same way. Build or leverage shared context in your stories. 

#5: Understand the relationship delta.

“Where we see both OKRs (Objectives and Key Results) fail to achieve their full potential in an organization—or stories fail to achieve their full potential—is that they fail to elucidate a very important relationship between where we are right now and where we’d like to be.”

#6: Diagnose the hardest part first.

“I was trained as a musician, and I learned very quickly that in order to get onstage, you don’t have the luxury of rehearsing from the beginning to the end of the piece. You have to figure out the hardest part of the piece first. And so you often practice the piece out of order. Diagnosing where to go first is something I’ve been practicing my whole life.”

#7: Work out the story rhythm.

“I call the pattern of tensions ‘rhythm.’ Was it bad, but it all turned out okay? Or was it all great, and then it fell apart? I use the rhythm of tensions to determine which specific example we’re going to use. So a speaker who’s speaking to a general audience—versus a bunch of machine learning technologists—may use a different pattern of tensions to convey the same idea.”

#8: OKRs (Objectives and Key Results) are about transformation, not activities.

“The activities are not what makes a good story, and are not what makes a good OKR. The activities are the things that result in internal transformation or external transformation.”
 

Interview responses may be lightly edited for clarity. 

Using Fabric AI + ChatGPT to Explore Shifting Attitudes Toward AI

Full article and case study report coming soon. In the meantime, feel free to check out the brief summary, AI walkthrough videos, and raw response data below!

Click here to view the “Shifting Attitudes Toward AI” case study, with Fabric AI 2.0 + ChatGPT analysis, on the Fabric.is researcher platform. To see the AI output, just click the “View Fabric AI” button beneath any question that sparks your interest!

You can view the video below for an introduction to Fabric AI, the primary technology featured in this case study!

For this case study, we set out to take a close look at where general perceptions of AI currently stand, how they may have shifted during the recent generative AI revolution, and where people see AI going in the near future. Toward this goal, we employed ChatGPT to write and edit a number of the study questions, then used our proprietary Fabric AI alongside new integrations of OpenAI’s ChatGPT API to spearhead a quick-hit analysis of a relatively large qualitative video research study that would normally require a significant number of hours to code and analyze.

The methodology was the Fabric DIY “Let Us Recruit” pathway: we built the study on Fabric and submitted a request for the recruit spec (two age segments of 18–29 and 40–59, two occupational segments of tech vs. non-tech industries, plus awareness of and a relatively strong opinion on ChatGPT / generative AI), and our on-platform recruiting team ran screeners to recruit from the Fabric / mindswarms respondent database. The N=60 respondent study closed in just 3 days, with strong response quality and insights.


The full findings of this case study will be published soon as a complete article, but in the meantime you can view the video snippet below for a walkthrough of using the AI to pull insight overviews from a couple of the more AI-central questions in this case study!

Quality Research, Quickly: Conducting In-Depth Studies Efficiently with Fabric

Fabric Customer Testimonial: Michael Cox

Problem

When Tombras set out to conduct research for a prominent golf brand, they ran into a problem. Tombras needed both in-depth research to probe for insights from particular demographics AND they needed a quick turnaround for the research.

“A lot of times, we get caught with having research options that are either quick but don’t exactly give us the depth and fidelity that we need, or they’re quality research options and they give us great insight, but they take a lot longer,” said Group Strategy Director at The Tombras Group, Michael Cox.

Luckily, Mike wouldn’t have to compromise on speed or quality, thanks to Fabric’s quick-strike video ethnography platform.

Fabric: The Sweet Spot of Speed and Quality

For the study, Mike needed to glean consumer feedback on his client’s brand from a mix of experienced and new golfers. 

“We’re working under very short, very compressed timelines. And even though we’re under those time pressures, it’s still really important for us to get really solid consumer opinion so that we can develop great insights for our campaigns,” said Mike. 

With Fabric, researchers have access to a massive database of pre-vetted respondents representing a plethora of demographics. 

Building Questions with Fabric Study Builder

To get truly in-depth responses from his study participants, Mike took the opportunity to get creative with his golf study. The Fabric platform allows users to take advantage of its video capabilities to ask unique questions that call for more than a brief dialogue: for example, prompting participants to show key items to the camera, or asking them to react to stimuli.

In his study, Mike asked his participants a few “show-and-tell” questions, such as prompting participants to show not only their favorite golf equipment, but also the equipment they were ready to replace.

Result

In the end, Mike was able to collect the in-depth data he needed in a timely manner to inform his creative strategy for his golf brand client.

Mike adds: “Fabric made getting access to consumers and getting their opinions in a really quick way really efficient for us. So that has been really helpful for our workstream.”

Fabric Recruits Unique Respondents at Lightning Speed

Fabric Customer Testimonial: Gwen Sullivan

Problem

Innovation consulting firm sparks+sullivan set out to conduct market research on consumers fitting a highly specific set of demographic, psychographic, and category-related criteria, to inform the development of a new brand strategy for a well-known American manufacturing client.

But there was one problem: Gwen Sullivan had to fill the unicorn recruit AND collect all the data in one week.

Enter Fabric’s do-it-yourself video ethnography platform, which includes the feature of having the platform recruit for her.

Fabric: A Simple Solution for Research Recruitment

The user experience for researchers recruiting on the Fabric platform is radically simple. 

To recruit for her study, all Gwen had to do was enter her list of requirements in an open text field, and submit it to the Fabric team. Her desired recruit spec was approved within one hour because Fabric has a proprietary database of over 300,000 consumers globally.

According to Gwen, “The real magic of the tool is in Fabric’s respondent database….they could find the exact types of people I was hoping to have conversations with. And did at a velocity that matched the demands of our project timeline.”

From there, Gwen’s recruits appeared in her study dashboard, having each self-recorded 10 video responses to the questions she had entered herself in the study builder. 

As Gwen puts it, “I can enter in who I want to talk to, the questions that I want answered, go to sleep, wake up, next morning I get to go to my dashboard and see it populated with all of these people and their responses.”

Researchers also have the option to bring in their own recruits if they have a pre-existing list of participants they would like to use in their study. They can be uploaded manually, or via CSV file.

 

Cost

The recruitment cost of using our Fabric respondent database begins at $250 per person (including the recruitment fee of $150, platform fee of $50, and incentive of $50). With this comes access to the Fabric study dashboard which includes:

    • A 10-question grid of 60-second video responses per participant
    • Transcripts (machine, and/or premium human transcripts available on demand)
    • The ability to download respondent videos
    • Comment field on each video
    • Tagging to help code themes
    • Sharing capabilities with customizable levels of permission

With access to the Fabric study dashboard, Gwen was able to “dig into specific clips and videos and really get to know people, get to understand their responses in a way that’s so immersive.”
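As a back-of-the-envelope check, the per-respondent pricing above reduces to simple arithmetic. The sketch below is illustrative only (not an official Fabric calculator), and the constant names are ours:

```python
# Illustrative study-cost sketch based on the pricing described above:
# $150 recruitment + $50 platform + $50 incentive = $250 per respondent.
RECRUITMENT_FEE = 150  # per respondent, in USD
PLATFORM_FEE = 50      # per respondent, in USD
INCENTIVE = 50         # per respondent, in USD

def study_cost(n_respondents: int) -> int:
    """Total recruitment cost for a study of n respondents."""
    per_person = RECRUITMENT_FEE + PLATFORM_FEE + INCENTIVE  # $250
    return n_respondents * per_person

print(study_cost(15))  # a 15-person study -> 3750
```

So a 15-respondent study (the minimum for Fabric AI, per the case studies above) starts around $3,750 in recruitment costs before any premium add-ons.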

Result

In the end, sparks+sullivan was able to meet its highly specific recruitment specs and was able to make a recommendation to the client on the key strategic issue at hand, all within the compressed time frame.

Gwen’s final report included sparks+sullivan’s recommendations, supported by verbatims, images of the respondents, and short video clips to bring the findings to life. According to Gwen, “I really couldn’t ask for a smoother experience working with Fabric.” 

The Ad Pitch

Special Topics

7 min read

Purpose

The Fabric Academy has tailored this best practices content to an ad agency in the pitch process. It could be the agency is in the early stage (e.g., made the long list, getting ready for a chemistry meeting with the client), mid stage (e.g., in the midst of strategy and/or creative development) or in the final stage (e.g., preparing the final presentation).

TL;DR

If you’re in a creative pitch for an advertising account, here are two sets of question types, depending on what you are trying to figure out:

    1. Reactions to your concept(s)
    2. The impact of your concept(s) on the brand

Reactions to creative concepts

In this type of study, the researcher is looking for diagnostic-style feedback from consumers, where the emphasis is on the work itself (and less on the brand).

Start with broad questions, then work towards the more specific:

    • Platform questions: “What is the overall campaign message YOU take away?”
    • Key message questions: “What is the main message YOU take away from this particular concept?” Wording is important: avoid asking what they think the main message takeaway is because respondents will tend to put on their marketer hat and project what others may say or what they think the company is trying to say. You want them to be clear that you are interested in what message they personally get out of it.
    • Specific element questions: These can be about tone, personality, music, characters, style, animation, endorsers, voice talent and more. Pick one specific element level per question.

Impact of creative concepts on the brand

In this approach, the researcher is less concerned with consumers’ reactions to the ads themselves, and more focused on the impact of the advertising on the brand. Bookend the study with pre/post brand perceptions.

    • Start with: Baseline perception(s) of brand
    • Move to: Exposure to stimulus; could be big ideas, manifestos, campaign platforms, individual executions
    • End on brand impact: Resulting perception changes in the brand
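The bookend structure above can be sketched as a simple question plan. The question wording below is hypothetical, purely to illustrate the pre/exposure/post sequence:

```python
# Hypothetical pre/post "bookend" study plan; question wording is
# illustrative, not Fabric's own.
study_plan = [
    # 1) Baseline perception of the brand (pre)
    ("baseline", "How would you describe Brand X in one word, and why?"),
    # 2) Exposure to stimulus: big idea, manifesto, platform, or execution
    ("exposure", "Watch the attached concept. What main message do YOU take away?"),
    # 3) Resulting brand impact (post)
    ("impact", "Now that you've seen the concept, has your impression of Brand X changed? How?"),
]
for step, question in study_plan:
    print(f"{step}: {question}")
```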

Goal

The ultimate goal for any agency in the pitch process is to land recurring, project-based revenue. While there are a lot of variables in play (competitors, pricing, chemistry, conflicts, people), in our experience, clients usually buy a team of people whom they trust and can envision working with.

An appealing aspect of the team dynamic for clients is the agency’s openness to research. Clients look for agency talent who will take responsibility and make intelligent judgments, but who also welcome the POV of the consumer in the process.

In new business pitches, showing how the agency approaches strategic, creative and even media problem-solving using consumer input is a potent recipe for success.

Subjects covered

This content is designed to help provide researchers with the knowledge and confidence to best leverage Fabric in all stages of the advertising pitch process, including:

Preparing for your study

    • Setting the right research objective
    • Developing the right study design
    • Determining the right sample size
    • Writing the recruitment screener

Developing study content

    • Writing the best study questions
    • Testing different kinds of creative stimulus materials
    • Timing the study

Analyzing & presenting study results

    • Analyzing the incoming responses, a.k.a. qualitative data analysis
    • Working with our proprietary Fabric AI
    • Creating a video highlight reel, including how to create a paper edit for the video

Working with Fabric

    • Key benefits
    • Case studies

 

Preparing for your study

Setting the right research objective

One of the most important aspects of any market research project is setting the right research objective. Researchers often mistake the questions they want answered (which are of course important) for the research objective itself.

Establishing a research objective can be done in various ways. We recommend articulating the research objective in a single sentence. This will provide a clear focus for your team.

If you list a number of questions, neither you nor the team will know exactly what it is the study is aiming to achieve.

Below are some examples of clearly articulated research objectives:

    • Understand how the current messaging campaign resonates in different countries
    • Understand if/how existing users understand our key point of differentiation
    • Bring to life the muse target audience for the brand 
    • Understand which concept resonates most with consumers and why
    • Identify drivers of churn

Determining the right study design

Because qualitative market research is more of an art than a science, it’s important to think through study design—or as we call it, the research strategy.

Study preparation arc

Broadly, the study preparation logic should follow this arc: 1) Research objective; 2) Research strategy; and 3) Research tactics.

1. Research objective
Determine the research objective. Ask, what is the singular, overall objective of the study? Usually, your answer will take the form of a single sentence.

2. Research strategy
Articulate your research strategy. Broadly speaking, given the objective, what is the strategy for the study’s architecture? For example, if the study objective is to understand how a new product concept resonates, the research strategy could be “Compare the ideal to the actual.”

3. Research tactics
What are the research tactics? Given the objective and strategy, determine which specific tactics should be employed. 

For example: we wanted to understand how people felt about the fact that when they are in public, chances are very good they are being recorded on video. We asked several questions about how they felt about the proliferation of video cameras in public, tapping into the left brain, where the speech center is located. Then we had them answer one question using only gestures and body language, no words, to tap into the more emotional right brain.

Study design example

By way of example, imagine this scenario:

    • The research objective is to understand core consumer perceptions of the client’s brand.
    • For the research strategy, we recommend that the study deconstruct all of the different elements that make up the core brand perceptions.
    • Hence, the research tactics would involve breaking down the most important associations people might have with the brand, focusing each question on one aspect: what they sell, who they think buys it, who they associate with the brand, what imagery they connect with it, who works at the company, what the brand says about the people who wear or use it, and, if one word could describe the brand, what that word would be and why.

Determining the best sample size

The #1 question we get asked by researchers is this: How many people should I include in my study? We’ll answer that in two ways:

    • What the academics say
    • What our experience has taught us

The academic POV

What constitutes an adequate sample size has been debated extensively in the market research industry for many years, in part because quantitative sample sizes, including their variance, are statistically easier to measure.

Often, quantitative research mindsets get applied to qualitative research because quant norms are concrete, whereas qualitative sample sizes are difficult to prove mathematically.

Studies have been published over the years on the topic of adequate sample sizes for qualitative research. Here are a few:

    • Creswell, Glaser, Morse (the recommendation: 30–50 participants)
    • Springer (Springer puts forth the argument that anywhere from 5–50 is adequate, but that 25–30 is considered to be the right number)
    • InterQ (recommends 20–30)

Broadly, the academic research suggests that a sample size in the 30–50 participants range achieves what experts call “the point of saturation” where adding another participant doesn’t add materially to the insights generated. We agree with that analysis.

What our experience has taught us

Qualitative research is more art than science. 

Typically, clients turn to qualitative methods to understand deeper meaning: beliefs, attitudes, perceptions, feelings, emotion and more—deep intangibles that help add up to the “why.” So while we believe that minimum sample sizes should be employed in qualitative research, getting to the “why” requires more nuance. For this we recommend taking 10 critical variables into consideration.

The 10 critical variables

We have found that there are 10 critical variables to consider for determining the right sample size for a qualitative research study.

1. Where you are in the process
Earlier stage projects (e.g. exploratory or generative studies) can generally employ smaller sample sizes, because more iteration and development will be conducted as the project progresses.

2. Business impact
The higher the business impact of the research, the greater the sample size should be. For high impact research, we would likely recommend a hybrid quant/qual approach so you’re not relying solely on qualitative research.

3. Geography
One essential variable we consider is the geographic diversity of the target audience. It can be as easy as needing domestic and international respondents, or it may be more complex. In the US, most national brands have very broad distribution, so making sure to include the coasts as well as the interior is not only great practice, it’ll signal to clients that you don’t favor one type of demographic audience over another.

4. Research design
How the study is designed can have a huge impact on results. Which questions to ask in what order changes how respondents answer, affecting a study’s insights and conclusions. 

5. Research platform
When you leverage a qualitative market research platform, you are buying a tech stack. One important variable is how well equipped that resource is technologically to identify the right recruits, field the study in a relevant way, and interpret the results. In the case of Fabric, our proprietary AI is an industry first, employing sentiment and emotion analysis to help researchers rapidly make sense of video responses.

6. Quality of recruits
It’s very important that the people in the study be the right people, regardless of how many respondents are included in the study size. Beyond the issue of whether they technically qualify, the people included in the study need to “feel” like the right consumer. 

Using Fabric’s video-centric platform is particularly useful in this regard. Seeing the target in motion, on screen, telegraphs a great deal of information, versus seeing numbers in a graph or having a faceless/voiceless respondent using text replies.

7. Analysis
Who is interpreting the data your study will generate? All insight is qualitative, so considering how equipped you or your research partner is in the process has a tremendous influence on both sample size and study design.

8. Methodology
A variety of methodologies are available to the researcher, and each will suggest its own approach to sample size. Qualitative techniques to choose from include IDIs (in-depth interviews), focus groups, in-homes, friendship pairs, small group interviews, intercepts, observational research, ethnographies and/or digital (online, mobile). 

In Fabric’s case, the methodology is unique in that it’s effectively “1-on-none”—meaning it’s asynchronous, and there is no moderator present. Much has been written about the effects of group-think within focus groups, where an ‘alpha’ respondent will influence others; Fabric has none of that. Fabric also removes the moderator from the study; therefore there is no moderator bias. 

Instead, the methodology and technology employed by Fabric free up respondents to behave in a more open manner. Fabric’s confessional style enables what researchers have called the “online disinhibition effect” where respondents are more open to express themselves because there is no fear of disagreement or conflict with a moderator or fellow panelists.

9. Company culture
Some organizations are more comfortable with small sample sizes, whereas others look for larger samples because distribution is wide and/or global. For example, we have worked with Nike’s Innovation Kitchen on early exploratory studies using a small sample size. When we work with Xbox, with 10 markets of 10 respondents each, the sample size can easily drift to 100+.

10. Segments
Often we’ll see that a client has a number of segments to understand. In that case, determining sample size depends on whether or not you’re looking for a rollup of all segments. If you’re going for a rollup, the 30–50 number is fine. 

However, if you’re seeking to understand similarities and differences amongst segments, we would recommend 15 respondents per segment. This enables your study to take advantage of our AI, which kicks in at a minimum of 15 respondents.

Conclusion: What is the right sample size for your qualitative study?
If you’re looking for a short answer on the recommended sample size, it’s 30–50. But keep an open mind to key variables that may influence higher or lower numbers.
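As a rough, illustrative heuristic (not an official Fabric formula), the rules of thumb above can be encoded in a few lines: 15 respondents per segment when comparing segments, otherwise the low end of the 30–50 rollup range, and never below the 15-respondent minimum at which Fabric AI kicks in:

```python
# Illustrative sample-size heuristic based on the rules of thumb above.
# Function and constant names are ours, not Fabric's.
AI_MINIMUM = 15        # Fabric AI kicks in at 15 respondents
PER_SEGMENT = 15       # recommended when comparing segments
ROLLUP_LOW, ROLLUP_HIGH = 30, 50  # typical rollup range

def recommended_sample(segments: int = 1, compare_segments: bool = False) -> int:
    """Suggest a starting sample size for a qualitative study."""
    if compare_segments and segments > 1:
        # 15 per segment so each segment clears the AI minimum
        return segments * PER_SEGMENT
    # Single rollup: start at the low end of the 30-50 range
    return max(ROLLUP_LOW, AI_MINIMUM)

print(recommended_sample())                                # -> 30
print(recommended_sample(segments=4, compare_segments=True))  # -> 60
```

Treat the output as a starting point; the 10 critical variables above may push the number higher or lower.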

Writing recruitment screeners

There are two ways to bring recruits into your Fabric study: provide us with the right information to write the screener for you, or bring your own screener.

Since respondents aren’t paid to answer screeners, try to limit the number of screening questions to ten data points (including age, gender and geography).

1. Let us build the screener for you
When building your DIY study, select “Let Us Recruit”, and then on the following page define the recruits you’re looking for responses from. This may include:

    • Demographic information like age, gender, geographic location, household income and education
    • Behavioral criteria like product usage, recency of purchase, competitive products owned, brand affinity, amount spent, and frequency
    • Psychographic criteria such as personality questions, agreement to attitudinal statements, general preferences, or other intangibles

Our team recommends keeping recruit criteria simple to allow for the quickest possible responses and the highest likelihood of approval from the recruiting team.

2. Provide us with your screener
The content of your screener can be cut and pasted into the “Define Your Recruit” field within the “Let Us Recruit” option, or you can paste a link to a document containing your screener for our team to program. 

3. Fabric will accept/reject your criteria within 24 hours at most
Once you have submitted your recruitment request through the Fabric platform, the team will review it for feasibility within 24 hours. Most requests are approved within 1–2 hours. If there are any questions or potential sticking points, we will reach out via email.

4. Once the screener is approved, finish building your study and launch
Once your screener request is approved, you will receive a notification. In your dashboard, the study status will now read “Approved – Awaiting Payment”. From there, the next step is to do a final review on your study questions. 

Then proceed with launching your study. Your recruits will populate the study dashboard, with the first respondents usually coming in within 24 hours of launch.

 

Developing study content

Writing study questions

Fabric studies include a total of up to 10 questions per respondent. Respondents have 60 seconds to answer each question.

Below are guidelines to help you think through how to ask questions on the Fabric platform to yield the richest, most captivating and emotional responses.

Start broad, then get specific.

Start with the broadest possible context. You may want to start by asking respondents about their relationship to the culture that your product or service exists within. Then drill down into the brand, product and/or ad landscape.

Example:
How would you describe the culture of home furnishings in Madison, Wisconsin?

Ask specifically vague questions.

If you give consumers something to cling to, they will cling to it. Instead, let consumers create the story for you by asking questions that are specific to your area of interest, but that don’t lead the witness.

Example:
Show us any object in your home that defines luxury for you; explain in detail why you consider it luxury.

Ask about shifts in behavior.

A great way to understand habits is to ask about how <blank> is changing for them. Be sure to be specific about the time frame, though.

Example:
Is Adidas a brand on the rise—or the decline—over the past two years? Why?

Ask them to define something.

Sometimes, asking a very foundational question about a definition of something can really open up avenues for consumers. Marketers or product designers might think they know how consumers think, but hearing how they define something can be transformative.

Example:
How do you define competition within your athletic life?

Use polarizing questions.

Respondents will gravitate to gray areas; don’t let them. Ask them what they love or what they hate. Force them to choose A or B, and explain why. If they struggle to answer, that can be telling too. If you want them to answer a number scale (and elaborate on the score), force them to choose 0, 5 or 10 out of 10. 3’s or 7’s won’t tell you much.

Example:
What do you love most about your hair? What do you hate?

Ask WHY.

One of the simplest and often overlooked questions is “why?”. That can be about their motivation, their reward, their product use, their behavior—or even as a projective technique.

Example:
Why do you use FaceTime?

Get respondents in the relevant space.

Have them bring you to the environment that makes the most sense for your mobile video survey. Beyond the actual response, you get a glimpse into their brand and product assortment.

Example:
Please show us all the audio, video and other A/V devices that are part of your home entertainment ‘ecosystem.’

Don’t cram three questions into one.

Imagine we toss you a single tennis ball. Easy to catch, right? But what if we toss you three? Or five? Not so easy. Stick to one question; otherwise respondents will focus on only one of them, and it may not be the most important one for your study.

Example:
Instead of “How do you feel when you wear high heels? When do you wear flats or sandals?” zero in on a single question: “How do you feel when you wear high heels?”

Tug at the respondent’s emotions.

The best insight comes when people talk about things that they really care about, whether it is something that they love or a secret pet peeve of theirs. Deprivation works. Creating tension can help.

Example:
How do you feel emotionally when you feed your baby something super healthy?

Leverage “Show and Tell”.

Your data will be much richer if you can see the respondent interact with the product on their video. Have them capture a living example of what works well and what frustrates them.

Example:
Show us your cat, and introduce them to us on camera.
Show us your favorite sports bra for racing a 10k, and tell us how it feels different from the one you typically wear to the gym.

Use their language (not your client’s).

Use language that the respondents are comfortable with, and would use if they were talking to a friend. For instance, a respondent might not know what an “asset” is.

Example:
What is the difference between online content that is sponsored versus online content that is not sponsored?

Optional: Keep the last question open-ended.

Giving respondents the freedom to share open-ended thoughts can lead to even more novel insights.

Example:
[Company Name] is listening: how can they make your buying experience better?

Be creative!

Put your respondents in hypothetical situations, use similes and metaphors, or ask a question that is completely “out there.” The more creative your question is, the more creative (and interesting) your responses will be.

Example:
Write a love letter to IKEA and read it on camera.

 

Testing Creative Stimulus Materials

Whether it’s a new product design, the development of a new ad campaign, or the iteration of new UX or features, we often get asked about how to test different types of stimulus materials.

How to attach stimulus

You can add a link to any individual question or multiple questions. Just highlight the word you want to link, then (as in Google Docs) paste in the link.

Users will click on that link and be directed to the destination. We generally recommend using Google Drive because most people are familiar with it, but links can take users to:

    • Websites
    • Videos (e.g., YouTube, Vimeo, etc.)
    • Cloud storage locations (Google Drive, Dropbox, Box, etc.)

What kind of stimulus can be attached

Researchers use Fabric to test a broad range of assets. Some examples: 

    • PDFs
    • Videos
    • Sites/apps

Creative stimulus materials can be used to test:

    • The platform (e.g., Just Do It, Think Different, I’m Lovin’ It)
    • Key message takeaway(s)
    • Tag line
    • Individual elements (tone, style, personality, actors, music)
    • Pitch manifesto
    • Video stimulus (e.g., existing spots, animatics, scripts with audio)
    • Static stimulus (e.g., print, outdoor)
    • Audio stimulus (e.g., radio, podcast)
    • Interactive (e.g. UI/UX, websites, apps)

What kinds of stimulus have been tested?

The kinds of stimulus that have been tested include:

Product design:

    • Sketches of product ideas from a designer’s notebook
    • Descriptions of a new product
    • Proposed layouts
    • UX/UI
    • Packaging
    • Beta products/apps and finished products

Advertising:

    • Platforms
    • Campaigns
    • Tag lines
    • Campaign elements
    • Manifestos
    • Positioning statements
    • Value propositions

Since each respondent is served up to 10 questions, there are a number of ways to leverage the Fabric platform for testing stimulus. As a general rule of thumb, if you have four different pieces of stimulus to test, here’s how the arc of the study might look:

    • Q1: baseline perceptions of X
    • Q2: first reactions to Stim 1?
    • Q3: what resonates with Stim 1?
    • Q4-Q9: repeat Q2 + Q3 for the other Stim
    • Q10: compare and contrast or pick a favorite*

*Note: for Q10 in the above example, it’s a good idea to include a rollup of all the stimulus to remind them of everything they have already seen. Otherwise they might have trouble recalling the first few concepts.
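For researchers who draft question arcs in a script or spreadsheet, the arc above can also be sketched programmatically. The function below is purely illustrative (it is not part of the Fabric platform), and the question wording is a hypothetical template:

```python
def build_study_arc(topic, stim_names):
    """Build a question arc: baseline, two questions per stimulus, then a wrap-up."""
    questions = [f"What are your current perceptions of {topic}?"]  # Q1: baseline
    for name in stim_names:
        questions.append(f"What is your first reaction to {name}?")   # first reactions
        questions.append(f"What resonates with you about {name}?")    # what resonates
    # Final compare-and-contrast question; pair it with a rollup of all the stimulus
    questions.append("Comparing everything you've seen, which concept is your favorite, and why?")
    return questions

arc = build_study_arc("X", ["Stim 1", "Stim 2", "Stim 3", "Stim 4"])
# 1 baseline + 4 stimuli x 2 questions + 1 wrap-up = 10 questions
```

With four pieces of stimulus this lands exactly on the 10-question budget; with three, the two spare questions could probe deeper on the leading concept.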

Testing statements or paragraphs

When testing product descriptions or positioning statements, which can run longer in text form, do your best to keep the concepts highly differentiated. Present 3–5 concepts max. If there is significant overlap in the concepts and/or the statements are long, consumers will have trouble distinguishing one from the others. In that case, we recommend that your wrap-up include a rollup PDF of all of the statements/concepts. The rollup will refresh the respondent’s memory after they’ve seen each individually.

Avoiding order bias

Showing your entire sample the stimulus in the same order can bias reactions to the sequence. To avoid this, break your study down into smaller cohorts and rotate the order in which each cohort sees the stimulus.

For example, with a sample size of 15 people (n=15) and three pieces of stimulus, a suggested approach would be to structure it like this:

    • Cohort 1 (n=5): Stimulus A,B,C
    • Cohort 2 (n=5): Stimulus B,C,A
    • Cohort 3 (n=5): Stimulus C,A,B
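The cohort assignment above is a simple Latin-square style rotation: each cohort’s order is the stimulus list shifted by one position. It can be sketched in a few lines (the helper below is illustrative, not a Fabric feature):

```python
def rotate_orders(stimuli, n_cohorts):
    """Assign each cohort a rotated presentation order (Latin-square style shift)."""
    orders = []
    for c in range(n_cohorts):
        shift = c % len(stimuli)  # each cohort starts one stimulus further along
        orders.append(stimuli[shift:] + stimuli[:shift])
    return orders

print(rotate_orders(["A", "B", "C"], 3))
# → [['A', 'B', 'C'], ['B', 'C', 'A'], ['C', 'A', 'B']]
```

This guarantees every stimulus appears in every position exactly once across the three cohorts, so no single concept always benefits (or suffers) from going first.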

Confidentiality

As with everything in an online environment, confidentiality can be compromised. A few notes on how to protect your ideas:

    • Our user Terms and Conditions have built-in confidentiality; but as you know, a lot of folks don’t read them all.
    • Serving up your concepts without a logo or brand makes them brand-blind, which not only reduces security concerns but may also give you a purer read on relevance and resonance.
    • Serving up the same concept with multiple logos on it can help head-fake consumers, and also give you a read on the influence of the brand associated with it.
    • Lastly, if the risk of the idea leaking is high, we recommend you NOT use Fabric to test your concepts. You have to do the risk/reward calculus. If a 17-year-old can hack into the Pentagon, taking a screen grab of a concept is not beyond the realm of what consumers may do.

Analyzing & presenting study results

Analyzing Video Responses: Qualitative Data Analysis

By far the hardest part of the qualitative research study process is analyzing the incoming data because it’s unstructured compared to quantitative data. Researchers appreciate our platform’s unique tools, which speed up the process and give researchers a head start. Leverage our six key features and learn how our proprietary Fabric AI can help.

Six Fabric features for qualitative data analysis

1. Video responses are organized by question

    • You can hyper-target your analysis by reviewing how all respondents answered any one question.
    • Unlike focus groups or moderated one-on-ones, the same questions get asked of everyone in the same order, providing consistency.
    • Sometimes a specific question is THE question you’re trying to understand, and the other questions lead up to it. In that case, you can skip right to a specific column of responses, to get straight to the heart of the issue.

2. Enhanced transcript experience

    • If you upgraded to human + machine transcripts (from machine only), then those transcripts will populate within 24 hours of each respondent’s completion.
    • Google Speech transcripts are available to all researchers almost immediately. Note that while machines are good and getting better all the time, they aren’t perfect. We make the transcript field editable so that you can correct any Google Speech errors.
    • Transcripts can be downloaded to Google Sheets or Excel.

3. Comments can be added to individual study responses

    • As on any social media platform, there is a comment field where collaborators can share feedback, ideas and insights. The field can also be used for note-taking.

4. Respondents can be rated

    • Our rating system makes it easy to remember who the best respondents are. It’s also a way to keep track of which respondents the researcher has reviewed.

5. Respondent names can be made anonymous

    • In an era of increased sensitivity to PII, our default view is to make users’ names or email addresses anonymous. However, if you bring your own recruits, the default is to show respondents’ email addresses because we presume they gave you access to email them. That field is editable if you would like to make respondent names anonymous yourself.

6. Unlimited tags can be created

    • There is no limit on the number of tags that can be created in a study.
    • After tags are created, researchers can sort by tags individually or in combination.

Working with our proprietary Fabric AI

One of the top frustrations of qualitative data analysis is the time it takes to digest the information, make sense of it, and bubble up key insights.

So we developed our own proprietary sentiment and emotion-based AI that provides three primary data analytic sources that free up researchers to spend more time developing insights that link to their brand, message, product or design. 

 Fabric AI includes:

The paragraph summary
The paragraph summary encapsulates the responses to any individual question in a study. To be clear: this does not summarize metadata across the entire study, only question by question.

The paragraph is written in plain English and it identifies:

    • Whether responses were positive, neutral or negative
    • Up to three sources of the sentiment
    • Key themes
    • Top emotions

Sentiment, sources of the sentiment, themes and top emotions are all hyperlinked. Each hyperlink surfaces the videos and verbatims that relate to that topic.

Total mentions & emotional mentions
Fabric AI shows a count of total mentions, as well as mentions with emotional intent behind them (either positive or negative). 

    • Total mentions is a count of how many times a word was mentioned.
    • Emotional mentions is a count of how many times a word was mentioned with emotional intent behind it.
    • Strength bar indicator shows whether the sentiment was strong or weak.
    • Mentions are also correlated to one of the eight primary emotions Fabric AI tracks.
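Conceptually, the difference between total and emotional mentions can be illustrated with a toy keyword count. Fabric AI’s actual sentiment models are proprietary; the emotion lexicon and the sentence-level logic below are invented purely for illustration:

```python
import re

# Toy emotion lexicon for illustration only; not Fabric AI's actual model
EMOTION_WORDS = {"love", "hate", "amazing", "frustrating"}

def count_mentions(transcripts, term):
    """Count total mentions of a term, and mentions in emotionally charged sentences."""
    total = 0
    emotional = 0
    for t in transcripts:
        for sentence in re.split(r"[.!?]", t.lower()):
            hits = sentence.count(term.lower())
            total += hits
            # A mention counts as "emotional" if its sentence contains an emotion word
            if hits and EMOTION_WORDS & set(sentence.split()):
                emotional += hits
    return total, emotional

total, emotional = count_mentions(
    ["I love my Adidas shoes. My Adidas jacket is fine."], "adidas")
# → (2, 1): both sentences mention the term, only one carries emotional intent
```

The point of the distinction is the same as in the platform: a brand can be mentioned often yet leave respondents cold, and the gap between the two counts is itself a finding.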

Additional features
Researchers also benefit from our AI’s ability to surface videos and verbatims with the highest degree of emotion, and there is a search field to quickly direct researchers to specific terms of interest.

 

Creating a Video Highlight Reel

Why use video?

Video highlight reels can be incredibly powerful tools in a debrief session. Standing in front of a group of people, telling them what consumers said—it’s not uncommon for attendees to get defensive or dismiss insights they don’t agree with.

Here’s where your highlight reel comes in. Press “play” and let consumers say it in their own words. Not only does the room go quiet for a few minutes, it puts any naysayers on their heels.  

How to create a powerful video highlight reel

In our years of market research experience, we’ve learned how to create videos that pack a punch. 

Here’s our method:

    1.  Keep it short
    2.  Tell a singular story 
    3.  Bring the story to life
    4.  Make a paper edit
    5.  Edit video & post-production

1. Keep it short
Videos should be kept under 2–3 minutes. Anything longer won’t hold viewers’ attention in a culture dominated by TikTok-length videos, and a long reel becomes a mini documentary requiring too much production effort.

2. Tell a singular story
How do you go from a dataset of dozens or hundreds of individual video responses to a 2–3 minute video?

The most important part of the process is telling a singular story. This means distilling down the ideas and statements found in your survey respondents’ videos. Sum up the singular story of your video highlight reel in one definitive sentence.

Our rule of thumb: if you can’t articulate the story of the video in one sentence, it’s probably not a story. Viewers are accustomed to video highlights that tell a single, coherent story, rather than a disconnected set of facts. 

Asking the video to do the work of a slideshow presentation generally fails, too; PowerPoint in video format simply does not work.

The video highlight reel and the presentation are two different narrative forms. Your reel should have its own internal narrative that tells a singular story. 

3. Bring the story to life
Now that you have a singular story sentence, bring the story to life. 

As a starting point, try structuring the story using three chapters: beginning, middle and end. 

As you go through the process, keep the end in mind; what is the key message you want viewers to take away when they have finished watching the story?

Within those three chapters, consider using an effective storytelling device called a pivot point. Is there a moment where you are leading viewers down a path, and the unexpected gets revealed? For example, is there a surprising attitudinal, perceptual or behavioral insight that challenges convention? A new way to see the world? We think of it as the needle scratching across a vinyl record.

In classic ad agency pitch theater, the arc of a story might look something like:

    • Things were great, everybody loved us
    • Then something unexpected or bad happened and we lost them
    • But here’s what they love about us or we are overlooking that could bring them back

4. Make a paper edit
A paper edit is a written outline of your video. It provides structure, tightens your focus, and helps you keep your video the right length. Think of it as a kind of script, storyboard or shot list, but with time codes. The time codes show exactly which interview clips you plan to use.

To make your life easier, use our paper edit tracking sheet where you can enter the key quotes you want to include, and it will automatically calculate the run time length. Note that it’s based on averages, so there may be some variance between the estimated and actual video length.
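Fabric’s tracking sheet handles this calculation for you, but the arithmetic behind a run-time estimate is simple: divide total words by an average speaking rate. The sketch below assumes roughly 150 words per minute, an illustrative figure rather than the sheet’s actual constant:

```python
WORDS_PER_MINUTE = 150  # assumed average conversational speaking rate (illustrative)

def estimate_runtime_seconds(quotes):
    """Estimate the run time of a paper edit from the word counts of its quotes."""
    total_words = sum(len(q.split()) for q in quotes)
    return round(total_words / WORDS_PER_MINUTE * 60)

quotes = [
    "This shoe changed how I train.",
    "I would give anything for a tenth of a second.",
]
estimate = estimate_runtime_seconds(quotes)  # 16 words -> about 6 seconds
```

Because it works from averages, fast or slow talkers will push the actual cut longer or shorter than the estimate, which is exactly the variance the tracking sheet warns about.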

5. Edit video & post-production
With a paper edit prepared, the video editor can now download the relevant clips. Transitions, title cards, music and even B roll can be added.

Examples

Some of the more compelling video highlight reel stories we have seen or been part of:

    • For a study about kids attending a football combine, for a major footwear brand working on a new shoe: These athletes would give their eye teeth for one tenth of a second improvement.
    • In a study for a major national fast casual restaurant: People’s connection to bread goes way beyond taste and texture to near-spiritual associations with ethnicity, ancestral origins and even religion.
    • For a study on the redesign of an ultrasound machine: Despite the anxiety physicians experience as part of their jobs, they have personal superhero moments using ultrasound that provide a massive surge of confidence.

Conclusion

Including individual clips in your debriefs can be powerful. But creating a 2–3 minute video highlight reel with a clear underlying story will help put your final presentation over the top. Our five steps will help get you there.

 

Working with Fabric

Case studies

Below is a short list of cases where the Fabric platform has been used successfully in the advertising pitch process:

    • Early chemistry check-in when clients are short-listing finalists (Deutsch/Green Giant)
    • Fueling collaborative strategy development sessions (Ogilvy/IKEA)
    • Fleshing out motivators for key target demographics (McGarryBowen)
    • Accessing diverse national and international audiences (72andSunny/Instagram)
    • Assessing equities of existing campaigns (Wieden + Kennedy/TurboTax)
    • Testing creative work in development (BBDO)
    • Demonstrating the agency’s initiative (North)

Key benefits

Researchers from across categories use the Fabric platform to develop and launch meaningful studies. We bring specific value to advertising agencies in the ad pitch process:

    • The Fabric process matches the frenetic pace of new business pitches; studies can be conducted rapidly.
    • After a study is conducted on Fabric, the actual voice—and face—of the consumer will be embedded into the strategic and creative development process and client conversations.
    • Stimulus in the form of video responses and quotes helps engage clients in key strategic and creative alternatives.
    • The agency’s willingness to go the extra mile on its own initiative is a strong signal to the client that the agency truly values that client.
    • The client sees clearly that consumer insights are important to the agency.
    • Video highlight reels become fantastic meeting theater in final presentations, helping underpin a strategy and/or specific campaign.
