Determining the Right Study Design

Because qualitative market research is more of an art than a science, it’s important to think through study design—or as we call it, the research strategy.

Study preparation arc

Broadly, the study preparation logic should follow this arc: 1) Research objective; 2) Research strategy; and 3) Research tactics.

1. Research objective
Determine the research objective. Ask: what is the singular, overall objective of the study? Usually, your answer will take the form of a single sentence.

2. Research strategy
Articulate your research strategy. Broadly speaking, given the objective, what is the strategy for the study’s architecture? For example, if the study objective is to understand how a new product concept resonates, the research strategy could be “Compare the ideal to the actual.”

3. Research tactics
What are the research tactics? Given the objective and strategy, determine which specific tactics should be employed. 

For example: we wanted to understand how people felt about the fact that, when they are in public, chances are very good they are being recorded on video. We asked several questions about how they felt about the proliferation of video cameras in public, tapping into the left brain, where the speech center is located. Then we had them answer one question using only gestures and body language, no words, to tap into the more emotional right brain.

Study design example

By way of example, imagine this scenario:

    • The research objective is to understand core consumer perceptions of the client’s brand.
    • For the research strategy, we recommend that the study deconstruct all of the different elements that make up the core brand perceptions.
    • Hence, the research tactics would involve breaking down the most important associations people might have with the brand, and focusing each question on one aspect: what they sell, who they think buys it, whom they associate with the brand, what imagery they connect with it, who works at the company, what the brand says about the people who wear or use it, and what one word best describes the brand and why.


Setting the Research Objective

One of the most important aspects of any market research project is setting the right research objective. Researchers often mistake the questions they want answered (which are of course important) for the research objective itself.

Establishing a research objective can be done in various ways. We recommend articulating the research objective in a single sentence. This will provide a clear focus for your team.

If you list a number of questions instead, neither you nor the team will know exactly what the study is aiming to achieve.

Below are some examples of clearly articulated research objectives:

    • Understand how the current messaging campaign resonates in different countries
    • Understand if/how existing users understand our key point of differentiation
    • Bring to life the muse target audience for the brand 
    • Understand which concept resonates most with consumers and why
    • Identify drivers of churn


Determining the Best Sample Size

The #1 question we get asked by researchers is this: How many people should I include in my study? We’ll answer that in two ways:

    • What the academics say
    • What our experience has taught us

The academic POV

What constitutes an adequate sample size has been debated extensively in the market research industry for many years, in part because quantitative sample sizes are easier to justify statistically, with measures such as variance.

Often, quantitative research mindsets get applied to qualitative research because quant norms are concrete, whereas qualitative sample sizes are difficult to justify mathematically.

Studies have been published over the years on the topic of adequate sample sizes for qualitative research. Here are a few:

    • Creswell, Glaser, Morse (the recommendation: 30–50 participants)
    • Springer (Springer puts forth the argument that anywhere from 5–50 is adequate, but that 25–30 is considered to be the right number)
    • InterQ (recommends 20–30)

Broadly, the academic research suggests that a sample size in the 30–50 participant range achieves what experts call “the point of saturation,” where adding another participant doesn’t add materially to the insights generated. We agree with that analysis.

What our experience has taught us

Qualitative research is more art than science. 

Typically, clients turn to qualitative methods to understand deeper meaning: beliefs, attitudes, perceptions, feelings, emotion and more—deep intangibles that help add up to the “why.” So while we believe that minimum sample sizes should be employed in qualitative research, getting to the “why” requires more nuance. For this we recommend taking 10 critical variables into consideration.

The 10 critical variables

We have found that there are 10 critical variables to consider for determining the right sample size for a qualitative research study.

1. Where you are in the process
Earlier stage projects (e.g. exploratory or generative studies) can generally employ smaller sample sizes, because more iteration and development will be conducted as the project progresses.

2. Business impact
The higher the business impact of the research, the greater the sample size should be. For high impact research, we would likely recommend a hybrid quant/qual approach so you’re not relying solely on qualitative research.

3. Geography
One essential variable we consider is the geographic diversity of the target audience. It can be as simple as needing domestic and international respondents, or it may be more complex. In the US, most national brands have very broad distribution, so making sure to include the coasts as well as the interior is not only good practice; it also signals to clients that you don’t favor one type of demographic audience over another.

4. Research design
How the study is designed can have a huge impact on results. Which questions are asked, and in what order, changes how respondents answer, affecting a study’s insights and conclusions.

5. Research platform
When you leverage a qualitative market research platform, you are buying a tech stack. One important variable is how well equipped that platform is to identify the right recruits, field the study in a relevant way, and interpret the results. In the case of Fabric, our proprietary AI is an industry first, employing sentiment and emotion analysis to help researchers rapidly make sense of video responses.

6. Quality of recruits
It’s very important that the people in the study be the right people, regardless of how many respondents are included. Beyond the issue of whether they technically qualify, the people included in the study need to “feel” like the right consumer.

Using Fabric’s video-centric platform is particularly useful in this regard. Seeing the target in motion, on screen, telegraphs far more information than numbers in a graph or a faceless, voiceless respondent replying by text.

7. Analysis
Who is interpreting the data your study will generate? All insight is qualitative, so how well equipped you or your research partner is to analyze the responses has a tremendous influence on both sample size and study design.

8. Methodology
A variety of methodologies are available to the researcher, and each will suggest its own approach to sample size. Qualitative techniques to choose from include IDIs (in-depth interviews), focus groups, in-homes, friendship pairs, small group interviews, intercepts, observational research, ethnographies and/or digital (online, mobile). 

In Fabric’s case, the methodology is unique in that it’s effectively “1-on-none,” meaning it’s asynchronous and there is no moderator present. Much has been written about the effects of groupthink within focus groups, where an ‘alpha’ respondent will influence others; Fabric has none of that. And because there is no moderator, there is no moderator bias either.

Instead, the methodology and technology employed by Fabric free respondents to behave in a more open manner. Fabric’s confessional style enables what researchers have called the “online disinhibition effect,” where respondents are more willing to express themselves because there is no fear of disagreement or conflict with a moderator or fellow panelists.

9. Company culture
Some organizations are more comfortable with small sample sizes, whereas others look for larger samples because distribution is wide and/or global. For example, we have worked with Nike’s Innovation Kitchen on early exploratory studies using a small sample size. When we work with Xbox, which may require 10 markets with 10 respondents each, the sample size can easily drift to 100+.

10. Segments
Often we’ll see that a client has a number of segments to understand. In that case, determining sample size depends on whether you’re looking for a rollup of all segments. If you’re going for a rollup, the 30–50 number is fine.

However, if you’re seeking to understand similarities and differences among segments, we would recommend 15 respondents per segment (see the quick arithmetic sketch below). This enables your study to take advantage of our AI, which kicks in at a minimum of 15 respondents.
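
As a rough illustration of that arithmetic, here is a minimal Python sketch of the guideline above. It is not a Fabric feature; the function name and the numbers it returns are placeholders drawn from this article:

    def recommended_sample_size(num_segments, compare_segments):
        # Illustrative sketch only; not an official Fabric calculator.
        per_segment = 15          # minimum per segment when comparing segments
        rollup_range = (30, 50)   # the 30-50 rollup recommendation
        if compare_segments:
            return num_segments * per_segment
        return rollup_range

    # Four segments compared individually: 4 x 15 = 60 respondents total
    print(recommended_sample_size(4, compare_segments=True))    # 60
    # A single rollup across all segments: stay within the 30-50 range
    print(recommended_sample_size(4, compare_segments=False))   # (30, 50)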

Conclusion

What is the right sample size for your qualitative study?
If you’re looking for a short answer, it’s 30–50. But keep an open mind about the variables above, which may push that number higher or lower.


Writing Recruitment Screeners

There are two ways to bring recruits into your Fabric study: provide us with the right information to write the screener for you, or bring your own screener.

Since respondents aren’t paid to answer screeners, try to limit your screener to ten data points (including age, gender and geography).

1. Let us build the screener for you

When building your DIY study, select “Let Us Recruit”, and then on the following page define the recruits you want responses from. This may include:

    • Demographic information like age, gender, geographic location, household income and education
    • Behavioral criteria like product usage, recency of purchase, competitive products owned, brand affinity, amount spent, and frequency
    • Psychographic criteria such as personality questions, agreement to attitudinal statements, general preferences, or other intangibles

Our team recommends keeping recruit criteria simple to allow for the quickest possible responses and the highest likelihood of approval from the recruiting team.

2. Provide us with your screener

The content of your screener can be cut and pasted into the “Define Your Recruit” field within the “Let Us Recruit” option, or you can paste a link to a document containing your screener for our team to program. 

3. Fabric will accept or reject your criteria within 24 hours

Once you have submitted your recruitment request through the Fabric platform, the team will review it for feasibility within 24 hours. Most requests are approved within 1–2 hours. If there are any questions or potential sticking points, we will reach out via email.

4. Once the screener is approved, finish building your study and launch

Once your screener request is approved, you will receive a notification. In your dashboard, the study status will now read “Approved – Awaiting Payment”. From there, the next step is to do a final review of your study questions.

Then proceed with launching your study. Your recruits will populate the study dashboard, with the first respondents usually coming in within 24 hours of launch.


Writing Study Questions

Fabric studies include up to 10 questions per respondent. Respondents have 60 seconds to answer each question.

Below are guidelines to help you think through how to ask questions on the Fabric platform to yield the richest, most captivating and emotional responses.

Start broad, then get specific.

Start with the broadest possible context. You may want to start by asking respondents about their relationship to the culture that your product or service exists within. Then drill down into the brand, product and/or ad landscape.

Example:
How would you describe the culture of home furnishings in Madison, Wisconsin?

Ask specifically vague questions.

If you give consumers something to cling to, they will cling to it. Instead, let consumers create the story for you by asking questions that are specific to your area of interest, but that don’t lead the witness.

Example:
Show us any object in your home that defines luxury for you; explain in detail why you consider it luxury.

Ask about shifts in behavior.

A great way to understand habits is to ask about how <blank> is changing for them. Be sure to be specific about the time frame, though.

Example:
Is Adidas a brand on the rise—or the decline—over the past two years? Why?

Ask them to define something.

Sometimes, asking a very foundational question about a definition of something can really open up avenues for consumers. Marketers or product designers might think they know how consumers think, but hearing how they define something can be transformative.

Example:
How do you define competition within your athletic life?

Use polarizing questions.

Respondents will gravitate to gray areas; don’t let them. Ask them what they love or what they hate. Force them to choose A or B, and explain why. If they struggle to answer, that can be telling too. If you want them to answer on a number scale (and elaborate on the score), force them to choose 0, 5 or 10 out of 10. 3s or 7s won’t tell you much.

Example:
What do you love most about your hair? What do you hate?

Ask WHY.

One of the simplest and most often overlooked questions is “why?” It can be about their motivation, their reward, their product use, or their behavior; “why” can even be used as a projective technique.

Example:
Why do you use FaceTime?

Get respondents in the relevant space.

Have them bring you to the environment that makes the most sense for your mobile video survey. Beyond the actual response, you get a glimpse into their brand and product assortment.

Example:
Please show us all the audio, video and other A/V devices that are part of your home entertainment ‘ecosystem.’

Don’t cram three questions into one.

Imagine we toss you a single tennis ball. Easy to catch, right? But what if we toss you three? Or five? Not so easy. Stick to one question, or respondents will focus on only one of your questions, and that one might not be the most important one for your study.

Example:
Instead of “How do you feel when you wear high heels? When do you wear flats or sandals?” zero in on a single question: “How do you feel when you wear high heels?”

Tug at the respondent’s emotions.

The best insight comes when people talk about things that they really care about, whether it is something that they love or a secret pet peeve of theirs. Deprivation works. Creating tension can help.

Example:
How do you feel emotionally when you feed your baby something super healthy?

Leverage “Show and Tell”.

Your data will be much richer if you can see the respondent interact with the product on their video. Have them capture a living example of what works well and what frustrates them.

Example:
Show us your cat, and introduce them on camera to us.
Show us your favorite sports bra for racing a 10k, and tell us how it feels different from the one you typically wear to the gym.

Use their language (not your client’s).

Use language that the respondents are comfortable with, and would use if they were talking to a friend. For instance, a respondent might not know what an “asset” is.

Example:
What is the difference between online content that is sponsored versus online content that is not sponsored?

Optional: Keep the last question open-ended.

Giving respondents the freedom to share open-ended thoughts can lead to even more novel insights.

Example:
[Company Name] is listening: how can they make your buying experience better?

Be creative!

Put your respondents in hypothetical situations, use similes and metaphors, or ask a question that is completely “out there.” The more creative your question is, the more creative (and interesting) your responses will be.

Example:
Write a love letter to IKEA and read it on camera.


Testing Creative Stimulus Materials

Whether it’s a new product design, the development of a new ad campaign, or the iteration of new UX or features, we often get asked how to test different types of stimulus materials.

How to attach stimulus

You can add a link to any individual question or to multiple questions. Just highlight the word you want to link, then paste in the link (as you would in Google Drive).

Users will click on that link and be directed to the destination. We generally recommend using Google Drive because most people are familiar with it, but links can take users to:

    • Websites
    • Videos (e.g., YouTube, Vimeo, etc.)
    • Cloud storage locations (Google Drive, Dropbox, Box, etc.)

What kind of stimulus can be attached

Researchers use Fabric to test a broad range of assets. Some examples: 

    • PDFs
    • Videos
    • Sites/apps

Examples of stimulus that have been tested

The kinds of stimulus that have been tested include:

Product design:

    • Sketches of product ideas from a designer’s notebook
    • Descriptions of a new product
    • Proposed layouts
    • UX/UI
    • Packaging
    • Beta products/apps and finished products

Advertising:

    • Platforms
    • Campaigns
    • Tag lines
    • Campaign elements
    • Manifestos
    • Positioning statements
    • Value propositions

Since each respondent is served up 10 questions, there are a number of ways to leverage the Fabric platform for testing stimulus. As a general rule of thumb, if you have four different pieces of stimulus to test, here’s how the arc of the study might look (a quick sketch of this arc follows the note below):

    • Q1: baseline perceptions of X
    • Q2: first reactions to Stim 1?
    • Q3: what resonates with Stim 1?
    • Q4-Q9: repeat Q2 + Q3 for the other Stim
    • Q10: compare and contrast or pick fave* 

*Note: for Q10 in the above example, it’s a good idea to include a rollup of all the stimulus to remind them of everything they have already seen. Otherwise they might have trouble recalling the first few concepts.
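
To make that arc concrete, here is a minimal Python sketch that lays out the question sequence for any number of stimuli. It is illustrative only, not part of the Fabric platform; the function name and question wording are placeholders:

    MAX_QUESTIONS = 10  # Fabric studies allow up to 10 questions per respondent

    def build_study_arc(stimuli):
        # Illustrative sketch only: baseline, two questions per stimulus, then a wrap-up.
        arc = ["Baseline perceptions of the category or brand"]
        for stim in stimuli:
            arc.append(f"First reactions to {stim}?")
            arc.append(f"What resonates with {stim}?")
        arc.append("Compare and contrast all stimuli (show a rollup) and pick a favorite")
        if len(arc) > MAX_QUESTIONS:
            raise ValueError("Too many stimuli for a single 10-question study")
        return [f"Q{i}: {q}" for i, q in enumerate(arc, start=1)]

    # Four pieces of stimulus fill the arc exactly: 1 + (4 x 2) + 1 = 10 questions
    for question in build_study_arc(["Stim 1", "Stim 2", "Stim 3", "Stim 4"]):
        print(question)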

Testing statements or paragraphs

When testing product descriptions or positioning statements, which can run longer in text form, do your best to keep the concepts highly differentiated. Present 3–5 concepts max. If there is significant overlap in the concepts and/or the statements are long, consumers will have trouble distinguishing one from the others. In that case, we recommend that your wrap-up question include a rollup PDF of all of the statements/concepts. The rollup will refresh the respondent’s memory after they’ve seen each individually.

Avoiding order bias

If your entire sample sees the stimulus in the same order, the sequence itself can bias their reactions. To avoid this, break your study down into smaller cohorts and switch up the order.

For example, with a sample size of 15 people (n=15) and three pieces of stimulus, a suggested approach would be to structure it like this (see the rotation sketch after the list):

    • Cohort 1 (n=5): Stimulus A,B,C
    • Cohort 2 (n=5): Stimulus B,C,A
    • Cohort 3 (n=5): Stimulus C,A,B
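
As a rough illustration of that rotation, here is a minimal Python sketch; it is not part of the Fabric platform, and the function name is a placeholder. It simply shifts the stimulus list by one position per cohort (a simple Latin-square-style rotation):

    def rotated_orders(stimuli):
        # Return one presentation order per cohort, each starting one position later.
        return [stimuli[i:] + stimuli[:i] for i in range(len(stimuli))]

    stimuli = ["A", "B", "C"]
    cohort_size = 5  # n=15 total, split across three cohorts
    for i, order in enumerate(rotated_orders(stimuli), start=1):
        print(f"Cohort {i} (n={cohort_size}): Stimulus {','.join(order)}")
    # Cohort 1 (n=5): Stimulus A,B,C
    # Cohort 2 (n=5): Stimulus B,C,A
    # Cohort 3 (n=5): Stimulus C,A,B

The point is simply that no single sequence dominates the sample; with more pieces of stimulus, the same pattern extends to additional cohorts.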

Confidentiality

As with everything in an online environment, confidentiality can be compromised. A few notes on how to protect your ideas:

    • Our user Terms and Conditions have built-in confidentiality, but as you know, a lot of folks don’t read them all.
    • Serving up your concepts without a logo or brand makes the test brand-blind, which not only reduces security concerns but may also give you a purer read on relevance and resonance.
    • Serving up the same concept with multiple logos on it can help head-fake consumers, and also give you a read on the influence of the brand associated with it.
    • Lastly, if the risk of the idea somehow leaking is high, we recommend you NOT use Fabric to test your concepts. You have to do the risk/reward calculus. If a 17-year-old can hack into the Pentagon, taking a screen grab of a concept is not beyond the realm of what consumers may do.