
Blog, News & Case Studies

Esomar Workshop: How to Design Questionnaires That People Want to Answer
By E2E Research | January 9, 2023

Researchers want people to feel comfortable when they’re answering questions so that they will share complete and honest answers about their public and private lives. This can be difficult: you first need to create a research environment that shows them they are trusted and respected.

 

If this resonates with you, then please join our Chief Research Officer, Annie Pettit, for a 3-day (2 hours per day) questionnaire design workshop hosted by Esomar on February 21, 22, and 23 at 13:00 UTC.

 

In this highly interactive masterclass, you will learn about the psychology of answering questions and how to apply that knowledge in a practical way to questionnaire design. You will learn to create questions that make people feel valued and respected, that accommodate normal human behaviours no matter how strange those behaviours may seem to you, and that make people look forward to participating in the next research project.

 

Registration is now open on the Esomar website!

 

 

What will you learn?

 

After completing this training, you should be able to:

  • Understand how human psychology interacts with the questionnaire experience and how to write questions that accommodate normal behaviours
  • Write questions that are kind and respectful towards people who are marginalised, and people who are embarrassed to share personal aspects of their lives
  • Write questionnaires that people want to answer this time and the next
  • Write questions that are fun and playful

 

By the end of the workshop, you will have built a set of resources that can be leveraged in a range of future questionnaires.

 

 

Programme at a glance

 

Session 1: Understanding human nature

  • How does human psychology impact how people interpret and respond to questionnaires?
  • What are some basic rules for creating respectful questionnaires?
  • What mindset do we need to take when building questionnaires?

 

Session 2: Making tough topics more comfortable

  • Creating compassionate screener questions
  • Writing respectful data quality questions
  • Asking about embarrassing or private issues
  • Eliciting truth when it comes to unethical and illegal behaviours

 

Session 3: Creating a playful experience for everyone

  • How to write playful questions that excite and make people think
  • Incorporating play into serious topics

 

Please click here to register on the Esomar website.

 

 

About Annie Pettit

Annie Pettit is Chief Research Officer, North America, at E2E Research, an ISO 27001 certified, ESOMAR corporate member company that offers market research, data analytics, and business intelligence solutions to help research leaders understand their buyers, brands, and businesses. Annie is a research methodologist who specializes in research design and analysis, data quality, and innovative methods. She holds a PhD in experimental psychology from York University in Canada, is a Certified Analytics and Insights Professional (CAIP), and is a Fellow of the Canadian Research Insights Council (CRIC). Annie is also Chair of the Canadian ISO Standards Committee (ISO 20252) and the author of “People Aren’t Robots: A practical guide to the psychology and technique of questionnaire design.”

What is a Census Representative Sample?
By E2E Research | March 29, 2022

The people researchers invite to share their opinions in marketing research can make a huge difference in the quality of answers we receive. That’s why it’s important to understand the research question and who is best suited to provide the necessary answers.

 

Let’s consider one type of sample that researchers often consider when conducting research – a census representative sample.

 

 

What is a census representative sample?

You might also hear these referred to as ‘Census Rep’ samples. A census rep sample requires access to census data, which is typically generated by large-scale government surveys completed by millions of residents or citizens. In the USA, that’s census.gov and in Canada, that’s Statistics Canada.

 

A census rep sample can be designed to reflect any specific group of people. The key consideration is that the sample of completed questionnaires reflects the larger population on important criteria. The sample could reflect an entire country (e.g., USA, Mexico, Canada), a state or province (e.g., California, Quebec), or a city or town (e.g., Boston, Ottawa). This type of census rep sample is reasonably easy to define.

 

Another type of census rep sample can be defined by target group behaviors or characteristics. For instance, you might be interested in a census rep sample of people who smoke or who have diabetes. Of course, building these types of census rep samples is far more difficult because government census data tends to be set up to measure basic demographics like age and gender, rather than behaviors like smoking or ailments like diabetes.

 

 

When would I use a census representative sample?

Census rep samples are extremely important for at least a couple of research objectives.

 

First, when you need to calculate incidence rates for a product or service, you need to generalize from a group of people that is representative of your target audience. You need to be able to define your population before you can know what percent of them uses a product or performs a behavior.

 

Second, census rep samples are extremely important for market sizing. Again, you need to generalize from a representative group of your target audience before you can estimate the percent of people who might qualify to use your product or services.
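To make the arithmetic concrete, here is a minimal sketch of how an incidence rate from a census rep sample might feed a market-sizing estimate. The population figure, sample size, and qualification count below are entirely hypothetical.

```python
# Hypothetical figures for illustration only.
population = 31_000_000     # adults in the target region (assumed)
completes = 2_000           # completed questionnaires from a census rep sample
qualified = 260             # participants who report using the product category

incidence_rate = qualified / completes            # 0.13 -> 13%
estimated_users = incidence_rate * population     # rough market-sizing estimate

print(f"Incidence rate: {incidence_rate:.0%}")
print(f"Estimated category users: {estimated_users:,.0f}")
```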

 

 

Why is a census representative sample important?

Creating a census representative sample is extremely important. You could get into trouble if you recruit a sample of research participants who don’t look like actual users.

 

You might gather opinions from too many older people, too many women, too many highly educated people, or too many lower-income people. Your final research conclusions might be based on opinions collected from the wrong people and lead to the development of the wrong product or product features.

 

 

An example of a census representative sample

Let’s consider an example where we want to determine which flavour of pasta sauce to launch in a new market – California. We’ve got two delicious options – a spicy, jalapeno version and a mild portobello mushroom version.

 

We know people from different cultures and ethnic backgrounds have very different flavor preferences so we need to ensure that the people who participate in our research will accurately reflect the region where we will launch this new pasta sauce.

 

Now, we could recruit and survey a sample of people based on a basic quota that will help make sure we hear from a range of people. It might look like what you see in the first column of the table – even splits among each of the demographic groups with a bit of estimation for ethnic groups. But that’s not actually what California looks like. Instead, let’s build a census rep sample matrix based on real data.

 

To start, we need to define a census rep sample of California. First, we find those people in a census dataset. Then, we identify the frequencies for each of the key demographic criteria – what is the gender, age, ethnicity, and Hispanic background (as well as any other important variables) of the people who live in California? Fortunately for us, this data is readily available. On the census.gov website, we learn that in California, 50% of people are female, 6% are Black, and 39% are Hispanic.

 

Now we can recruit a sample of people from California whose final data will match those demographic criteria – 50% female, 6% Black, and 39% Hispanic. You can see just how different those numbers are from the original basic quotas!

 

In the last two columns, you can see that we’ve even split out the criteria by gender (even better, you can do this based on the census data). This will ensure that one of the age groups isn’t mostly women and that one of the Hispanic groups isn’t mostly men. When we nest our criteria within gender, we end up with a nested, census rep sample. Nested demographics are the ideal scenario, but they do make fulfilling the sample more costly and time-consuming. You’ll have to run a cost-benefit analysis.
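Here is a minimal sketch of how a nested quota matrix could be computed once census proportions are in hand. The proportions and sample size are hypothetical placeholders, not real census figures; in practice you would pull the targets from census.gov or Statistics Canada tables.

```python
# Hypothetical proportions and sample size for illustration only.
total_completes = 1_000

gender_share = {"Female": 0.50, "Male": 0.50}

# Age distribution within each gender (each row sums to 1.0).
age_within_gender = {
    "Female": {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},
    "Male":   {"18-34": 0.32, "35-54": 0.35, "55+": 0.33},
}

# Nested quota matrix: completes required in each gender x age cell.
quota_matrix = {
    (gender, age): round(total_completes * g_share * a_share)
    for gender, g_share in gender_share.items()
    for age, a_share in age_within_gender[gender].items()
}

for cell, target in quota_matrix.items():
    print(cell, target)
```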

 

 

What’s Next?

Are you ready to build a census representative sample for your next incidence rate or market sizing project? Email your project specifications to our research experts using Projects at E2Eresearch dot com. We’d love to help you turn your enigmas into enlightenment!

 

 

 

Doing better survey design in research: A reel talk podcast with Jenn Vogel and Annie Pettit
By E2E Research | November 23, 2021

Hosted by Jenn Vogel at Voxpopme, Reel Talk is a podcast designed to help researchers and marketers gain valuable information so they can understand their customers better. Jenn and her guests discuss customer insights and how to use data to make better customer-centric decisions. You can catch all of the podcasts here.

 

In this episode from October 4, 2021, Jenn chats with Annie Pettit, our Chief Research Officer, about techniques for designing better marketing research questionnaires.

 

From Digital Fingerprinting to Data Validation: Techniques to Facilitate the Collection of High Quality Market Research Data
By E2E Research | August 19, 2021

In the market and consumer research space, there is good data and bad data.

 

Good data comes from research participants who try to do a good job sharing their thoughts, feelings, opinions, and behaviors. They might forget or exaggerate a few things, as all people do every day, but they’re coming from a good place and want to be helpful. In general, most people participating in market research fall into this category. They’re regular, everyday people behaving in regular, everyday ways.

 

Bad data comes from several places.

 

First, sometimes it comes from people who are just having a tough day – the kids need extra attention, the car broke down, their credit card was compromised. Some days, people aren’t in a good frame of mind and it shows in their data. That’s okay. We understand.

 

Second, and rarely, bad data comes from mal-intentioned people – those who will say or do anything to receive the promised incentive.

 

Third, very often, it comes from researchers. Questionnaires, sample designs, research designs, and data analyses are never perfect. Researchers are people too! We regularly make mistakes with question logic, question wording, sample targeting, scripting, and more, but we always try to learn for the next time.

 

To prevent bad data from affecting the validity and reliability of our research conclusions and recommendations, we need to employ a number of strategies to catch as many kinds of poor-quality data as possible. Buckle up because there are lots!

 

 

Data Validation

What is data validation?

Data validation is the process of checking scripting and incoming data to ensure the data will look how you expect it to look. It can be done with automated systems or manually, and ideally using both methods.

 

What types of bad data does data validation catch?

Data validation catches errors in questionnaire logic. Sometimes those errors are simply scripting errors that direct participants through the wrong sequence of questions. Other times, they are unanticipated consequences of question logic that mean some questions are accidentally never shown to participants. These problems can lead to wrong incidence rates and worse!

 

How do data validation tools help market researchers?

Automated systems based on a soft-launch of the survey speed up the identification of survey question logic that leads to wrong ends or dead ends. Manual systems help identify unanticipated consequences of people behaving like real, irrational, and fallible people.

 

Automated tools can often be integrated with your online survey platforms via APIs. They can offer real-time assessments of individual records over a wide range of question types, and can create and export log files and reports. As such, you can report poor-quality data back to the sample supplier so they can track which participants consistently provide poor-quality data. With better reporting systems, all research buyers end up with better data in the long run.
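As a rough illustration of what an automated soft-launch check might look like, the sketch below runs two simple rules over a handful of fake records: a skip-logic consistency check and a range check. The column names, rules, and data are hypothetical.

```python
import pandas as pd

# Fake soft-launch export with hypothetical column names.
soft_launch = pd.DataFrame({
    "resp_id":  [1, 2, 3, 4],
    "owns_pet": ["Yes", "No", "Yes", "No"],
    "pet_type": ["Dog", None, None, "Cat"],  # should only be asked when owns_pet == "Yes"
    "age":      [34, 29, 41, 230],           # 230 is clearly impossible
})

issues = []

# Skip-logic check: pet_type must be blank for "No" and filled for "Yes".
bad_skip = soft_launch[
    ((soft_launch["owns_pet"] == "No") & soft_launch["pet_type"].notna())
    | ((soft_launch["owns_pet"] == "Yes") & soft_launch["pet_type"].isna())
]
issues += [f"Skip-logic problem for respondent {r}" for r in bad_skip["resp_id"]]

# Range check: flag impossible ages.
bad_age = soft_launch[(soft_launch["age"] < 18) | (soft_launch["age"] > 99)]
issues += [f"Out-of-range age for respondent {r}" for r in bad_age["resp_id"]]

print("\n".join(issues) or "No validation issues found")
```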

 

 

Digital Fingerprinting

What is digital fingerprinting?

Digital fingerprinting identifies multiple characteristics of a research participant’s digital device to create a unique “fingerprint.” When enough different characteristics are gathered, it can uniquely identify every device. This fingerprint can be composed of a wide range of information such as: browser, browser extensions, geography, domain, fonts, cookies, operating system, language, keyboard layout, accelerometer sensors, proximity sensors, HTTP attributes, and CPU class.
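A heavily simplified sketch of the idea: combine whatever device characteristics you collect into one stable identifier and compare it against identifiers you have already seen. Real fingerprinting tools use far more signals and fuzzier matching; the characteristics and function names here are illustrative.

```python
import hashlib

def device_fingerprint(characteristics: dict) -> str:
    """Combine device characteristics into a single, stable hash."""
    # Sort the keys so the same characteristics always produce the same string.
    canonical = "|".join(f"{key}={characteristics[key]}" for key in sorted(characteristics))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

seen_fingerprints = set()

def is_duplicate(characteristics: dict) -> bool:
    fingerprint = device_fingerprint(characteristics)
    if fingerprint in seen_fingerprints:
        return True
    seen_fingerprints.add(fingerprint)
    return False

# Hypothetical example: the same device entering the survey twice.
device = {"browser": "Firefox 89", "os": "Windows 10", "language": "en-CA",
          "timezone": "America/Toronto", "screen": "1920x1080"}
print(is_duplicate(device))  # False on the first attempt
print(is_duplicate(device))  # True on the second attempt
```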

 

 

What types of bad data does digital fingerprinting catch?

  • Digital fingerprinting helps identify data from good-intentioned people who answer the same survey twice because they were sent two invitations. This can easily happen when sample is acquired from more than one source. They aren’t cheating. They’re just doing what they’ve been asked to do. And yes, their data might be slightly different in each version of the questionnaire they answered. As we’ve already seen, that’s because people get tired, bored, and can easily change their minds or rethink their opinions.
  • Digital fingerprinting also helps identify data from bad-intentioned people who try to circumvent processes to answer the same survey more than once so they can receive multiple incentives. This is the data we REALLY want to identify and remove.

 

 

How do digital fingerprinting tools help market researchers?

Many digital fingerprinting tools are specifically designed to meet the needs of market researchers. They’re especially important when you’re using multiple sample sources to gather a large enough sample size. With these tools, you can:

 

  • Integrate them with whatever online survey scripting platform you regularly use, e.g., Confirmit, Decipher, Qualtrics
  • Identify what survey and digital device behaviors constitute poor quality data
  • Customize pass/fail algorithms for any project or client
  • Identify and block duplicate participants
  • Identify and block sources that regularly provide poor quality data

 

 

Screener Data Quality

In addition to basic data quality, researchers need to ensure they’re getting data from the most relevant people. That includes making sure you hear from a wide range of people who meet your target criteria.

 

First, rely on more than the key targeting criteria – e.g., Primary Grocery Shoppers (PGS). Over-reliance on one criterion could mean you only listen to women aged 25 to 34 who live in New Jersey.

 

By also screening for additional demographic questions, you’ll be sure to hear from a wide range of people and avoid some bias. For PGS, you might wish to ensure that at least 20% of your participants are men, at least 10% come from each of the four regions of the USA, and at least 10% come from each of four age groups. Be aware of what the census representative targets are and align each project with those targets in a way that makes sense.

 

Second, avoid binary screening questions. It may be easy to ask, “Do you live in Canada?” or “Do you buy whole wheat bread?” However, yes/no questions make it very easy to figure out what the “correct” answer is to qualify for the incentive. Offer “Canada” along with three other English-speaking nations and “Whole wheat bread” along with three other grocery store products. This will help ensure you listen to people who really do qualify.

 

 

Survey Question Data Quality

Once participants are past the screener, the quest for great data quality is not complete. Especially with “boring” research topics (it might not be boring for you but many topics are definitely boring for participants!), people can become disengaged, tired, or distracted.

 

Researchers need to continue checking for quality throughout the survey, from end to end. We can do this by employing a few more question quality techniques. If people miss on one of these metrics, it’s probably okay. They’re just being people. But if they miss on several of them, they’re probably not having a good day today and their data might be best ignored for this project. Here are three techniques to consider (a small scoring sketch follows the list):

 

  • Red herrings: When you’re building a list of brands, make sure to include a few made-up brands. If someone selects all of the fake brands, you know they’re not reading carefully – at least not today.
  • Low/high incidence: When you’re building a list of product categories, include a couple of extremely common categories (e.g., toothpaste, bread, shoes) and a couple of rare categories (e.g., raspberry juice, walnut milk, silk slippers). If someone doesn’t select ANY of the common categories or if they select ALL of the rare categories, you know they’re not reading carefully – at least not today.
  • Speeding: The data quality metric we love to use! Remember there is no single correct way to measure speeding. And, remember that some people read extremely quickly and have extremely fast internet connections. Just because someone answers a 15-minute questionnaire in 7 minutes doesn’t necessarily mean they’re providing poor quality data. We need to see multiple errors in multiple places to know they aren’t having a good day today.
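Here is a minimal sketch of how these three checks might be combined into a simple set of per-respondent flags. The thresholds, lists, and function name are illustrative assumptions, not industry standards; the point is that no single flag should disqualify anyone on its own.

```python
# Illustrative thresholds and lists; real projects tune these per questionnaire.
def quality_flags(selected_brands, fake_brands, selected_categories,
                  common_categories, rare_categories,
                  duration_minutes, median_duration_minutes):
    flags = []

    # Red herrings: choosing made-up brands suggests careless reading.
    if set(fake_brands) & set(selected_brands):
        flags.append("selected a fake brand")

    # Low/high incidence: no common categories or every rare category is suspicious.
    if not set(common_categories) & set(selected_categories):
        flags.append("selected no common categories")
    if set(rare_categories) <= set(selected_categories):
        flags.append("selected every rare category")

    # Speeding: far below the typical completion time (there is no single correct cut-off).
    if duration_minutes < 0.5 * median_duration_minutes:
        flags.append("completed far faster than typical")

    return flags

flags = quality_flags(
    selected_brands=["BrandA", "Zorbex"], fake_brands=["Zorbex"],
    selected_categories=["raspberry juice", "walnut milk", "silk slippers"],
    common_categories=["toothpaste", "bread", "shoes"],
    rare_categories=["raspberry juice", "walnut milk", "silk slippers"],
    duration_minutes=6, median_duration_minutes=15,
)
# One flag is probably fine; several together suggest the record needs review.
print(len(flags), flags)
```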

 

And of course, if you can, be sure to employ more interesting survey questions that help people maintain their attention. Use heatmaps, bucket fills, gamification, and other engaging questions that people will actually want to answer. A fun survey is an answered survey, and an answered survey gives you generalizable results!


 

What’s Next?

Every researcher cares about data quality. This is how we generate valid and reliable insights that lead to actionable conclusions and recommendations. The best thing you can do is ask your survey scripting team about their data validation and digital fingerprinting processes. Make sure they can identify and remove duplicate responders. And, do a careful review of your questionnaire to ensure your screener and data quality questions are well written and effective. Quality always starts with the researcher, with you!

 

If you’d like to learn how we can help you generate the best quality data and actionable insights, email your project specifications to our research experts using Projects at E2Eresearch dot com. We’d love to help you grow your business!

 

 


Tips for the First-Time Conjoint Analysis Researcher
By E2E Research | July 16, 2021

Researchers love conjoint analysis. It’s a handy statistical technique that uses survey data to understand which product features consumers value more and less, and which features they might be willing to pay more or less for.

 

It allows you to understand how tweaks to combinations of features could increase desirability and, consequently, purchase price and purchase rate. Essentially, it asks, “Would you buy this product configuration if you saw it on the store shelf right now?”

 

Technically, there are numerous ways to present conjoint questions but all of them invite participants to compare two or more things. For example:

 

  • Would you rather buy this in red or yellow?
  • Would you rather pay $5 for a small one or $4 for a large one?
  • Would you rather buy this one or the competitive brand?
  • Would you rather buy this one or keep the one you already own?

 

The comparisons can get extremely complicated as you strive to create scenarios that mirror the complicated options of real-life, in-store choices. This is because no two products have the exact same features. There are always multiple tiny or major differences among them, including brand, price, color, shape, size, functionality, etc.

 

As you see in the example conjoint question below, participants are being asked to select from among 5 different entertainment bundles, each with a different price and selection of options. Even though this question is nicely laid out, perhaps even nicer than what you might see in a store, it’s not a simple choice!

 

 

[Image: example conjoint analysis survey questions]


 

Quick Conjoint Dictionary

First, let’s cover some quick terminology commonly used with the conjoint method so that the tips we offer will make sense. (A small sketch after the list shows how these pieces fit together.)

 

  • Attribute: A characteristic of a product or service, e.g., size, shape, color, flavor, magnitude, volume, price.
  • Level: A specific measure of the attribute, e.g., red, orange, yellow, green, blue, and violet are levels of the attribute color.
  • Concept: An assembly of attributes and levels that reflect one product, e.g., a large bag of strawberry flavored, red, round candy for $4.99.
  • Set: A collection of concepts presented to a research participant to compare and choose from.
  • Simulator: An interactive, quantitative tool that uses the conjoint survey data to help you review consumer preferences and predict increases or decreases in market share based on potential product features and prices.
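To show how these terms fit together, here is a minimal sketch that builds concepts from attributes and levels and draws one choice set. The attributes, levels, and exclusion rule are made up, and a real conjoint study would use a balanced experimental design rather than random sampling.

```python
import itertools
import random

# Hypothetical attributes and levels (3 x 3, in line with the 3-to-5 rule of thumb).
attributes = {
    "flavor": ["strawberry", "lemon", "cola"],
    "size":   ["small", "medium", "large"],
    "price":  ["$2.99", "$4.99", "$6.99"],
}

# A concept is one combination of levels, one level per attribute.
concepts = [dict(zip(attributes, combo))
            for combo in itertools.product(*attributes.values())]

# Drop combinations you would never offer in-store (see the tips below).
concepts = [c for c in concepts
            if not (c["size"] == "small" and c["price"] == "$6.99")]

# A set is a handful of concepts shown together as one choice task.
random.seed(7)
choice_set = random.sample(concepts, k=4)
for concept in choice_set:
    print(concept)
```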


Conjoint Analysis Tips and Tricks

3 to 5: Across all attributes, levels, concepts, and sets, 3 to 5 is a good rule of thumb. With so many possible combinations of attributes, levels, and sets, the ask we’re making of participants could get overwhelmingly complicated and create a lot of cognitive fatigue. That’s why we suggest aiming for no more than 3 to 5 attributes, 3 to 5 levels per attribute, 3 to 5 concepts per set, and 3 to 5 sets. By ensuring that participants enjoy the process and can take the time to review each concept carefully, we can generate much better data quality.

 

Meaningful Levels: Choose attribute levels carefully. Do you really want to test 3 shades of blue or 3 flavors of apple? No. While you could choose price levels of $30, $32, and $34, they aren’t meaningfully different and wouldn’t create a lot of indecision on the store shelf. They wouldn’t create variation within your data. Try to include edge cases – options that are as far apart as you can make them while still being within the realm of possibility.

 

Be frugal with combinations: You already know there are combinations of attributes and levels you would never offer in-store so don’t waste people’s time and cognitive load testing them. Think carefully about which combinations of attributes and levels you would never offer together and exclude them from the test. For example, don’t waste your budget testing the least expensive price and the most expensive feature. Similarly, don’t test the value of adding an extra battery for a version of the product that doesn’t run on batteries.

 

Minimum number of shows: When testing a level, use it in at least 3 concepts for an individual person. Think of it in terms of a ruler – for quantitative metrics (e.g., price, length, volume, weight), you need to see whether the difference between Level 1 and Level 2 is perceived the same as the difference between Level 2 and Level 3.

 

Include competitors: The real market includes competitors, often many. People don’t shop for single brands in isolation and neither should they answer your conjoint questions in isolation. Include at least one key competitor in your test, and preferably at least two. Further, if your brand is relatively unknown, you may wish to incorporate a competitor that is also relatively unknown.

 

Include an opt-out: Sometimes when you’re shopping, you discover they don’t have what you’re looking for and you leave the store empty handed. Generating realistic data means we must do the same in our simulated shopping trip – let people select “None of these” and leave without choosing anything. Otherwise, people may be “tricked” into selecting options they would never choose in real life.

 

Easy to read: Remember that conjoint is trying to simulate decisions that would normally happen in-store. Part of the in-store experience is in-store messaging. You’ll rarely see long sentences and paragraphs in the store so avoid them in your conjoint questions too. Use words and phrases that are as close as possible to what someone might see at the store.

 

Use imagery: We already know that a conjoint task can be cognitively demanding. That’s why imagery helps. Not only does it help people to visualize the product on the shelf amongst its competitive brands, it also helps to create a more visually appealing task (mmmmm cookies!). If you can’t provide an image of your product, find other ways to incorporate visuals in the questionnaire.

 

Plan for a hold-back sample: When product development work is extremely sensitive or is associated with life and death decisions, e.g., medical or pharmaceutical research, don’t let your budget determine the validity and rigor of your work. Spend the money to get the sample size you truly need to test each attribute and level with the appropriate rigor. And, build time into the fieldwork and data analysis schedule to permit preliminary analyses and test the model. You might need to tweak attributes, levels, or sets prior to running the full set of fieldwork.

 

Don’t let the statistics think for you: You wouldn’t create an entire marketing strategy based on gender differences just because a statistically significant t-test said 14% of women liked something and only 13% of men liked it. It’s not a meaningful difference. The same thing goes for a conjoint study. Review the model yourself, carefully, regardless of how “statistically significant” it is. Think about the various options suggested by the data. The simulator might reveal that there is a set of attributes and levels that would take over the market but that doesn’t mean you must produce that combination. The human brain is mightier than the spreadsheet!

 

If you’re curious to learn about the different types of conjoint that are available, this video from Sawtooth Software, presented by Aaron Hill, shares details about a few types of conjoint. E2E Research is pleased to offer all of these types to our clients.

 

 

 

What’s Next?

Are you ready to find out what configuration of your products and services consumers would be most keen to purchase? We’d be happy to help you work through the most suitable combinations of attributes and levels and build a conjoint study that meets your unique needs.

 

Please email your project specifications to our research experts using Projects at E2Eresearch dot com.

 

 


8 Reasons to Invest in a Hybrid DIY Market Research Team
By E2E Research | July 3, 2021

I’ve argued for years that there’s nothing wrong with DIY research. It’s a pretty easy argument given I’ve been a DIY researcher for many years myself. Of course, I’ve also had extensive training and experience in research design and analysis so it would make sense that Do-It-Yourself research has often been my favourite path.

 

In reality, the problem isn’t DIY research. The problem is unskilled people not realizing that conducting valid and reliable research requires extensive training and experience. For example, as much as I’d love to DIY a brand new house for you, I have a feeling you wouldn’t be happy with it even if I read every single Dummies manual.

 

For the sake of this argument then, let’s consider that we’re only talking about DIY research where the person is a qualified researcher with an appropriate designation, e.g., a PRC from the Insights Association or a CAIP in Canada. (BTW, if you’re not already certified, doing so is a GREAT way to tell your clients and colleagues that you are a highly competent researcher who upholds the highest ethical standards.)

 

 

Advantages of DIY Research

Agility: Everyone has been in one of these two positions before: You just discovered you need to get a questionnaire into field RIGHT NOW, or you’re watching a questionnaire already in field and you notice that multiple research participants have just provided the same open-end answer. What do you do if it’s Friday at 6pm? You get it done! You don’t have to wait until your supplier gets in on Monday morning so that they can start scripting and be ready for field by Monday evening. When it comes to being agile, no one can get a survey in field or updated faster than a DIY researcher with direct access to their own scripting licence. DIY FTW!

 

Internal knowledge: Regardless of which side of the fence you usually do your research on, supplier or buyer, you’ve learned the hard way that no one can interpret brand data and tabulations better than someone who has full sight-line into the history, projected future, and context of the brand, its sister brands, and the company – the end-client insights team. The confidential research and proprietary knowledge those researchers leverage while designing and interpreting research cannot be matched by anyone else no matter how much experience they have.

 

Price!: It’s impossible to beat the price of DIY research. When budgets are tight and the work is essential, this makes the decision simple. But make this choice wisely. Read on to make sure you’re okay forgoing the potential advantages of managed research which could force you to unexpectedly dig into your wallet after the fact.

 

 

Advantages of Managed Research

Leverage breadth of experience: Working with a supplier that supports many other types of companies has huge advantages. They’ve seen failures and success in multiple types of projects, companies, and industries. They’ve seen how competitive brands and categories carefully craft questionnaires and discussion guides, interpret unusual data, and solve unexpected, complex business problems. They’re a warehouse of rare knowledge and experience that every client benefits from, even when no one notices. And, they won’t incorporate the unconscious, internal biases that you might have picked up along the way from your standard internal processes.

 

Engage experts: Most researchers are moderately familiar with a lot of different research techniques. And, most researchers are masters of a few techniques. But being an expert in Conjoint, MaxDiff, TURF analysis, JAR analysis, or segmentation doesn’t mean you’re also an expert at running focus groups, interviews, mystery shops, or IHUTs. When you’re able to identify your own unique set of skills, you can reserve them for the projects you’d be great at and leverage the expertise of other researchers who’d be far more effective at the other projects.

 

Focus on high value tasks: When you can avoid spending the bulk of your time doing basic tasks like scripting questionnaires and running volumes of tabulations and simple data analyses, you get to spend more of your time on the value-add components of your business – interpreting results, acting on results, and building your business. You get to spend your time creating positive change!

 

Finish more projects: There are only so many hours in the day. When you’ve got a dedicated team of researchers ready at your beck and call, you can design and complete far more than one concept test, pricing study, or customer experience study every 6 months. Rejoice in the fact that more of your key projects can get done with the attention they deserve, in a timely fashion, and before it’s actually too late and damage has been done.

 

Get creative: Using research suppliers opens up unlimited creativity. Imagine a multi-method, multi-country, multi-language study with brand new techniques applied in brand new ways. Oh my. I’m getting excited thinking about what that amazing study could look like! Ok, maybe you really don’t need to do that. But, with a larger team, you can certainly cast aside any limitations based on access to tools and build the EXACT project you need – not just the one that fits into your template.

 

 

Advantages of a Hybrid DIY Research Model

But really, why must we choose DIY research OR managed research? Why can’t we be DIY researchers sometimes, choose managed research other times, and benefit from the positives of both models?

 

A skilled researcher who has inherent knowledge of the brand partnering with an experienced research supplier who has in-depth and broad experience with research techniques presents the ultimate research experience. Over time, it can even lead to building a dedicated external team that’s always on call, whether it’s during seasonal highs or end-of-fiscal rush periods, or to get through that huge pile of long overdue work.

 

In the end, whether you choose DIY research, managed research, or a hybrid model, an informed choice is the best choice!

 

 

If you’re ready to work with a research partner who will help you generate great quality data and actionable outcomes, feel free to email your project specifications to our research experts using Projects at E2Eresearch dot com. We’d love to help you build an engaging questionnaire, script the questionnaire, run data analysis, and write a full report.

 

 


Trackers Suck. Here’s how market researchers can fix them right now.
By E2E Research | June 21, 2021

Researchers love trackers. At the same time, we also hate them. Trackers are designed to help us stalk brand metrics and compare them with those of sister brands and competitors over time, and build real-time dashboards that flag tiny issues before they explode into unresolvable problems. But the more the world changes, the more our trackers stay the same. The questions stay the same, the answers stay the same, and the insights… well, they become impossible to find.

 

 

Trackers are inherently problematic

One of the biggest complaints researchers have with trackers is that once they’re written, they can’t be changed.

Ever.

 

When we inevitably discover a question that is poorly written, no longer relevant, or simply wrong, we can’t touch it or we’ll introduce confounds invalidating the trendline for every subsequent question. Data quality is always top of mind for researchers who care about making valid and reliable generalizations.

 

 

Oh the times, they are a’changin

But wait. No matter how much we work to keep questions consistent for the sake of research rigor and validity, everything outside of the questions has changed since day one. Every research supplier constantly improves their techniques and processes over time – without getting our approval. Every research participant changes their demographics, internet providers, and digital devices over time – without getting our approval. Like it or not, third parties change the methodological foundation of our trackers every single day without our approval. They’ve embraced change, and it makes no sense for researchers not to embrace change too.

 

 

Who’s the boss?

Trackers are inanimate objects we personally create to suit our personal needs. Researchers need data that is valid and reliable. We need data that answers our questions and helps solve our challenges. We need to stop letting questionnaires be the boss of us and start making questionnaires work for us. We need to embrace change.

 

 

Choose change-resistant designs

Fortunately, researchers have methodological techniques that are designed to be resistant to change. If we build change into every questionnaire, change will have a vastly smaller impact on our data.

 

How can we do this?

 

Randomization! When each person receives answers (or questions) in a different order, it helps prevent confounds related to order. Adding an item to a randomized list greatly reduces its ability to affect subsequent items because everyone sees a different set of subsequent items. Make sure to randomize answer options at every appropriate opportunity. If it also makes sense to randomize the order of some questions, then do that too.

 

Individual presentation. Potential order effects can be reduced even more by combining randomization with individual presentation. Rather than showing a full list of items so that people can scan through the entire list, show items individually. Since everyone sees a different set of initial items, order effects are different for everyone and therefore greatly minimized over the full sample.

 

Subsets! If you’re accustomed to breaking long questionnaires into shorter, more manageable chunks for participants, you might already be using question subsets. For example, let’s say Q6 has 20 answer options – perhaps 20 brands or 20 product features. With subsets, each research participant gets only 10 answer options – perhaps three are the same for everyone, and the other 7 answers are randomly assigned. By design, no one sees every answer and your friendly, neighbourhood statistician can easily stitch the full questionnaire of 20 answer options back together. Need to add or remove an answer option? Go right ahead. Since half of the people wouldn’t have received that item anyway, you aren’t introducing a serious confound. Even better, everyone benefits from a shorter questionnaire!
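A minimal sketch of how that subset assignment could work, assuming 20 answer options with 3 anchors shown to everyone and 7 drawn at random. Seeding on the participant ID is an illustrative choice, not a requirement.

```python
import random

# Hypothetical Q6 list: 20 brands, 3 anchors shown to everyone, 7 assigned at random.
all_brands = [f"Brand {i:02d}" for i in range(1, 21)]
anchor_brands = all_brands[:3]
rotating_pool = all_brands[3:]

def assign_subset(participant_id: int, n_random: int = 7) -> list:
    # Seed on the participant ID so the same person always gets the same subset.
    rng = random.Random(participant_id)
    shown = anchor_brands + rng.sample(rotating_pool, n_random)
    rng.shuffle(shown)  # randomize presentation order as well
    return shown

print(assign_subset(participant_id=12345))
```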

 

 

Know what questions are carved in cement

Some questions should never change. There are only a few seriously important KPIs that get added to the norms/benchmarks database every time you complete a wave. They probably include:

 

  • Purchase intent
  • Recommendation
  • Satisfaction
  • Trust
  • Likeability
  • Believability

 

Identify which items on a questionnaire MUST stay the same. They’re the items that are part of every questionnaire ever written for every product line and SKU. From now on, keep them as close as possible to the beginning of the questionnaire. By ensuring this section always stays the same, with no potentially new and leading items before it, we can ensure these KPIs won’t be confounded by order effects.

 

And don’t get caught up in the idea that questions tied to financial incentives can’t be changed. Do you really want to incentivize the wrong KPIs and the wrong behaviors? Absolutely not!

 

 

Embrace change

 

Now here’s the hard part.

 

Change is good.

 

Track valid benchmarks: Tracking invalid data serves no purpose. Creating a brand new VALID benchmark serves a great purpose. Once you realize you’ve been tracking invalid data, it’s time to make a change and fix the problem. Similarly, once you realize you’ve missed answer options or used disrespectful language, it’s time to fix the problem.

 

Watch the world evolve: Change lets us account for our evolving society, culture, technology, and political atmosphere.

 

  • When did you change the sex and/or gender questions on all of your studies to be more respectful and inclusive? If you haven’t done so yet, this PDF from Insights in Color will get you started.
  • When did you add Facebook or Instagram as viable channels in addition to door-to-door salespeople, radio, and TV? Have you added TikTok to your list of channels yet? (You’d better!)
  • When did you add Madonna to your list of influencers? What about Beyoncé? What about Billie Eilish?

 

You made those changes and didn’t think twice because it was the right thing to do.

 

Plan to measure current issues: Build an entire section into your questionnaire that is all about change. If Section A is your unchangeable KPIs, make Section D completely new every single time. This quarter, it might be all about sustainability. Maybe next quarter it will be innovative packaging and the quarter after that will be all about diversity and equity.

 

Embrace fun! Change also lets us create questionnaires that are better able to capture the imagination of participants. Social networks and online games are fun because they leverage audio, video, swiping, and dragging. It’s time to change up your questionnaires so they are just as engaging.

 

 

.

 

What’s next?

It’s time for researchers to stop being pushed around by trackers. We know what we’re trying to accomplish and why. We know how change affects data. It’s time for us to be the boss of trackers and make them work for us! Embrace change!

 

Are you ready to design a useful tracker that generates great quality data using questions that are inherently engaging? Email your project specifications to our research experts using Projects at E2Eresearch dot com!

 


 

8 Engaging Question Types to Improve Participants’ Survey Taking Experience
By E2E Research | May 14, 2021

From Minecraft to Fortnite and from Pinterest to TikTok, there are innumerable highly entertaining ways for people to spend their free time with their cell phones, tablets, and computers. No matter the demographics of your audience, people of every age, gender, ethnicity and more have grown to love the swiping and dragging and audio/video capabilities of their favorite online hobbies. Participant engagement matters. A lot.

 

The only way for market, opinion, and social consumer researchers to compete with those experiences is to provide people with meaningful, realistic, and entertaining ways to communicate their product and service needs to companies.

 

Fortunately, the digital research experience of the 21st century has far surpassed the paper-cut and broken pencil tip experiences of the 20th century. We can now present research participants with visually accurate stimuli, static and animated imagery, audio and video prompts, and response options that go far beyond clicking in radio buttons and check boxes. If you can think of it, expert survey scripters can create it.

 

Here are eight question types that will help you build a more engaging questionnaire and inspire new ways to think about the research experience.

 

 

Create more realistic shopping moments.

I’ve yet to wander through a brick-and-mortar store where every product was presented to me as a black and white written description with no imagery. If a study doesn’t require the external validity of an in-store or facility shelf test, consider creating a questionnaire with high quality artwork, photographs, and animations that reflect a more realistic product selection experience.

 

Simulate a retail environment in the digital space where products are shown on a shelf, and then selected and dropped into a shopping basket. Include product details and prices as necessary. Include competitive brands on the shelf and give them compelling details as well.

 

 

 

 

Let the human mind work in a more natural way.

Traditional questionnaires list out the brand names in alphabetical order and often ask people to assign rank order numbers to them. The most desirable product is assigned the number 1 while the least desirable product is assigned the number 5 or 10 or some other larger number.

But that’s not how we really think about products. When we’re in the store, we look at all the packages, we pick up a few packages and put them back, we hold one closer and then the other closer, and we might actually lay them side by side in an order. A more personal experience can be simulated by using drag and drop questions that let people “pick up” product visuals, drop them into an order, and then drag them into a different order.

Similarly, when asked to rate product packages, websites, brochures, or other visual materials, it’s quite common for questionnaires to show an image and then pose a series of Likert scale questions. However, a Hotspot or Highlighter question with drag and drop pinpoints and outlines is more natural and engaging. Think about how people normally critique a package – they hold it, point to areas, and highlight sections with their fingers. Being able to replicate an in-person experience is far more natural and meaningful.

 



Cater to different communication styles.

Everyone communicates in different ways. Painters, authors, and musicians (and those of us who aspire to be one of them!) find it easier to share opinions and ideas in visual, written, or auditory ways. Further, some question types are better at capturing basic facts and straightforward opinions, while other question types are better at capturing feelings and emotions.

 

We owe it to ourselves and to research participants to give everyone the opportunity to give answers that truly reflect how they feel. Instead of presenting page after page of written questions and answers, consider incorporating some visual, more projective questions that speak to the soul and the imagination. It’s a great way to think about brand or corporate personality and mission statements!

 

 

 

Collect audio and video responses.

And of course, what about people who prefer to share their opinions and ideas verbally or visually? Digital devices make it very easy to share and capture both audio and video materials.

 

Instead of an open-end or “Please specify,” consider asking people to record themselves speaking. Similarly, ask them to take a photo or video of their fridge, pantry, medicine cabinet, desk, backyard, or car. We all know a picture is worth a thousand words. A video could be priceless!

 

 

 

Help participants with the math.

For quantitative researchers running tabulations and statistical analyses day after day, it’s easy to forget how intimidating math is for many people. Fortunately, our digital devices are ready and willing to help. At the most basic level, researchers can design questions that automatically do the math for participants – no more, “Please make sure your numbers add to 100%.”

 

Now, we can even convert counts and percentages into slider questions so that sums will always equal 5.000, 10, 100%, or $100. Banish the fear of math and make numerical responses far easier and more interesting!
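As a small illustration of the “do the math for them” idea, here is a sketch of the normalization a constant-sum slider widget might apply behind the scenes so answers always total 100. The function name and rounding rule are assumptions for the example.

```python
def normalize_to_total(raw_values, total=100):
    """Rescale raw slider positions so the recorded answers sum to `total`."""
    raw_sum = sum(raw_values)
    if raw_sum == 0:
        return [0] * len(raw_values)
    scaled = [value * total / raw_sum for value in raw_values]
    # Round every value but the last, then give the last the remainder
    # so the total is exact.
    rounded = [round(value) for value in scaled[:-1]]
    rounded.append(total - sum(rounded))
    return rounded

# Hypothetical example: a participant leaves three sliders at 20, 50, and 10.
print(normalize_to_total([20, 50, 10]))  # [25, 62, 13] -> sums to 100
```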

 

 

 

Say goodbye to grid questions.


Grids are old news. They’re boring, they’re taxing on the eyes, and they cause people to disengage and lose focus. Fortunately, there are many ways to redesign them. One of my favorite ways to present Likert scale grid questions is to present each item individually with clickable, color-coded answer options beneath it. As each item is answered, the next item automatically pops up. Easy peasy and fast! It’s great for short, easy-to-read questions.

 

There are many other alternatives for grid questions. You could drag each item or image onto the scale. You could slide each item or image across its own unique scale. You could drag each item up a ladder with 5, 7, or 9 steps or place each item somewhere on a five-level podium.

 

There are so many options beyond the typical grid that can make the questionnaire experience just a bit more interesting.

 

 

 

Get qualitative information from a quantitative tool.

When people agree to participate in a survey, they know they will be asked to click in circles and boxes, and select items from a list. Unfortunately, they’re often less interested in typing out long explanations of their answers.

 

However, when we convert a boring text box into an engaging storytelling exercise, it’s much more enjoyable to share information. Take a few minutes to figure out the story you want to hear from your customers. Work out a few story prompts and then guide participants through a virtual book with pages that actually turn. With a bit of creativity, sharing verbatims can be enjoyable.

 

 

 

Ask for their final opinion about the questionnaire.

You, the researcher, were in charge of 99% of the survey experience. You told participants what the questions were and you told them what their answers could be. Once you’ve reached the end of the survey, however, it’s time to let participants be in charge. End the questionnaire in a respectful but fun way by incorporating a question that uses a bit of creativity.

 

Ask for any additional comments that weren’t included in the questionnaire or if they’d like to share their opinion of the research experience. And make sure to act on their feedback!

 

 

 

In Sum

A survey incorporating all of these question types could be quite fun but there are a few rules.

  • Don’t go overboard and use engaging question types for every single question. Sometimes, traditional questions really are the best question types. Focus on the sections of the questionnaire that are particularly challenging or disengaging, and sprinkle little bits of fun throughout.
  • Don’t aim to use as many different question types as possible. Choose two or three that really meet your needs. Consistency makes for better data quality and it helps participants feel more comfortable with their task.
  • Rather than starting with a bang, try to end with a bang. If the only place to incorporate an engaging question is at the very beginning of a questionnaire, think about whether you really need that question. You don’t want question #1 to be amazing and then follow it up with ten minutes of boring, traditional questions.
  • Remember that more than a third of participants answer questionnaires on mobile devices. Be aware of the size and space limitations those devices have. Remember that not everyone will be in an environment where they can play sounds and movies. Choose question types that are appropriate for your audience, their locations, and their devices.

 

The widgets you see here are just a few of the more than 100 templated and fully customizable widgets we’ve already built for our clients. With your imagination and your knowledge of your products and your consumers, any of these widgets could be customized to meet your specific needs. Or, if you’ve been inspired and have an idea for a brand new question, let us know! We’d love to create an engaging question just for you. Your imagination is the limit!

 

Download our Questionnaire Engagement Share Sheet to learn how we help research companies throughout the entire survey design, scripting, analysis, and reporting process. Or, feel free to email your project specifications to our research experts using Projects at E2Eresearch dot com.

 

 


Annie Pettit Joins E2E Research as Chief Research Officer, North America
By Rupa Raje | March 1, 2021

New York, New York – March 1, 2021 – E2E Research, a full-service, end-to-end provider of market and competitive intelligence, market research, and analytics in North America, Europe, and APAC, today announced that Annie Pettit, PhD, CAIP, FCRIC, will be joining as Chief Research Officer, North America.

 

Annie Pettit joins with more than twenty years of experience in the marketing research industry, focused on analytics and survey research, data quality, and research standards. Prior to working as a research and communications consultant for four years, she served as VP Research Standards at Research Now (now Dynata), Chief Research Officer at Conversition Strategies, and VP Panel Analytics at Ipsos Interactive Services.

 

Annie is currently Chair of the Canadian Mirror Committee for ISO/TC 225 Market, Opinion, and Social Research, which is responsible for setting and maintaining standardization of the requirements for organizations and professionals conducting market, opinion, and social research, including ISO 20252, ISO 26362, and ISO 19731. She is also on the board of Canada’s CAIP-PAIM (Certified Analytics and Insights Professional) and a member of the CRIC (Canadian Research and Insights Council) standards committee.

 

“We’re very happy to welcome Annie to the E2E Research team,” said Rupa Raje, President at E2E Research. “Throughout her career, she’s consistently demonstrated that high standards of practice are of utmost importance. Her focus on quality and her respect for research participants is a good match for our team and for our clients.”

 

“Annie’s experience with advanced research design and data analytics is key,” said Yogesh Rana, COO at E2E Research. “She brings hands-on technical experience working with massive datasets, both from panel data and social media data, which will help us ensure our data and analytics products and services align well with our clients’ needs.”

 

 

About E2E Research

For more than ten years, E2E Research has specialized in converting enigmas into enlightenment for researchers and business leaders around the world. With a full range of market research, data analytics, and business intelligence solutions, and expertise in a range of industries, E2E handles the entire research process from complex questionnaire design to inspiring reports and everything in between as an extension of our clients’ team. E2E Research is a proud member of ESOMAR and is certified to ISO 27001.

Please connect with Rupa Raje, President, E2E Research, at Rupa.Raje@E2Eresearch.com.