How to generate more insightful reports
By E2E Research | April 11, 2022

We keep hearing that slow food is better food. The same principle applies to marketing research reports.

 

At its heart, the intention of slow food is to promote local products and traditional methods, to focus on quality over quantity, and to prevent over-production and waste.

 

Those key concepts are just as relevant for research reports. Rather than focusing on fancy ways of reporting, focus on effective ways of reporting. Rather than providing pages and pages of results, focus on the most important results. And rather than creating extravagantly detailed results, focus on creating insightful results.

 

Let’s break this down a bit more.

 

 

Statistical significance isn’t important

You heard that right. While focusing on statistical significance is a fast way to find cool numbers to slam into a report, I often don’t care about it. If 5% is statistically different from 6%, it’s not important unless that 1% difference is actionable and meaningful. In healthcare, 1% can be life or death, and I will care immensely – enough to consider replicating the results. But when it comes to choosing a package design, that 1% is sampling error, analysis error, random error – meaningless.

 

Instead of being driven by shiny, statistically significant p-values, slow down and think through which numbers are meaningful and actionable. Which numbers will create reliable and measurable differences in the market? Once you’ve decided that a significant difference is meaningful, then you can worry about it. And if something is significant but not meaningful, set it aside to consider potential applications later.
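To see how a “significant but meaningless” result arises, here is a minimal sketch of the 5% vs. 6% comparison using only the Python standard library. The counts are hypothetical:

```python
from math import erf, sqrt

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 5% vs 6% preference for a package design, n = 10,000 per cell
z, p = two_prop_z(500, 10_000, 600, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
# With samples this large, p < 0.01 -- "significant" -- yet the
# 1-point gap may sit inside sampling and measurement error, so
# significance alone doesn't make it actionable.
```

The point of the sketch: with a big enough sample, almost any gap clears the p < 0.05 bar, which is exactly why meaningfulness has to be judged separately.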

 

 

Every title should be a conclusion

Page and chart titles can be really easy to write. Pick out the biggest or smallest number on the page and copy it into the title space – “45% of participants chose concept B.” Done and done! But that’s a report no one will care about. It won’t get shared with the leadership team and you won’t be asked (or paid!) to write next year’s report.

 

Instead of focusing on data points, focus on what was learned on a specific page. Slow down and take the time to digest the 5 or 50 data points on the chart or table. What story does the collection of data points on that single page tell? What is surprising about the interactions among those data points? What is the overall conclusion based on the themes of answer options? What is the key recommendation you would draw from that page? The biggest (or smallest) number is often not the important story. But once you slow down to figure out what the real story is, then you’ve got a great page title.

 

 

Alphabetical is never the right order

When we write questionnaires, we order the questions in terms of what makes sense to the research participants reading each page. From the researcher’s point of view, it would be really fast to create a final report where page 1 is a chart for Question 1 and page 100 is a chart for Question 100. But that type of reporting is nonsensical to readers. Indeed, some questions are superfluous – they are there to focus participants’ attention or to direct them to a certain line of thinking.

 

Every study is designed to answer very specific research objectives, and those objectives should be front and center in the research report, not buried in alphabetical order. Take the time to think through which questions provide a holistic answer to Objective 1, and prepare those together at the beginning of the report. Even if that means analyzing questions 17, 43, and 96 together – out of order!

 

Once you’ve understood all of your data and the results, reorder all the slides (and then edit all the titles) so they tell a logical story from beginning to end that reflects the research objectives. Forget the order of questions and focus on the order of the story.

 

 

Every header in the executive summary is one sentence of the full story

Once you’ve created your 20 (or 50) page story, it’s time to build the executive summary. This is never just a listing of the coolest data points. As fast as that approach is, you’ll once again end up with a report that lands in the recycle bin without ever being forwarded. With a fully written, reordered, and edited report in hand, you need to create an executive summary that serves the needs of different key stakeholders.

  • First, the CEO and CMO should be able to read the title of the executive summary and know exactly what their next business move should be for next year. That is story #1 – punchy and obvious.
  • Second, the VP of Brand or Insights should be able to read only the headers of the executive summary and know how to plan for this year. That is story #2.
  • Third, the Brand Manager or Insights Manager should be able to read all of the bullet points in the executive summary to plan for tomorrow.

 

Your goal is to ensure that each key stakeholder can read as little as possible to get their job done. Of course, if you’ve gone slow and taken the time to write an amazing report, you might just find that all of your stakeholders have read the entire executive summary and every other page as well!


 

 

 

Highlight oddities

The fastest way to finish illustrating every slide is to use call-outs and arrows to highlight the smallest and biggest numbers, the winners and losers, the leaders and the laggards.

 

But we already know this serves no purpose. It’s not helpful and it’s not insightful. Any reader can glance at a chart or table and instantly see what is at the bottom or the top of the list. X does not mark the spot!

 

Your job as an analyst and report writer is to take the time to read between the lines and find the hidden treasures. Which number is sneaking by in an unexpected place? What trend is the reverse of expected?

 

Highlight that oddity so your reader doesn’t have to waste time searching out what you already described in writing. The green outline here saves the reader time by drawing their eyes immediately to the interesting data point. If they want to look at the top and bottom or left and right of the chart as illustrated by the unhelpful red box, they can easily do that too.


 

Embrace white space

Sometimes, you’ll need to incorporate an extremely complex chart that has lots of answer options or segments. It would be quick to just stretch the borders so that it is really tall or wide. You must resist the urge!

 

Treat the white space around the edge of a slide as sacred, never to be infringed upon. That white space creates breathing room for the eyes. It makes charts and tables easier to read, more so than stretching charts out wider across the page.

 

If a chart or table simply won’t fit, find other ways to make complicated charts easier to read. Convert it into two charts – perhaps organized by segment or outcome or culture.


 

 

Never use default charts

It sure is fast to use default charts. The fact that everyone can spot them from a mile away is not the reason to stop using them though. If you take the time, default charts can always be improved to highlight the results in a more valid and insightful way.

 

Take the time to ensure the axis starts at 0, not -5. Reduce the number of guiding lines to just 2 or 3 – or even none! Simplify the colors or, at least, convert them from fluorescent to neutral so you don’t burn your readers’ eyes. Make sure the colors or textures of the lines are different enough that people who are color blind can read them – is there even a difference between March and July in this chart? And, is this even the right kind of chart?
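If you build charts programmatically, the same fixes apply. Here is a minimal matplotlib sketch (all data and labels are hypothetical) that overrides the usual defaults: axis anchored at 0, only a few gridlines, neutral colorblind-safe colors, and distinct line styles and markers so series stay distinguishable even in grayscale.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator

# Hypothetical tracker data: brand awareness (%) by month
months = ["Jan", "Mar", "May", "Jul", "Sep", "Nov"]
brand_a = [22, 25, 24, 28, 27, 30]
brand_b = [18, 19, 23, 22, 25, 24]

fig, ax = plt.subplots()
# Neutral, colorblind-safe colors AND distinct line styles/markers
ax.plot(months, brand_a, color="#0072B2", linestyle="-", marker="o", label="Brand A")
ax.plot(months, brand_b, color="#E69F00", linestyle="--", marker="s", label="Brand B")

ax.set_ylim(bottom=0)                        # axis starts at 0, not -5
ax.yaxis.set_major_locator(MaxNLocator(3))   # just 2-3 guiding lines
ax.yaxis.grid(True, linewidth=0.5, alpha=0.4)
ax.xaxis.grid(False)
for side in ("top", "right"):                # strip unneeded borders
    ax.spines[side].set_visible(False)
ax.legend(frameon=False)
fig.savefig("awareness.png", dpi=150)
```

None of these choices are exotic; they are a few lines on top of the default chart, which is the whole argument for never shipping the default.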


 

 

Refuse to use chart junk

Sometimes, a slide needs to be prettied up. Perhaps you’ve got 5 slides in a row that are text after text after text. It’s time to introduce some design and get the eyeballs back on track.

 

However, this is NOT the time to drop in all of your favorite cute clip art icons. This is the time to consider the story you want to tell and whether that story requires clip art – the answer is most often no. Clip art is the PPT user’s version of chart junk (thank you, Edward Tufte, for that term!).

 

Instead, opt for subtle designs that don’t detract from the message. Or, use photographs as you see in the executive summary above. Plenty of websites offer free images, e.g., Pexels, Pixabay, Unsplash. And if those sources don’t provide the diversity you need, there are additional sources such as Disabled and Here, Jopwell, and Nappy.

 

 

 

What’s Next?

We’d love to show you what a great report looks like. If you’re ready to discover top quality insights about your buyers, brands, and business, email your project specifications to our research experts using Projects at E2Eresearch dot com. We’d love to help you turn your enigmas into enlightenment!

 

 

 


What is a Census Representative Sample?
By E2E Research | March 29, 2022

The people researchers choose to share their opinions in marketing research can make a huge difference in the quality of answers we receive. That’s why it’s important to understand the research question and who is best suited to answer it.

 

Let’s consider one type of sample that researchers often rely on – a census representative sample.

 

 

What is a census representative sample?

You might also hear these referred to as ‘Census Rep’ samples. A census rep sample requires access to census data, something that is typically generated by large-scale government surveys completed by millions of residents or citizens. In the USA, that’s census.gov and in Canada, that’s Statistics Canada.

 

A census rep sample can be designed to reflect any specific group of people. The key consideration is that the sample of completed questionnaires reflects the larger population on important criteria. The sample could reflect an entire country (e.g., USA, Mexico, Canada), a state or province (e.g., California, Quebec), or a city or town (e.g., Boston, Ottawa). This type of census rep sample is reasonably easy to define.

 

Another type of census rep sample can be defined by target group behaviors or characteristics. For instance, you might be interested in a census rep sample of people who smoke or who have diabetes. Of course, building these types of census rep samples is far more difficult because government census data tends to be set up to capture basic demographics like age and gender, rather than behaviors like smoking or ailments like diabetes.

 

 

When would I use a census representative sample?

Census rep samples are extremely important for at least a couple of research objectives.

 

First, when you need to calculate incidence rates for a product or service, you first need a representative group of your target audience. You need to be able to define your population before you can know what percent of them uses a product or performs a behavior.

 

Second, census rep samples are extremely important for market sizing. Again, you need to generalize from a representative group of your target audience before you can estimate the percent of people who might qualify to use your product or services.
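The arithmetic behind both objectives is simple once a representative sample is in hand. Here is a sketch with entirely hypothetical numbers:

```python
# Hypothetical incidence-rate and market-sizing arithmetic.
# All figures below are made up for illustration.
screened  = 2_000                 # census-rep respondents screened
qualified = 260                   # met the product's usage criteria

incidence = qualified / screened  # 13% incidence rate

population  = 31_000_000          # assumed adult population of the region
market_size = population * incidence
print(f"Incidence: {incidence:.0%}, market size: {market_size:,.0f}")
```

Both numbers generalize from the sample, which is the whole point: a skewed sample skews the incidence rate, and the market-size estimate inherits that skew.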

 

 

Why is a census representative sample important?

Creating a census representative sample is extremely important. You could get into trouble if you recruit a sample of research participants who don’t look like actual users.

 

You might gather opinions from too many older people, too many women, too many highly educated people, or too many lower income people. Your final research conclusion might be based on opinions collected from the wrong people and lead to development of the wrong product or product features.

 

 

An example of a census representative sample

Let’s consider an example where we want to determine which flavor of pasta sauce to launch in a new market – California. We’ve got two delicious options – a spicy jalapeno version and a mild portobello mushroom version.

 

We know people from different cultures and ethnic backgrounds have very different flavor preferences so we need to ensure that the people who participate in our research will accurately reflect the region where we will launch this new pasta sauce.

 

Now, we could recruit and survey a sample of people based on a basic quota that will help make sure we hear from a range of people. It might look like what you see in the first column of the table – even splits among each of the demographic groups with a bit of estimation for ethnic groups. But that’s not actually what California looks like. Instead, let’s build a census rep sample matrix based on real data.

 

To start, we need to define a census rep sample of California. First, we find those people in a census dataset. Then, we identify the frequencies for each of the key demographic criteria – what is the gender, age, ethnicity, and Hispanic background (as well as any other important variables) of the people who live in California. Fortunately for us, this data is readily available. On the census.gov website, we learn that in California, 50% of people are female, 6% are Black, and 39% are Hispanic.

 

Now we can recruit a sample of people from California whose final data will match those demographic criteria – 50% female, 6% Black, and 39% Hispanic. You can see just how different those numbers are from the original basic quotas!

 

In the last two columns, you can see that we’ve even split out the criteria by gender (even better, you can do this based on the census data). This will ensure that one of the age groups isn’t mostly women or one of the Hispanic groups isn’t mostly men. When we nest our criteria within gender, we end up with a nested, census rep sample. Nested demographics are the ideal scenario but they do make fulfilling the sample more costly and time-consuming. You’ll have to run a cost-benefit analysis.
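To make the quota matrix concrete, here is a minimal sketch of nested quota allocation. The 50% female and 39% Hispanic figures come from the example above; note that crossing the proportions independently is a simplification – a true nested census rep matrix would use the joint frequencies from the census data rather than assume independence.

```python
# Hypothetical nested quota allocation: Hispanic-background quotas
# nested within gender. Crossing marginal proportions assumes
# independence; real census data provides joint frequencies.
TARGET_N = 1_000
gender = {"female": 0.50, "male": 0.50}
background = {"hispanic": 0.39, "non-hispanic": 0.61}

def nested_quotas(n, outer, inner):
    """Allocate completes to each outer-x-inner quota cell."""
    return {
        (o, i): round(n * op * ip)
        for o, op in outer.items()
        for i, ip in inner.items()
    }

quotas = nested_quotas(TARGET_N, gender, background)
for cell, completes in sorted(quotas.items()):
    print(cell, completes)   # e.g. ('female', 'hispanic') 195
```

The output is the recruiting target for each cell, which is exactly what a field team needs to fulfill a nested census rep sample.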

 

 

What’s Next?

Are you ready to build a census representative sample for your next incidence rate or market sizing project? Email your project specifications to our research experts using Projects at E2Eresearch dot com. We’d love to help you turn your enigmas into enlightenment!

 

 

 

New! Raven Data Dashboards for Inquisitive Minds, from E2E Research
By E2E Research | January 26, 2022

What’s the first thing that comes to mind when you think of ravens?

For me, it’s how smart and inquisitive they are. They’re incredible problem solvers, always investigating the world around them and evaluating their options. Their intelligence has been thoroughly tested and well documented by nature experts around the world.

For example, this clip shows a raven that has learned to solve a complicated puzzle using sticks and stones (BBC Earth, 4 minutes).

 

Dashboards As Insight Tools

Just as ravens use tools to solve puzzles, researchers use data dashboards to solve puzzles. When we drop our data into dashboards, we manipulate the data, evaluate our hypotheses, and test and interpret outcomes. This process requires intelligence, cunning, and intense curiosity.

All this to say that the E2E Research team is excited to bring our new, proprietary Raven dashboard out of beta testing and into the limelight. Whether your focus is marketing, consumer, or social research, we know that Raven dashboards will help spark your curiosity so that you can convert enigmas into enlightenment and grow your brand.


 

Key Features of Raven Dashboards

Regardless of whether you’re a devoted data user or a hesitant data avoider, we’ve made sure that Raven is easy to use.


 

By Researchers, For Researchers

Our researchers, developers, and engineers have years of experience building highly customized, complex dashboards. We’ve learned exactly what features researchers and marketers need to unravel problems and get to the insight. And, it’s time for simple and small projects to benefit from dashboards too!


 

Beautiful Charts and Tables

A variety of charts and tables are immediately accessible from the main menu.

Instantly switch your charts from horizontal to vertical orientation and back again to more clearly display labels or improve the readability of data.

With the click of a button, instantly switch a chart into a detailed table with decimal places.

Every data point is labeled and full details about that point are available on mouse-over.

 

 

 

Insightful Crosstabs and Filtering

Move beyond univariate analysis and evaluate differences among demographic and psychographic segments.

Use crosstabs to look at interactions among multiple variables. Choose the key variable and the cross-tab variable at the top of the chart/table.

Use filtering to home in on tight demographic or psychographic segments of consumers and customers. Select only the segment you are interested in using the variables in the left-hand menu, or filter from within the legend.
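For readers who work in code rather than a dashboard, the same crosstab-and-filter pattern takes a few lines of pandas. The respondent data below are made up for illustration:

```python
import pandas as pd

# Hypothetical respondent-level survey data
df = pd.DataFrame({
    "segment": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "gender":  ["F", "M", "F", "M", "F", "M"],
    "aware":   [1, 0, 1, 1, 0, 1],   # brand awareness (1 = aware)
})

# Crosstab: awareness rate within each age segment
xtab = pd.crosstab(df["segment"], df["aware"], normalize="index")

# Filter to a tight segment (female respondents), then re-crosstab
female = df[df["gender"] == "F"]
xtab_f = pd.crosstab(female["segment"], female["aware"], normalize="index")
print(xtab)
```

`normalize="index"` turns raw counts into within-row percentages, which is usually what a crosstab reader wants to compare across segments.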

 

 

Shareable PPT and PDF Exports

Quickly export any chart or table into PowerPoint. Drop the chart into your branded template and edit the size, colors, fonts, and more to meet your specific needs.

For an extra bit of security and to ensure that data aren’t accidentally changed, charts and tables can also be exported to PDF files.

 

 


 

What have our partners said about Raven so far?

I wanted to send you a personal email to congratulate you on the dashboard. We presented it to the clients and they were amazed by the tool, the style and the intuitivity. So, I thought it was important to share this with you and to tell you that I was impressed with the progress you made.

 

Oooohhhhhhh wow. That was SOOOOOOO cool! The visual information is a quick read once you get used to looking at it. Our clients better love this and be at least as impressed as I am. 😉 Thanks all who built this. CONGRATULATIONS TEAM!

 

 

 

Competitively priced

Raven dashboards are competitively priced for both one-time studies with small sample sizes, as well as large, multi-country, multi-wave trackers. Simple projects are ready for you to dive into in as little as one day!


 

Ready to try Raven?

Are you ready to dive into a beautiful, easy-to-use dashboard to discover top quality insights about your buyers, brands, and business? Ask for a live demonstration or email your project specifications to our research experts using Projects at E2Eresearch dot com. We’d love to help you turn your enigmas into enlightenment!

 


 

 

About E2E Research

For more than ten years, E2E Research has specialized in converting enigmas into enlightenment for researchers, business leaders, and insights companies around the world.

 

As an ISO 27001 certified, ESOMAR corporate member, we offer a full range of market research, data analytics, and business intelligence solutions to help you extend your services, fill the gaps, and create more value for your clients. Services include research and questionnaire design/analysis/reporting, data science and analytics, multiple panel management, scripting/hosting, data validation, digital fingerprinting, tabulation, qualitative coding, written reports, infographics, real-time digital dashboards, and mobile apps.

 

Award winning research services and solutions from End-to-End.

 

NewMR Webinar with Annie Pettit: Statistics are dead. Long live statistics!
By E2E Research | October 28, 2021

NewMR hosted their event focusing on New Thinking on Wednesday, November 10, 10am Eastern. We’re happy to share all three recordings below. Enjoy!

 

Annie Pettit, E2E Research: Statistics are dead. Long live statistics!

 

  • It’s time to stop letting statistics do the thinking for us. I’ll share how it’s finally time to give up on statistics and learn how to interpret and act on research results in more impactful ways.

 

 

Ray Poynter, Potentiate and NewMR: The Implications of Democratising Insights for Research

 

  • 50% of insight projects are conducted internally by clients. Research is being democratised, a shift enabled by the explosion in the number of platforms. Ray will highlight the implications of more research being conducted by people who may have less research knowledge, but who have a greater topic understanding (& with the ability to implement results).

 

 

Stephen Cribbett, Further: The Mindful Consumer’s Contradiction

 

  • People are more concerned about the future survival of the planet and society than ever before, but this isn’t shifting the way people shop and consume as much as you think it is. This presentation explores the say-do gap and provides useful methods and techniques that researchers can deploy to surface it and set about changing it.

 


 

What is big data analytics and why does it matter?
By E2E Research | September 30, 2021

Marketers and researchers use the word ‘analytics’ to describe many different things that can be done with digital data. Without a common understanding, it can be easy to misinterpret what a client actually needs and end up assigning project tasks to the wrong people, costing jobs inaccurately, and not meeting client expectations.

 

In this post, we’ll take a deep dive into the different interpretations this word can have to ensure that both clients and suppliers are on the same page when it comes to extracting relevant insights from myriad datasets about buyers, brands, and businesses.

 

Before we get into the details, you might appreciate this short introduction to data analytics from The Career Force on YouTube.

 

 

 

Types of Data

First of all, let’s look at some types of data that business leadership, marketers, brand managers, and researchers have access to in order to better understand consumer and market enigmas.


Primary research data

Primary research data is generally considered ‘small data’. They’re easily stored in traditional spreadsheets like Excel and the files are small enough to be emailed without getting stuck in your outbox or flagged as spam. These data tend to represent people’s opinions and perceptions about various topics asked of them in a quantitative questionnaire, or a qualitative interview or focus group.

  • Ad hoc survey or interview data: Often under 1,000 records and under 100 variables. Normally focused on one brand or topic. Qualitative datasets converted to quantitative formats may have fewer records but many more, or much larger, variables.
  • Tracker survey data: When gathered across multiple brands or countries, may be up to 50,000 records and a couple hundred variables. Normally focused on one product category though they may shift in focus from time to time.


Business data

Business data is often created in passing – as something happens in the company, a physical or digital record is created. Created and stored over years and in many disparate formats, these records are used to fulfill customer requests, manage employees, or keep track of product development. In many cases, these data are left lying around, ignored on servers, collecting virtual dust, and not leveraged for the insights that lie within.

  • Employee data: Records of retention, satisfaction, reviews, salaries, promotions, complaints, departments and more can be transformed and standardized as variables for statistical analysis.
  • Customer data: This is where we start to use the phrase “big data.” Transactional data reflecting purchases, SKUs, prices, times, dates, and more can come in datasets of millions or trillions of records with thousands of variables. Click-stream data gathered from websites can be exponentially more massive as every tiny movement and action made by a finger, pen, or mouse on digital screens is tracked. These data are already collected in standardized datasets and ready to be reformatted or transformed into specialized datasets for analysis.
  • Business data: Executives are often most interested in these data – revenue, costs, finances, operations, inventory, supply chain, & logistical data. These data, also usually available in standardized datasets, are often summarized from individual level data but are even more valuable at the individual level.


Secondary research data

Secondary research data is all-encompassing. It can include any type of primary research or business data that were collected for some other purpose, whether by yourself, someone else at your company, or someone at a different company. As such, you might have access to small survey datasets, massive transactional datasets, or compiled and summarized datasets. In addition to the primary research and business data already described, it could include:

  • Third party data: A huge range of data types and sizes can be purchased from third parties that create, curate, and collate many sources of data, potentially terabytes of individual or summary level data.
  • Social media data: Originally created to communicate a specific message to a specific person (or persons), social media data can be gathered and used for purposes other than originally intended. These data may include information about brands, people, and companies, date, time, geography, sentiment, and more. It may need to be transformed and standardized but a wealth of insights exist here as well.

 

Types of Analyses

There are three categories of analytics and skill-sets that might be required in the course of a research project. 


Standard analytics

Most quantitative market researchers have a broad understanding of the theory and application of statistics. They know when and why to apply certain types of analyses to achieve specific research goals. Specifically, they have a lot of experience interpreting massive data tabulation files and running standard survey analyses to identify patterns and understand what happened and why.

They focus mostly on:

  • Types of data: Primary data, usually quantitative survey data
  • Types of analyses: Correlations, t-tests, chi-square, means, standard deviations, ANOVAs, descriptive and diagnostic statistics
  • Analysis tools: Menu driven SPSS, Excel, data tabulations
  • Outputs: PPT reports, static Excel reports
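As a sketch of what those standard analyses look like in practice (using scipy rather than menu-driven SPSS; the ratings and counts below are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point satisfaction ratings from two segments
group_a = np.array([4, 5, 3, 4, 5, 4, 2, 5, 4, 3])
group_b = np.array([3, 2, 4, 3, 2, 3, 4, 2, 3, 3])

# Descriptive statistics
mean_a, sd_a = group_a.mean(), group_a.std(ddof=1)

# Independent-samples t-test: do the segments differ on average?
t_stat, p_val = stats.ttest_ind(group_a, group_b)

# Chi-square test on a 2x2 awareness-by-region crosstab
observed = np.array([[120, 80],
                     [90, 110]])
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)
print(f"t = {t_stat:.2f}, chi2 = {chi2:.2f} (df = {dof})")
```

These are exactly the descriptive and diagnostic statistics listed above, just scripted instead of read off a tabulation file.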


 Questions to Find Out If This Is The Goal
  • Will the analyses focus on details from the data tabulations?
  • Do you need insights beyond what is covered in the data tabulations?
  • Do you need anything beyond descriptive statistics like means, standard deviations, and box scores?


 Possible Research Questions


Advanced analytics

Advanced analytics are usually conducted by people who have specialized training and expertise in statistics. They are experienced with non-standard and special cases of statistical tests that can’t be determined from data tabulations. Advanced analytics can help us understand what happened, why it happened, and predict what is likely to happen next.

They focus mostly on:

  • Types of data: Primary research data, small business datasets, biometrics data
  • Types of analyses: All of the standard analytics plus linear / logistic / multiple regression, conjoint, MaxDiff, TURF, factor analysis, cluster analysis, segmentation, discriminant analysis, perceptual mapping, special cases of standard analytics, predictive analytics, forecasting, and more
  • Analysis tools: Menu or script driven SPSS, SAS, R, Python
  • Outputs: PPT reports, static or dynamic Excel reports, user-guided dashboards, simulators
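As one concrete example from that list, a cluster-analysis-based segmentation can be sketched with scipy's k-means. The data here are synthetic, standing in for standardized survey attitudes:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(42)
# Synthetic respondents: two standardized attitude scores
# (price sensitivity, brand loyalty) with two latent segments
seg1 = rng.normal(loc=[-1.0, 1.0], scale=0.3, size=(50, 2))
seg2 = rng.normal(loc=[1.0, -1.0], scale=0.3, size=(50, 2))
X = np.vstack([seg1, seg2])

# Partition respondents into 2 consumer segments with k-means
centroids, labels = kmeans2(X, 2, minit="++", seed=42)
```

In a real project you would choose the number of segments with fit statistics and then profile each segment against demographics and behaviors; this sketch only shows the mechanical step.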


 Questions to Find Out If This Is The Goal
  • Do you need to segment people or products into groups?
  • Do you need to predict purchases or forecast sales?


 Possible Research Questions


Business Analytics/Intelligence

Answering business intelligence questions to improve strategic decision making and create a competitive advantage normally requires advanced expertise in both statistics and data management. That skill set is often described as data science. Of course, for maximum effectiveness, you would also want this person to have extensive experience with marketing and consumer data.

These experts focus mostly on:

  • Types of data: Big data, business data, transactional data, logistics, employee data, real-time or near-time data
  • Types of analyses: All standard and advanced analytics, plus data transformation and manipulation, data fusion, data mining
  • Analysis tools: Python, R, SAS, SQL, machine learning, AI
  • Outputs: PPT reports, static or dynamic Excel reports, user-guided dashboards, simulators
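A tiny illustration of the data-fusion end of this work: joining transaction records to inventory to flag SKUs at risk of stock-out. All names and numbers are hypothetical:

```python
import pandas as pd

# Hypothetical transaction and inventory extracts
transactions = pd.DataFrame({
    "sku":   ["A1", "A1", "B2", "C3"],
    "units": [2, 1, 5, 3],            # units sold this period
})
inventory = pd.DataFrame({
    "sku":     ["A1", "B2", "C3"],
    "on_hand": [10, 4, 20],           # units currently in stock
})

# Fuse: total demand per SKU, joined to stock on hand
demand = transactions.groupby("sku", as_index=False)["units"].sum()
fused = demand.merge(inventory, on="sku")

# Periods of cover = stock / demand; flag SKUs below 1.5 periods
fused["cover"] = fused["on_hand"] / fused["units"]
low_stock = fused[fused["cover"] < 1.5]
print(low_stock)
```

Real business-intelligence pipelines do the same join across millions of records and many more sources, but the shape of the problem – fuse, summarize, flag – is the same.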


 Questions to Find Out If This Is The Goal
  • Do you need to combine different types of data from multiple sources?
  • Do you need to make sales or logistics predictions in real-time?


 Possible Research Questions
  • Why are we unable to keep warehouses stocked with the right products at the right time?
  • Where are we dropping the ball with our processes and logistics, and how can we solve small problems before they become big problems?
  • How can we increase our efficiency to improve our overall profitability?
  • When a customer has selected a single product, what other products would they be most interested in?
  • Can we drop a rarely purchased product without causing our highest value customers to switch retailers?
  • How can we ensure optimal inventory for every SKU using existing business data? – A case study


What’s Next?

There’s a lot of overlap among various analytical techniques and objectives. One project may require only standard analytics whereas another may require all of them. However, once the research problem and the available datasets are clearly defined (not as easy as you’d think!), your analyst will know which techniques and software are best suited to uncover your answers.

If you’re ready to gather top quality insights about your buyers, brands, and business, please do email your project specifications to our research experts using Projects at E2Eresearch dot com. We’d love to help you turn your enigma into enlightenment!

 

 


When to Leverage a User-Guided Market Research Data Dashboard
By E2E Research | September 9, 2021

When you’re immersed in data and numbers every day, all day long, it’s easy to forget that numbers can be intimidating. However, built with care and purpose, real-time dashboards are a great way to help non-technical people feel more comfortable with numbers and encourage them to dig into real-time data without feeling overwhelmed.

 

Regardless of how comfortable people are with numbers, everyone needs to understand and analyze their KPIs and critical data points to make more informed decisions that will result in business growth. As with any tool, there are good reasons to choose one data presentation tool over another.

 

With that in mind, let’s first consider under what circumstances dashboards are preferable and second, how to set up an actionable dashboard that people will want to use.

 

 

Optimal Use Cases for Digital Dashboards

Huge Sample Sizes

No one likes a long PPT report. But when sample sizes are huge, forcing a potentially massive set of results into a short static report can minimize the potential of the data you so carefully collected. Think about it in terms of a global report covering a brand in 15 different countries. It doesn’t make sense to write 15 reports that are each 20 pages. However, it does make sense to capture high-level global insights in one report and then provide dashboard access to the nuanced results within each country.

 

  • Trackers that accumulate thousands of records on a daily, weekly, monthly, or quarterly basis
  • Global point-in-time studies covering many SKUs, languages, and countries
  • Transactional/purchase datasets covering hundreds of SKUs, hundreds of retailers, and millions of individual, consumer purchases

 

 

Time-Dependent Reporting

Whether it’s tracker data from the last 6 months or historical business records from the last 6 years, dashboards can help you consolidate terabytes of data into meaningful chunks. Discover insights that have been hidden in the data because it wasn’t previously reviewed with a certain question in mind or because year-over-year comparisons weren’t previously available.

 

  • Monitor brand health and campaign effectiveness year-over-year
  • Monitor seasonal employee satisfaction and engagement
  • Review the past, monitor the present, and predict the future

 

 

Access Real-Time Insights

When you’ve waited 4 weeks since the start of a project, 2 weeks since it went in field, and you still have to wait 2 more weeks until tabulations and a draft report are ready, you know the power of accessing real-time data. Dashboards can be the answer to quick insights, particularly when a problem appears seemingly out of nowhere!

 

  • Identify problematic business practices and roadblocks from transactional or logistics data in real time
  • Catch consumer-reported problems in social media data or tracker data before they become full-blown crises

 

 

Mine for Insights

It’s impossible to anticipate every possible, meaningful analysis prior to writing a report. With a user-guided dashboard, you can check hunches, test wild scenarios, and discover insights that were secondary (or tertiary) to the original research questions or that weren’t obvious at the time of writing. And, these analyses can be done even by those who don’t have access to, or knowledge of, SPSS, SAS, or the original data tables.

 

  • Dig into data beyond the original research objectives
  • Uncover serendipitous insights that would never otherwise be discovered

 

 

Reach Multiple Audiences

Most written reports are tailored for a single audience. But we know that research data is invaluable to many groups of people. With an interactive dashboard, each user can focus on the level of detail that will help them make the best decisions in their role, and all of them can be using the same raw data source for a consistent message.

 

  • Sales/Marketing Teams: Dashboards can help track the performance of individual salespeople, monitor the pipeline and conversion rates, and evaluate marketing campaigns. All of this helps teams understand how they’re performing and where to direct their efforts.
  • Brand Managers: Brand managers rely on analytical dashboards to track campaigns, product development, customer satisfaction, and more. Dashboards help them track key metrics and spot and resolve issues before they become much bigger problems.
  • Operations Managers: Operations managers rely on operational dashboards to track purchase behaviors, discover logistical roadblocks, and improve processes.
  • Decision-Makers: CEOs need a strategic dashboard with KPIs across all departments to track company goals, visualize new trends, and inform future strategies – all in one place.

 

 

Fuse Data from Multiple Sources

If you’ve ever struggled through 3 reports written by 3 different people in 3 different formats and tried to consolidate trends and themes, you know how valuable inputting all that data into one dashboard can be. Save time and confusion by incorporating website analytics, transactional data, survey tracker data, and customer support data into one place to reveal holistic, company-wide insights.

 

  • Merge transactional and survey data for a holistic picture of the customer
  • Merge employee engagement data and sales data for a holistic picture of the business

 

 

Detailed Building Blocks for a User-Friendly, User-Guided Dashboard

After you’ve decided that an interactive, user-guided data dashboard is the right reporting tool for your research, then you need to actually build that dashboard. Here are a few key tips to keep in mind during the development process.

 

  • Choose play: People want to play, even adults! Dashboards don’t have to be boring just because they’re designed for business professionals. Incorporate pleasing designs and interactive filters that encourage play and discovery. A playful dashboard is a used dashboard!
  • Choose clean data: Don’t assume that all data is good data, and that all data can be immediately dropped into a dashboard. Check all of the data for errors, both manual and systematic, before loading it into the dashboard and letting users work with it. Make sure it’s clean, complete, and compliant. Don’t let the data lie to users.
  • Choose the most important data: Yes, you can have a dashboard with 100 filters and 50 pages. But will they all be used? As the dashboard creator, you know which variables are of key importance. Focus on those so that users don’t get distracted by incidental data.
  • Choose actionable data: If you know that you can never act on a certain issue, then it’s a waste of time, space, and users’ cognitive power to include it in a dashboard. Focus on data that people can and will act on to improve the business.
  • Choose the right charts not the pretty charts: The purpose of a dashboard is not to include one of every type of chart. The purpose is to choose charts that are best suited to the data being shared. If that means one page has 5 line charts and no bar charts or pie charts, then so be it. Clarity is key.
  • Choose accessibility: Sometimes, accessibility is easy. Make sure to use large fonts, comprehensive labels, indicators that can be differentiated in both black/white and color, generous spacing, and large clickable areas. Consider whether your audience has unique accessibility needs due to a disability. Even better, consult with an accessibility expert.

 

 

Types of Market and Consumer Insight Dashboards

No matter what kind of dashboard you need, you will be able to find a solution. If you can focus on your audience and your goal, you’ll be able to properly distinguish between three major categories of dashboards.

 

  • Quick: When budgets are tight, timelines are short, and you still need a user-driven tool to investigate data and discover insights, try a quick and cheap dashboard. They may not have the swoopy transitions or endless bonus features but you can still get the basic functionality you truly need to analyze a few waves of tracker data or a multi-country study. Our Raven dashboards are one example of a quick and competitively priced dashboard.
  • Comprehensive: For most people, the middle option works best. With tools like Power BI (cost-effective for Microsoft users) and Tableau (super-speed with massive datasets), most medium to large datasets can be nicely transformed into easy-to-use, attractive dashboards.
  • Custom: The sky is the limit! With tools like .NET and Python, you can have the dashboard of your dreams. Filter real-time transactional, survey, and logistics data into one dashboard. Forecast future sales given consumer opinion scores and live purchase data. Plan more timely deliveries of the SKUs your customers actually want.

 

 

What’s Next?

Once you’ve decided to use a dashboard, the sky is the limit. Focus on your needs not your wants, and you’ll end up with a dashboard that will help you gather insights into your buyers, brands, and business, and create a successful future.

 

Are you ready to build a quick, comprehensive, or custom dashboard that helps you communicate more effectively with a wide range of key stakeholders? E2E’s Raven dashboards are competitively priced even for small projects.  Email your project specifications to our research experts using Projects at E2Eresearch dot com.

 

 

 

Learn more from our case studies

 

Learn more from our other blog posts

From Digital Fingerprinting to Data Validation: Techniques to Facilitate the Collection of High Quality Market Research Data
By E2E Research | August 19, 2021

In the market and consumer research space, there is good data and bad data.

 

Good data comes from research participants who try to do a good job sharing their thoughts, feelings, opinions, and behaviors. They might forget or exaggerate a few things, as all people do every day, but they’re coming from a good place and want to be helpful. In general, most people participating in market research fall into this category. They’re regular, everyday people behaving in regular, everyday ways.

 

Bad data comes from several places.

 

First, sometimes it comes from people who are just having a tough day – the kids need extra attention, the car broke down, their credit card was compromised. Some days, people aren’t in a good frame of mind and it shows in their data. That’s okay. We understand.

 

Second, rarely, bad data comes from mal-intentioned people – those who will say or do anything to receive the promised incentive.

 

Third, very often, it comes from researchers. Questionnaires, sample designs, research designs, and data analyses are never perfect. Researchers are people too! We regularly make mistakes with question logic, question wording, sample targeting, scripting and more but we always try to learn for the next time.

 

In order to prevent bad data from affecting the validity and reliability of our research conclusions and recommendations, we need to employ a number of strategies to find as many kinds of bad quality data as possible. Buckle up because there are lots!

 

 

Data Validation

What is data validation?

Data validation is the process of checking scripting and incoming data to ensure the data will look how you expect it to look. It can be done with automated systems or manually, and ideally using both methods.

 

What types of bad data does data validation catch?

Data validation catches errors in questionnaire logic. Sometimes those errors are simply scripting errors that direct participants through the wrong sequence of questions. Other times, they’re unanticipated consequences of question logic that mean some questions are accidentally not offered to participants. These problems can lead to wrong incidence rates and worse!
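As a minimal sketch of an automated check like this, consider a routing rule between two hypothetical questions – Q1 asks whether a participant owns a pet, and Q2 ("What kind?") should be answered only when Q1 is "Yes". The question IDs and data are illustrative, not from any particular platform:

```python
# Sketch of a skip-logic validation check on incoming survey records.
# Q1/Q2 are hypothetical question IDs; real platforms export similar tables.
import pandas as pd

def find_logic_violations(df: pd.DataFrame) -> pd.DataFrame:
    """Return records where the routing between Q1 and Q2 was broken."""
    answered_but_skipped = (df["Q1"] == "Yes") & df["Q2"].isna()   # should have seen Q2
    skipped_but_answered = (df["Q1"] == "No") & df["Q2"].notna()   # should never see Q2
    return df[answered_but_skipped | skipped_but_answered]

records = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "Q1": ["Yes", "No", "Yes", "No"],
    "Q2": ["Dog", None, None, "Cat"],  # ids 3 and 4 break the routing
})
violations = find_logic_violations(records)
```

Running a check like this against a soft-launch sample surfaces broken routing before the full field begins.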

 

How do data validation tools help market researchers?

Automated systems based on a soft-launch of the survey speed up the identification of survey question logic that leads to wrong ends or dead ends. Manual systems help identify unanticipated consequences of people behaving like real, irrational, and fallible people.

 

Automated tools can often be integrated with your online survey platforms via APIs. They can offer real-time assessments of individual records over a wide range of question types, and can create and export log files and reports. As such, you can report poor-quality data back to the sample supplier so they can track which participants consistently provide poor-quality data. With better reporting systems, all research buyers end up with better data in the long run.

 

 

Digital Fingerprinting

What is digital fingerprinting?

Digital fingerprinting identifies multiple characteristics of a research participant’s digital device to create a unique “fingerprint.” When enough different characteristics are gathered, it can uniquely identify every device. This fingerprint can be composed of a wide range of information such as: browser, browser extensions, geography, domain, fonts, cookies, operating system, language, keyboard layout, accelerator sensors, proximity sensors, HTTP attributes, and CPU class.
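The composition step can be illustrated with a toy sketch – hash a canonical string of device characteristics into a single identifier so repeat devices can be flagged. The attribute names here are hypothetical, and real fingerprinting tools combine far more signals than this:

```python
# Toy illustration of device fingerprinting: combine characteristics into one
# stable hash. Not a production tool - attributes shown are a small subset.
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Combine device characteristics into one stable SHA-256 hash."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

device_a = {"browser": "Firefox 89", "os": "Windows 10",
            "language": "en-US", "timezone": "UTC-5"}
device_b = dict(device_a)                        # same device, new session
device_c = {**device_a, "browser": "Chrome 91"}  # one attribute differs

assert device_fingerprint(device_a) == device_fingerprint(device_b)
assert device_fingerprint(device_a) != device_fingerprint(device_c)
```

Sorting the attribute keys keeps the hash stable regardless of the order in which characteristics are collected.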

 

 

What types of bad data does digital fingerprinting catch?

  • Digital fingerprinting helps identify data from good-intentioned people who answer the same survey twice because they were sent two invitations. This can easily happen when sample is acquired from more than one source. They aren’t cheating. They’re just doing what they’ve been asked to do. And yes, their data might be slightly different in each version of the questionnaire they answered. As we’ve already seen, that’s because people get tired, bored, and can easily change their minds or rethink their opinions.
  • Digital fingerprinting also helps identify data from bad-intentioned people who try to circumvent processes to answer the same survey more than once so they can receive multiple incentives. This is the data we REALLY want to identify and remove.

 

 

How do digital fingerprinting tools help market researchers?

Many digital fingerprinting tools are specifically designed to meet the needs of market researchers. They’re especially important when you’re using multiple sample sources to gather a large enough sample size. With these tools, you can:

 

  • Integrate them with whatever online survey scripting platform you regularly use, e.g., Confirmit, Decipher, Qualtrics
  • Identify what survey and digital device behaviors constitute poor quality data
  • Customize pass/fail algorithms for any project or client
  • Identify and block duplicate participants
  • Identify and block sources that regularly provide poor quality data

 

 

Screener Data Quality

In addition to basic data quality, researchers need to ensure they’re getting data from the most relevant people. That includes making sure you hear from a wide range of people who meet your target criteria.

 

First, rely on more than the key targeting criterion – e.g., Primary Grocery Shoppers (PGS). Over-reliance on one criterion could mean you only listen to women aged 25 to 34 who live in New Jersey.

 

By also screening for additional demographic questions, you’ll be sure to hear from a wide range of people and avoid some bias. For PGS, you might wish to ensure that at least 20% of your participants are men, at least 10% come from each of the four regions of the USA, and at least 10% come from each of four age groups. Be aware of what the census-representative targets are and align each project with those targets in a way that makes sense.
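Targets like these can be expressed as a simple composition check. A sketch, with illustrative group labels and minimums:

```python
# Sketch of a quota-composition check: flag groups that fall short of their
# minimum share of the completed sample. Labels and targets are illustrative.
from collections import Counter

def check_quota(values, minimums, total):
    """Return each group that falls below its minimum share, with its actual share."""
    counts = Counter(values)
    return {group: counts[group] / total
            for group, share in minimums.items()
            if counts[group] / total < share}

genders = ["F"] * 85 + ["M"] * 15    # only 15% men: below the 20% target
shortfalls = check_quota(genders, {"M": 0.20, "F": 0.20}, total=100)
```

Run during fieldwork, a check like this shows which quota cells still need sample before the study closes.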

 

Second, avoid binary screening questions. It may be easy to ask, “Do you live in Canada?” or “Do you buy whole wheat bread?” However, yes/no questions make it very easy to figure out what the “correct” answer is to qualify for the incentive. Offer “Canada” along with three other English-speaking nations and “Whole wheat bread” along with three other grocery store products. This will help ensure you listen to people who really do qualify.

 

 

Survey Question Data Quality

Once participants are past the screener, the quest for great data quality is not complete. Especially with “boring” research topics (it might not be boring for you but many topics are definitely boring for participants!), people can become disengaged, tired, or distracted.

 

Researchers need to continue checking for quality throughout the survey, from end to end. We can do this by employing a few more question quality techniques. If people miss on one of these metrics, it’s probably ok. They’re just being people. But if they miss on several of these, they’re probably not having a good day today and their data might be best ignored for this project. Here are three techniques to consider:

 

  • Red herrings: When you’re building a list of brands, make sure to include a few made-up brands. If someone selects all of the fake brands, you know they’re not reading carefully – at least not today.
  • Low/high incidence: When you’re building a list of product categories, include a couple of extremely common categories (e.g., toothpaste, bread, shoes) and a couple of rare categories (e.g., raspberry juice, walnut milk, silk slippers). If someone doesn’t select ANY of the common categories or if they select ALL of the rare categories, you know they’re not reading carefully – at least not today.
  • Speeding: The data quality metric we love to use! Remember there is no single correct way to measure speeding. And, remember that some people read extremely quickly and have extremely fast internet connections. Just because someone answers a 15-minute questionnaire in 7 minutes doesn’t necessarily mean they’re providing poor quality data. We need to see multiple errors in multiple places to know they aren’t having a good day today.
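The three techniques above can be combined into a simple per-record score – a sketch where a single failed check is tolerated but several together flag the record. The fake brands, field names, and speeding threshold are illustrative assumptions:

```python
# Sketch: combine red-herring, incidence, and speeding checks into one score.
# Brands, categories, field names, and thresholds are illustrative.
FAKE_BRANDS = {"Brandex", "Qualitra"}            # red-herring (made-up) brands
COMMON_CATEGORIES = {"toothpaste", "bread", "shoes"}

def quality_flags(record, median_seconds):
    """Count how many quality checks this record fails (0 = clean)."""
    flags = 0
    if FAKE_BRANDS & set(record["brands_selected"]):
        flags += 1                                # chose a made-up brand
    if not COMMON_CATEGORIES & set(record["categories_selected"]):
        flags += 1                                # selected no common category
    if record["duration_seconds"] < 0.5 * median_seconds:
        flags += 1                                # finished implausibly fast
    return flags

suspect = {"brands_selected": ["Brandex", "Colgate"],
           "categories_selected": ["walnut milk"],
           "duration_seconds": 180}
score = quality_flags(suspect, median_seconds=900)   # fails all three checks
```

Records scoring 2 or 3 are the ones worth reviewing or removing; a score of 1 is usually just a person being a person.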

 

And of course, if you can, be sure to employ more interesting survey questions that help people maintain their attention. Use heatmaps, bucket fills, gamification, and other engaging questions that people will actually want to answer. A fun survey is an answered survey, and an answered survey means generalizable results!


 

What’s Next?

Every researcher cares about data quality. This is how we generate valid and reliable insights that lead to actionable conclusions and recommendations. The best thing you can do is ask your survey scripting team about their data validation and digital fingerprinting processes. Make sure they can identify and remove duplicate responders. And, do a careful review of your questionnaire to ensure your screener and data quality questions are well written and effective. Quality always starts with the researcher, with you!

 

If you’d like to learn how we can help you generate the best quality data and actionable insights, email your project specifications to our research experts using Projects at E2Eresearch dot com. We’d love to help you grow your business!

 

 


Tips for the First-Time Conjoint Analysis Researcher
By E2E Research | July 16, 2021

Researchers love conjoint analysis. It’s a handy statistical technique that uses survey data to understand which product features consumers value more and less, and which features they might be willing to pay more or less for.

 

It allows you to understand how tweaks to combinations of features could increase desirability and, consequently, purchase price and purchase rate. Essentially, it asks, “Would you buy this product configuration if you saw it on the store shelf right now?”

 

Technically, there are numerous ways to present conjoint questions but all of them invite participants to compare two or more things. For example:

 

  • Would you rather buy this in red or yellow?
  • Would you rather pay $5 for a small one or $4 for a large one?
  • Would you rather buy this one or the competitive brand?
  • Would you rather buy this one or keep the one you already own?

 

The comparisons can get extremely complicated as you strive to create scenarios that mirror the complicated options of real-life, in-store choices. This is because no two products have the exact same features. There are always multiple tiny or major differences among them, including brand, price, color, shape, size, functionality, etc.

 

As you see in the example conjoint question below, participants are being asked to select from among 5 different entertainment bundles, each with a different price and selection of options. Even though this question is nicely laid out, perhaps even nicer than what you might see in a store, it’s not a simple choice!

 

 

[Image: example conjoint analysis survey question]


 

Quick Conjoint Dictionary

First, let’s cover some quick terminology commonly used with the conjoint method so that the tips we will offer make sense.

 

  • Attribute: A characteristic of a product or service, e.g., size, shape, color, flavor, magnitude, volume, price.
  • Level: A specific measure of the attribute, e.g., red, orange, yellow, green, blue, and violet are levels of the attribute color.
  • Concept: An assembly of attributes and levels that reflect one product, e.g., a large bag of strawberry flavored, red, round candy for $4.99.
  • Set: A collection of concepts presented to a research participant to compare and choose from.
  • Simulator: An interactive, quantitative tool that uses the conjoint survey data to help you review consumer preferences and predict increases or decreases in market share based on potential product features and prices.


 

Conjoint Analysis Tips and Tricks

3 to 5: Across all attributes, levels, concepts, and sets, 3 to 5 is a good rule of thumb. With so many possible combinations of attributes, levels, and sets, the ask we’re making of participants could get overwhelmingly complicated and create a lot of cognitive fatigue. That’s why we suggest aiming for no more than 3 to 5 attributes, 3 to 5 levels per attribute, 3 to 5 concepts per set, and 3 to 5 sets. By ensuring that participants enjoy the process and can take the time to review each concept carefully, we can generate much better data quality.
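The arithmetic behind the 3-to-5 rule is simple multiplication – the full factorial of attributes and levels grows very quickly, which is why small (and fractional) designs are preferred. A quick sketch:

```python
# The full factorial implied by an attribute grid is the product of the
# number of levels per attribute - it grows multiplicatively.
from math import prod

def full_factorial(levels_per_attribute):
    """Number of distinct concepts implied by the attribute grid."""
    return prod(levels_per_attribute)

modest = full_factorial([3, 3, 3])        # 3 attributes x 3 levels = 27 concepts
heavy = full_factorial([5, 5, 5, 5, 5])   # 5 attributes x 5 levels = 3,125 concepts
```

No participant can evaluate thousands of concepts, which is why conjoint designs show each person only a small, carefully chosen fraction of the grid.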

 

Meaningful Levels: Choose attribute levels carefully. Do you really want to test 3 shades of blue or 3 flavors of apple? No. While you could choose price levels of $30, $32, and $34, they aren’t meaningfully different and wouldn’t create a lot of indecision on the store shelf. They wouldn’t create variation within your data. Try to include edge cases – options that are as far apart as you can make them while still being within the realm of possibility.

 

Be frugal with combinations: You already know there are combinations of attributes and levels you would never offer in-store so don’t waste people’s time and cognitive load testing them. Think carefully about which combinations of attributes and levels you would never offer together and exclude them from the test. For example, don’t waste your budget testing the least expensive price and the most expensive feature. Similarly, don’t test the value of adding an extra battery for a version of the product that doesn’t run on batteries.

 

Minimum number of shows: When testing a level, use it in at least 3 concepts for an individual person. Think of it in terms of a ruler – for quantitative metrics (e.g., price, length, volume, weight), you need to see whether the difference between Level 1 and Level 2 is perceived the same as the difference between Level 2 and Level 3.

 

Include competitors: The real market includes competitors, often many. People don’t shop for single brands in isolation and neither should they answer your conjoint questions in isolation. Include at least one key competitor in your test, and preferably at least two. Further, if your brand is relatively unknown, you may wish to incorporate a competitor that is also relatively unknown.

 

Include an opt-out: Sometimes when you’re shopping, you discover they don’t have what you’re looking for and you leave the store empty handed. Generating realistic data means we must do the same in our simulated shopping trip – let people select “None of these” and leave without choosing anything. Otherwise, people may be “tricked” into selecting options they would never choose in real life.

 

Easy to read: Remember that conjoint is trying to simulate decisions that would normally happen in-store. Part of the in-store experience is in-store messaging. You’ll rarely see long sentences and paragraphs in the store so avoid them in your conjoint questions too. Use words and phrases that are as close as possible to what someone might see at the store.

 

Use imagery: We already know that a conjoint task can be cognitively demanding. That’s why imagery helps. Not only does it help people to visualize the product on the shelf amongst its competitive brands, it also helps to create a more visually appealing task (mmmmm cookies!). If you can’t provide an image of your product, find other ways to incorporate visuals in the questionnaire.

 

Plan for a hold-back sample: When product development work is extremely sensitive or is associated with life and death decisions, e.g., medical or pharmaceutical research, don’t let your budget determine the validity and rigor of your work. Spend the money to get the sample size you truly need to test each attribute and level with the appropriate rigor. And, build time into the fieldwork and data analysis schedule to permit preliminary analyses and test the model. You might need to tweak attributes, levels, or sets prior to running the full set of fieldwork.

 

Don’t let the statistics think for you: You wouldn’t create an entire marketing strategy based on gender differences just because a statistically significant t-test said 14% of women liked something and only 13% of men liked it. It’s not a meaningful difference. The same thing goes for a conjoint study. Review the model yourself, carefully, regardless of how “statistically significant” it is. Think about the various options suggested by the data. The simulator might reveal that there is a set of attributes and levels that would take over the market but that doesn’t mean you must produce that combination. The human brain is mightier than the spreadsheet!
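The 14% vs. 13% example is easy to demonstrate: with a big enough sample, even a one-point gap comes out "statistically significant." A hedged sketch using a two-proportion z-test and only the standard library (the sample sizes are hypothetical):

```python
# Two-proportion z-test: the same 1-point gap is non-significant with small
# samples but "significant" with huge ones - significance != meaningfulness.
from math import sqrt, erf

def two_prop_z(p1, n1, p2, n2):
    """Return the two-sided p-value for a difference in proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 x normal upper tail

p_small = two_prop_z(0.14, 500, 0.13, 500)          # n=500 per group: p ~ 0.64
p_large = two_prop_z(0.14, 100_000, 0.13, 100_000)  # n=100k per group: p << 0.001
```

The gap itself never changed – only the sample size did – which is exactly why the p-value alone shouldn’t drive the strategy.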

 

If you’re curious to learn about the different types of conjoint that are available, this video from Sawtooth Software, presented by Aaron Hill, shares details about a few types of conjoint. E2E Research is pleased to offer all of these types to our clients.

 

 

 

What’s Next?

Are you ready to find out what configuration of your products and services consumers would be most keen to purchase? We’d be happy to help you work through the most suitable combinations of attributes and levels and build a conjoint study that meets your unique needs.

 

Please email your project specifications to our research experts using Projects at E2Eresearch dot com.

 

 


8 Reasons to Invest in a Hybrid DIY Market Research Team
By E2E Research | July 3, 2021

I’ve argued for years that there’s nothing wrong with DIY research. It’s a pretty easy argument given I’ve been a DIY researcher for many years myself. Of course, I’ve also had extensive training and experience in research design and analysis so it would make sense that Do-It-Yourself research has often been my favourite path.

 

In reality, the problem isn’t DIY research. The problem is unskilled people not realizing that conducting valid and reliable research requires extensive training and experience. For example, as much as I’d love to DIY a brand new house for you, I have a feeling you wouldn’t be happy with it even if I read every single Dummies manual.

 

For the sake of this argument then, let’s consider that we’re only talking about DIY research where the person is a qualified researcher with an appropriate designation, e.g., a PRC from the Insights Association or a CAIP in Canada. (BTW, if you’re not already certified, doing so is a GREAT way to tell your clients and colleagues that you are a highly competent researcher who upholds the highest ethical standards.)

 

 

Advantages of DIY Research

Agility: Everyone has been in one of these two positions before: You just discovered you need to get a questionnaire into field RIGHT NOW, or you’re watching a questionnaire already in field and you notice that multiple research participants have just provided the same open-end answer. What do you do if it’s Friday at 6pm? You get it done! You don’t have to wait until your supplier gets in on Monday morning so that they can start scripting and be ready for field by Monday evening. When it comes to being agile, no one can get a survey in field or updated faster than a DIY researcher with direct access to their own scripting licence. DIY FTW!

 

Internal knowledge: Regardless of which side of the fence you usually do your research on, supplier or buyer, you’ve learned the hard way that no one can interpret brand data and tabulations better than someone who has full sight-line into the history, projected future, and context of the brand, its sister brands, and the company – the end-client insights team. The confidential research and proprietary knowledge those researchers leverage while designing and interpreting research cannot be matched by anyone else no matter how much experience they have.

 

Price!: It’s impossible to beat the price of DIY research. When budgets are tight and the work is essential, this makes the decision simple. But make this choice wisely. Read on to make sure you’re okay forgoing the potential advantages of managed research which could force you to unexpectedly dig into your wallet after the fact.

 

 

Advantages of Managed Research

Leverage breadth of experience: Working with a supplier that supports many other types of companies has huge advantages. They’ve seen failures and success in multiple types of projects, companies, and industries. They’ve seen how competitive brands and categories carefully craft questionnaires and discussion guides, interpret unusual data, and solve unexpected, complex business problems. They’re a warehouse of rare knowledge and experience that every client benefits from, even when no one notices. And, they won’t incorporate the unconscious, internal biases that you might have picked up along the way from your standard internal processes.

 

Engage experts: Most researchers are moderately familiar with a lot of different research techniques. And, most researchers are masters of a few techniques. But being an expert in Conjoint, MaxDiff, TURF analysis, JAR analysis, or segmentation doesn’t mean you’re also an expert at running focus groups, interviews, mystery shops, or IHUTs. When you’re able to identify your own unique set of skills, you can reserve them for the projects you’d be great at and leverage the expertise of other researchers who’d be far more effective at the other projects.

 

Focus on high value tasks: When you can avoid spending the bulk of your time doing basic tasks like scripting questionnaires and running volumes of tabulations and simple data analyses, you get to spend more of your time on the value-add components of your business – interpreting results, acting on results, and building your business. You get to spend your time creating positive change!

 

Finish more projects: There are only so many hours in the day. When you’ve got a dedicated team of researchers ready at your beck and call, you can design and complete far more than one concept test, pricing study, or customer experience study every 6 months. Rejoice in the fact that more of your key projects can get done with the attention they deserve, in a timely fashion, and before it’s actually too late and damage has been done.

 

Get creative: Using research suppliers results in unlimited creativity. Imagine a multi-method, multi-country, multi-language study with brand new techniques applied in brand new ways. Oh my. I’m getting excited thinking about what that amazing study could look like! Ok, maybe you really don’t need to do that. But, with a larger team, you can certainly cast aside any limitations based on access to tools and build the EXACT project you need. Not just the one that fits into your template.

 

 

Advantages of a Hybrid DIY Research Model

But really, why must we choose DIY research OR managed research? Why can’t we be DIY researchers sometimes, choose managed research other times, and benefit from the positives of both models?

 

A skilled researcher who has inherent knowledge of the brand partnering with an experienced research supplier who has in-depth and broad experience with research techniques presents the ultimate research experience. Over time, it can even lead to building a dedicated external team that’s always on call, whether it’s during seasonal highs or end-of-fiscal rush periods, or to get through that huge pile of long overdue work.

 

In the end, whether you choose DIY research, managed research, or a hybrid model, an informed choice is the best choice!

 

 

If you’re ready to work with a research partner who will help you generate great quality data and actionable outcomes, feel free to email your project specifications to our research experts using Projects at E2Eresearch dot com. We’d love to help you build an engaging questionnaire, script the questionnaire, run data analysis, and write a full report.

 

 

Learn more from our case studies

Learn more from our other blog posts

Trackers Suck. Here’s how market researchers can fix them right now.
By E2E Research | June 21, 2021

Researchers love trackers. We also hate them. Trackers are designed to help us stalk brand metrics, compare them with those of sister brands and competitors over time, and build real-time dashboards that flag tiny issues before they explode into unresolvable problems. But the more the world changes, the more our trackers stay the same. The questions stay the same, the answers stay the same, and the insights… well, they become impossible to find.

 

 

Trackers are inherently problematic

One of the biggest complaints researchers have with trackers is that once they’re written, they can’t be changed.

Ever.

 

When we inevitably discover a question that is poorly written, no longer relevant, or simply wrong, we can’t touch it without introducing confounds that invalidate the trendline for that question and every question that follows it. Data quality is always top of mind for researchers who care about making valid and reliable generalizations.

 

 

Oh the times, they are a’changin

But wait. No matter how much we work to keep questions consistent for the sake of research rigor and validity, everything outside of the questions has changed since day one. Every research supplier constantly improves their techniques and processes over time – without getting our approval. Every research participant changes their demographics, internet providers, and digital devices over time – without getting our approval. Like it or not, third parties change the methodological foundation of our trackers every single day without our approval. Everyone else has embraced change; it makes no sense for researchers to be the only exception.

 

 

Who’s the boss?

Trackers are inanimate objects we create to suit our own needs. Researchers need data that is valid and reliable. We need data that answers our questions and helps solve our challenges. We need to stop letting questionnaires be the boss of us and start making questionnaires work for us. We need to embrace change.

 

 

Choose change-resistant designs

Fortunately, researchers have methodological techniques that are designed to be resistant to change. If we build change into every questionnaire, change will have a vastly smaller impact on our data.

 

How can we do this?

 

Randomization! When each person receives answers (or questions) in a different order, it helps prevent confounds related to order. Adding an item to a randomized list greatly reduces its ability to affect subsequent items because everyone sees a different set of subsequent items. Make sure to randomize answer options at every appropriate opportunity. If it also makes sense to randomize the order of some questions, then do that too.
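In code, per-respondent randomization can be sketched in a few lines. The function name and the idea of seeding with a respondent ID are illustrative assumptions (real survey platforms handle this natively); the sketch just shows that each respondent gets their own order while the set of options stays identical.

```python
import random

def randomized_options(options, respondent_id):
    """Return this respondent's answer options in their own random order.

    Seeding the RNG with the respondent ID keeps the order stable if the
    survey page reloads mid-interview. Illustrative sketch only -- real
    survey platforms provide randomization as a built-in setting.
    """
    rng = random.Random(respondent_id)  # per-respondent, reproducible
    shuffled = list(options)            # don't mutate the caller's list
    rng.shuffle(shuffled)
    return shuffled

brands = ["Brand A", "Brand B", "Brand C", "Brand D"]
print(randomized_options(brands, respondent_id=101))
```

Because every respondent sees a different ordering, an item added to the list later lands in a different position for everyone, diluting any order effect across the sample.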

 

Individual presentation. Potential order effects can be reduced even more by combining randomization with individual presentation. Rather than showing a full list of items so that people can scan through the entire list, show items individually. Since everyone sees a different set of initial items, order effects are different for everyone and therefore greatly minimized over the full sample.

 

Subsets! If you’re accustomed to breaking long questionnaires into shorter, more manageable chunks for participants, you might already be using question subsets. For example, let’s say Q6 has 20 answer options – perhaps 20 brands or 20 product features. With subsets, each research participant gets only 10 answer options – perhaps three are the same for everyone, and the other seven are randomly assigned. By design, no one sees every answer, and your friendly, neighbourhood statistician can easily stitch the full set of 20 answer options back together. Need to add or remove an answer option? Go right ahead. Since half of the participants wouldn’t have received that item anyway, you aren’t introducing a serious confound. Even better, everyone benefits from a shorter questionnaire!
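The subset design above can be sketched as code. The function name, the "anchor" terminology, and the exact counts (3 anchors plus 7 random draws from the remaining 17) are assumptions drawn from the example in the text, not a real platform API.

```python
import random

def assign_subset(all_options, anchors, n_random, respondent_id):
    """Give each respondent the anchor options everyone sees, plus a
    random draw from the remaining pool. Illustrative sketch of the
    subset design described above.
    """
    rng = random.Random(respondent_id)            # reproducible per respondent
    pool = [o for o in all_options if o not in anchors]
    drawn = rng.sample(pool, n_random)            # 7 of the remaining 17
    subset = list(anchors) + drawn
    rng.shuffle(subset)                           # randomize display order too
    return subset

options = [f"Brand {i}" for i in range(1, 21)]    # 20 answer options
anchors = options[:3]                             # 3 shown to everyone
print(assign_subset(options, anchors, n_random=7, respondent_id=42))
```

Adding or removing an option only changes the pool being sampled from, which is why the trendline for the anchor items survives the edit.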

 

 

Know what questions are carved in cement

Some questions should never change. There are only a few seriously important KPIs that get added to the norms/benchmarks database every time you complete a wave. They probably include:

 

  • Purchase intent
  • Recommendation
  • Satisfaction
  • Trust
  • Likeability
  • Believability

 

Identify which items on a questionnaire MUST stay the same. They’re the items that are part of every questionnaire ever written for every product line and SKU. From now on, keep them as close as possible to the beginning of the questionnaire. When this section always stays the same, with no potentially new and leading items before it, those KPIs won’t be confounded by order effects.

 

And don’t get caught up in the idea that questions tied to financial incentives can’t be changed. Do you really want to incentivize the wrong KPIs and the wrong behaviors? Absolutely not!

 

 

Embrace change

 

Now here’s the hard part.

 

Change is good.

 

Track valid benchmarks: Tracking invalid data serves no purpose. Creating a brand new VALID benchmark serves a great purpose. Once you realize you’ve been tracking invalid data, it’s time to make a change and fix the problem. Similarly, once you realize you’ve missed answer options or used disrespectful language, it’s time to fix that too.

 

Watch the world evolve: Change lets us account for our evolving society, culture, technology, and political atmosphere.

 

  • When did you change the sex and/or gender questions on all of your studies to be more respectful and inclusive? If you haven’t done so yet, this PDF from Insights in Color will get you started.
  • When did you add Facebook or Instagram as viable channels in addition to door-to-door salespeople, radio, and TV? Have you added TikTok to your list of channels yet? (You’d better!)
  • When did you add Madonna to your list of influencers? What about Beyoncé? What about Billie Eilish?

 

You made those changes and didn’t think twice because it was the right thing to do.

 

Plan to measure current issues: Build an entire section into your questionnaire that is all about change. If Section A is your unchangeable KPIs, make Section D completely new every single time. This quarter, it might be all about sustainability. Maybe next quarter it will be innovative packaging and the quarter after that will be all about diversity and equity.

 

Embrace fun! Change also lets us create questionnaires that are better able to capture the imagination of participants. Social networks and online games are fun because they leverage audio, video, swiping, and dragging. It’s time to change up your questionnaires so they are just as engaging.

 

 


 

What’s next?

It’s time for researchers to stop being pushed around by trackers. We know what we’re trying to accomplish and why. We know how change affects data. It’s time for us to be the boss of trackers and make them work for us! Embrace change!

 

Are you ready to design a useful tracker that generates great quality data using questions that are inherently engaging? Email your project specifications to our research experts using Projects at E2Eresearch dot com!

 


 

Download information about our services