Kantar's Profiles Blog

Vive la Difference

Posted by Jon Puleston on Nov 28, 2012

Part 2 in our series, So Many Variables, So Little Time: A practical guide to what to worry about when conducting multi-country studies

It’s been said that we’re all more alike than we are different. While we’d all do well to keep that in mind, it is also true that there are fundamental differences in cultural character among people from different countries. And those differences cause people to answer surveys differently, resulting in significant data variances in multi-country studies.

Our recent paper, Dimensions of Online Survey Data Quality: What Really Matters?, discusses the results of two large-scale multi-country survey experiments interviewing more than 11,000 respondents in 15 countries that tested several factors (speeding, lying, panel sourcing, age and demographic balance, and question design) in a comparative cross-national treatment-versus-control-group approach. Our goal was to understand how these factors affect the comparison of data from country to country.
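
To make the treatment-versus-control comparison concrete, the sketch below shows one way a per-country result shift could be computed from respondent-level data and then averaged across questions. It is an illustration only, not the study's actual analysis code; the column names (country, question, cell, answer) and the toy data are invented.

    # Sketch only: illustrative computation of a per-country "result shift"
    # between a control cell and a treatment cell. Column names and data
    # are hypothetical, not the study's actual layout.
    import pandas as pd

    # Toy respondent-level data: one row per respondent per question.
    df = pd.DataFrame({
        "country":  ["UK"] * 4 + ["JP"] * 4,
        "question": ["q1"] * 8,
        "cell":     ["control", "control", "treatment", "treatment"] * 2,
        "answer":   ["yes", "no", "yes", "yes", "no", "no", "yes", "no"],
    })

    # Share of "yes" answers in each country x question x cell.
    rates = (df.assign(is_yes=df["answer"].eq("yes"))
               .groupby(["country", "question", "cell"])["is_yes"]
               .mean()
               .unstack("cell"))

    # Absolute shift in percentage points between the two cells,
    # averaged across questions within each country.
    rates["shift_pp"] = (rates["treatment"] - rates["control"]).abs() * 100
    print(rates.groupby("country")["shift_pp"].mean())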

The one factor that underlies all others is basic cross-cultural variance. Our experiments showed an average shift in results of 7.1% across all 15 countries. However, the shift for individual countries at the question-to-question level was significantly higher, with result changes of up to 15% not uncommon. The traits we observed that had the greatest impact on variance included:

  • Yes: The propensity to answer “yes” to a simple yes-no question varies by country. Based on aggregated data from 60 yes-no questions asked in 30 countries, Western markets and the more developed markets in Asia were less likely to answer “yes” to a question. Respondents in India and Africa tended to answer “yes” the most, while Southern and Eastern Europe were in the middle.
  • Like: Participants’ propensity to report liking something more than tripled from Japan (7%) to India (23%) based on the aggregated self-reported liking scores from 90 different questions. Northern Europe and Korea were at the low end of the scale, while South America and Africa were at the high end. While the Japanese and Koreans, Northern Europeans and British all say “yes” to a similar degree, “liking” draws out more measurable differences.
  • Agree: Based on aggregated agreement scores from 580 questions asked in various surveys across 30 countries, agreement patterns differ from both liking and the propensity to say yes. For example, the Chinese are very likely to say they agree with something but relatively less likely to say they like something. Once again, the Japanese and Northern Europeans are the least likely to agree with anything.
  • Disagree: When expressing disagreement, a division arises among Northern Europeans that does not exist with positive scoring indicators. The Dutch are much more willing to disagree than others and measurably outscore other Northern European countries. At the other end of the scale are the Chinese, who are reluctant to disagree.
  • On the fence: Across many countries in Asia (excluding India and China) there is a strong reticence to express opinions, which results in a tendency to give neutral scores. Closely behind Asians are Northern Europeans, who also have a high neutral score.

When using Likert scales, cultural differences result in massive skews in the relative scores attained across different countries. At one end of the scale, the Japanese very rarely express a positive opinion; at the other end, the Indians very rarely do not. Spain, Russia and South Korea sit right in the middle, with the UK and USA toward the negative end and Mexico and China toward the positive.
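
The kind of country-level aggregation behind these comparisons can be pictured with a short sketch that tallies the share of top-two-box (positive) answers per country on a 5-point scale. The responses below are invented purely for illustration.

    # Sketch only: per-country top-two-box rate on a 5-point Likert item.
    # The responses below are invented purely to illustrate the aggregation.
    from collections import defaultdict

    # (country, score on a 1-5 scale) pairs, one per respondent.
    responses = [
        ("Japan", 3), ("Japan", 2), ("Japan", 4),
        ("India", 5), ("India", 4), ("India", 5),
        ("UK", 3), ("UK", 4), ("UK", 2),
    ]

    counts = defaultdict(lambda: [0, 0])   # country -> [positive, total]
    for country, score in responses:
        counts[country][1] += 1
        if score >= 4:                     # "agree" or "strongly agree"
            counts[country][0] += 1

    for country, (positive, total) in counts.items():
        print(f"{country}: {positive / total:.0%} positive")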

The Importance of Weighting

With knowledge of these differences in the basic character of respondents across countries, it is possible to re-weight certain types of questions to deliver more comparable cross-country data. For example, one of the test questions in our experiments asked respondents to rate their happiness. The raw data showed Mexicans rated themselves happiest, while the Swedish and German respondents rated themselves the least happy. However, when the data were weighted to account for question agreement bias, personal happiness ratings were similar in most countries, with the exception of China, India and Brazil, whose respondents rated themselves less happy.
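
As a rough picture of how such an adjustment might work (a minimal sketch, not the weighting scheme actually used in the paper), each country's raw mean score can be deflated by an index of that country's baseline propensity to agree. All of the figures below are invented.

    # Sketch only: a simplistic agreement-bias adjustment. The raw scores and
    # bias indices are invented; this is not the paper's actual weighting scheme.

    # Country -> raw mean happiness rating on a 1-10 scale (hypothetical).
    raw_happiness = {"Mexico": 8.4, "Sweden": 6.9, "Germany": 7.0, "China": 7.8}

    # Country -> agreement-bias index: the country's average propensity to
    # agree divided by the all-country average (hypothetical values).
    agreement_bias = {"Mexico": 1.15, "Sweden": 0.92, "Germany": 0.94, "China": 1.10}

    # Deflate each raw score by the country's agreement-bias index.
    adjusted = {c: raw_happiness[c] / agreement_bias[c] for c in raw_happiness}

    for country, score in sorted(adjusted.items(), key=lambda kv: -kv[1]):
        print(f"{country}: raw {raw_happiness[country]:.1f} -> adjusted {score:.1f}")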

Translation and Interpretation

Because this factor is fundamental to every question on every survey across countries, we do not have statistical evidence of the variance introduced by translation and interpretation. It is nonetheless critically important and, on certain types of questions, can lead to variances beside which the others pale in comparison.

Our research showed that word selection lists were the most vulnerable to language and interpretation issues. Word selection rates can differ even between two countries that speak the same language, based on how the words are used in each country. This is particularly important for range choices, where subtle shifts in meaning can make a difference in the data. Our recommendation is to use proven, professional, and detail-oriented translation resources and to take the time to understand how words can be interpreted across languages.

This is part 2 in a series. If you would like to read more, visit:

Part 1: So Many Variables, So Little Time: A practical guide to what to worry about when conducting multi-country studies
