The Say-Do Gap: Why Singapore Consumer Research Keeps Lying to You

Assembled is a market research agency in Singapore with 600+ projects completed across Southeast Asia since 2016, a 100,000-member proprietary panel, and publications in MRS Research Live and ESOMAR Research World. This analysis of the say-do gap in Singapore consumer research draws on patterns from skincare and healthcare projects moderated by founder Felicia Hu, who scopes, moderates, analyses, and presents every project herself. In Singapore's high-context culture, a participant who says "can consider" is usually saying no. Felicia, a bilingual moderator in English and Mandarin with fluency in Hokkien, Cantonese, and Singlish, was recently quoted in the South China Morning Post on Singapore consumer dining behaviour.

A skincare brand conducts pre-launch research in Singapore. Results look promising — 72% of surveyed consumers say they would "definitely" or "probably" try the product. Six months post-launch, actual trial sits at 9%. This is not a research failure in the traditional sense. The methodology was sound. The sample was representative. The questions were clear. The problem is something else entirely.

Research captured what consumers believed about themselves, not what they actually do. The gap between stated intention and actual behavior is what researchers call the "say-do gap." It is not unique to Singapore. But Singapore's high-context, harmony-oriented communication culture makes it particularly pronounced — at least, that's the pattern we observe consistently across categories.

We see this dynamic repeated in our Singapore coffee market research, where brand loyalty claims collapse under scrutiny of actual app usage. We see it in chronic disease management research, where patients describe adherence behaviors that their actual pill counts contradict. And we see it in our focus group analysis practice, where the techniques for surfacing real behavior versus performed behavior have become a core methodological competency.

Understanding why the gap exists — and how to design around it — reveals how to produce research that actually predicts what happens at the shelf, the pharmacy, or the checkout screen.

Why Singapore Consumers Say What They Don't Mean

The Harmony Preference

In cultures that value social harmony, expressing disagreement or negativity feels rude, even in research settings designed for honest feedback. When a moderator asks "would you try this product?", saying no feels like criticism of whoever created it. Saying yes costs nothing in the room. The gap emerges when the hypothetical meets reality — when saying yes means spending actual money, changing actual routines, taking actual risk.

This is not lying. Participants genuinely believe, in that moment, that they might try the product. It is more accurate to describe this as optimistic self-modelling than as deliberate deception, which is why our focus group methodology relies on indirect probing techniques rather than direct purchase intent questions. The questions are designed to surface behavior, not beliefs about future behavior.

The Aspiration Projection

Consumers describe themselves as they wish to be, not as they are. In focus groups about healthy eating, participants emphasize vegetables and lean proteins. Their Grab receipts tell different stories. SingStat household expenditure data consistently shows that food delivery spending skews toward convenience and indulgence, not the health-conscious choices consumers claim in research settings.

Singapore's aspirational middle-class identity intensifies this. Admitting to cheap purchases, unhealthy habits, or impulsive decisions feels like status failure. Research settings amplify the performance. We documented identical patterns in Instagram dining research, where stated motivations for restaurant choice consistently overstated aesthetic and social factors while understating price and convenience. We also observed it in men's skincare research, where male participants dramatically underreported the products they actually use.

The Context Collapse

Research environments strip away the context that shapes actual decisions. A consumer evaluating a product in a focus group has time, attention, and no competing priorities. The same consumer in a pharmacy aisle has thirty seconds, a buzzing phone, and three other errands to run. The considered preference expressed in research rarely survives contact with real shopping conditions.

This is why we increasingly pair traditional in-depth interviews with digital diary studies and mobile ethnography that capture decisions in context — at the actual moment of choice, not in a sanitized discussion environment.

How the Gap Manifests Across Categories

In Price Research

Consumers consistently claim they will pay more for quality, sustainability, or convenience. In practice, price sensitivity dominates most categories. The gap is widest for products where quality is hard to evaluate before purchase. A consumer who says she would pay $80 for "premium" skincare might hesitate at the shelf and reach for the $45 option. Our premium versus value skincare research documents this pattern in category-specific detail — the decision logic is more rational than it appears, but it is also more context-dependent than stated preferences suggest.

In Brand Switching

"I'm open to trying new brands" might be the most common research fiction. Most consumers are trapped by habit, convenience, and risk aversion. The switching costs they will actually tolerate are far lower than those they claim to accept in surveys. We observed this in our coffee market research, where consumers claim brand loyalty while simultaneously carrying three competing delivery apps on their phones. And in our food delivery research, where multi-platform behavior is universal despite repeated claims of preferred-platform loyalty.

In Ethical Consumption

Sustainability research consistently overestimates ethical purchasing. Singapore consumers say they care about environmental impact. Their behavior shows that price, convenience, and habit still dominate. The Health Promotion Board's behavioral research reveals a similar pattern in health choices: Singaporeans consistently overstate exercise frequency and understate consumption of processed food. The gap creates a market for "virtue at no cost" products — sustainable-ish options that do not require sacrifice. Our plant-based food research found that stated environmental motivation accounted for less than 15% of actual purchase decisions, while taste and price did almost all the real work.

Research Designs That Reveal Truth

Framework 1: The Behavioral Question Audit

For each question in your research design, ask a single diagnostic question: am I capturing what people do, or what they believe? Converting attitudinal questions to behavioral ones is the single most impactful change you can make to a discussion guide. The shift goes beyond question wording to the epistemological frame itself: past behavior predicts future behavior more reliably than stated intentions, and that principle should shape every discussion guide we build.

| Attitudinal (What They Believe) | Behavioral (What They Do) |
| --- | --- |
| "Would you try this product?" | "Tell me about the last three new products you tried. What made you try them?" |
| "How much would you pay for this?" | "What's the most you've paid for something in this category in the past year?" |
| "Do you care about sustainability?" | "Walk me through your last five purchases. Which ones were influenced by sustainability?" |
| "Are you loyal to any brands?" | "What did you buy last time? The time before? Has it been the same brand?" |
| "Would price affect your decision?" | "Tell me about a time you wanted something but didn't buy it. What stopped you?" |

Framework 2: The Intention-Reality Discount

These discount factors are calibrated from patterns across Assembled's 600+ research projects in Singapore. They are not precise science, but they are directionally useful when interpreting purchase intent findings. We are still refining the healthcare numbers; clinical decision-making follows different dynamics from consumer product choice.

| Claim Type | Discount Factor | Why |
| --- | --- | --- |
| "Definitely would buy" | 50–60% | Strongest claims have the largest gaps |
| "Probably would buy" | 70–80% | Hedged claims slightly more reliable |
| "Willing to pay premium" | 60–70% | Price reality hits harder than expected |
| "Would switch brands" | 75–85% | Habit is stronger than stated openness |
| "Would recommend to friends" | 40–50% | Recommendation is social risk, easily claimed, rarely acted on |
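Applying the discount is simple arithmetic. The sketch below illustrates it with the ranges from the table; the `discount_intent` helper and the example figures are ours for illustration, not part of Assembled's toolkit.

```python
# Illustrative sketch of the intention-reality discount (Framework 2).
# Discount ranges are taken from the table above; the helper function
# and example numbers are hypothetical, for demonstration only.

DISCOUNTS = {
    "definitely_would_buy": (0.50, 0.60),
    "probably_would_buy": (0.70, 0.80),
    "willing_to_pay_premium": (0.60, 0.70),
    "would_switch_brands": (0.75, 0.85),
    "would_recommend": (0.40, 0.50),
}

def discount_intent(stated_pct: float, claim_type: str) -> tuple[float, float]:
    """Return a (low, high) adjusted range for a stated-intent percentage.

    A discount of 0.55 means roughly 55% of the stated claim evaporates
    in market, so the adjusted figure is stated * (1 - discount).
    """
    lo, hi = DISCOUNTS[claim_type]
    # The larger discount produces the lower bound, so the factors swap.
    return stated_pct * (1 - hi), stated_pct * (1 - lo)

# Example: 72% of respondents say they would "definitely" try the product.
low, high = discount_intent(72.0, "definitely_would_buy")
print(f"Adjusted trial ceiling: {low:.1f}%-{high:.1f}%")
```

Note that even the discounted range (roughly 29-36% here) still sits well above the 9% actual trial from the opening example, which is the point of treating these factors as a ceiling on stated intent rather than a prediction of conversion.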

Framework 3: Triangulating Against Observable Data

Research findings become far more reliable when stated preferences are triangulated against secondary behavioral data. GrabFood ordering data and Enterprise Singapore's consumer spending research both provide behavioral anchors that help calibrate the credibility of stated preferences. In fast food research, we often ask participants about their last ten transactions before any attitudinal questions — the transaction history grounds the conversation before self-reporting bias can set in.

In our fast food market research, triangulating stated QSR preferences against actual purchase records revealed preference hierarchies almost inverted from what direct questions produced. In our multicultural audience research, the say-do gap varies significantly by ethnic community, complicating any assumption that one community's stated preferences are more reliable than another's.

Where This Leaves Researchers

The say-do gap is not a flaw to eliminate. It is a feature of human psychology to design around. Research that asks "what do you want?" will always mislead. Research that reveals "what do you actually do?" can change business outcomes. The patterns documented here appear in every category we research — from food delivery switching behavior to chronic disease medication adherence to skincare price sensitivity. The underlying mechanism is always the same: a gap between the self we perform in research and the self that makes decisions at the shelf, the pharmacy, or the checkout screen.

For brands and researchers, the implication is straightforward even if the execution is not. Design research that records what consumers have done, not just what they claim they would do. Use real market outcome data as external calibration for stated intent findings. And treat the gap itself — the distance between aspiration and action — as a strategic insight rather than a methodological inconvenience.

Frequently asked questions

How should brands account for the say-do gap when interpreting purchase intent research?

If your research shows high purchase intent, that number is almost certainly inflated. Across Assembled's 600+ projects, stated "definitely would buy" intent typically converts at 40–50% of the claimed rate at best. The most straightforward fix is to ask about past behavior instead of future intentions: "Tell me about your last three purchases in this category" reveals more than "would you buy this?" ever will. Pairing qualitative research with transaction data triangulation further reduces the distortion. Our research brief guide covers how to frame questions that surface real behavior from the start of the project design.

Is the say-do gap larger in Singapore than in other markets?

In our experience, yes — for specific categories. Singapore's high-context communication culture creates stronger harmony-preference effects than most Western markets. Participants work harder to avoid negative responses. The aspirational middle-class self-image adds another layer: admitting to cheap purchases or unhealthy habits feels like social failure in a way that is culturally specific. The gap is also amplified in multi-ethnic group settings where participants may be performing for an imagined cross-community audience rather than speaking from personal experience. That said, the gap exists everywhere — Singapore just makes some of its mechanisms more visible.

How do you validate stated preferences against observable actions?

The most reliable method is behavioral anchoring: asking detailed questions about recent specific purchases before any attitudinal or intention questions. "Walk me through your last five purchases in this category" produces a behavioral baseline that subsequent attitudinal claims can be checked against. We also use projective techniques — asking about "consumers like you" rather than direct self-report — to reduce the performance incentive. Where clients can share transaction or CRM data, cross-referencing stated preferences against actual purchase history produces the most accurate calibration. Secondary data from SingStat and category-specific government sources provide useful triangulation for market-level patterns.

What happens when actual behavior is significantly less favorable than stated intention?

This is the most important diagnostic question in market research, and most research designs never ask it. If your stated intent is 70% but your actual trial is 12%, the gap tells you something important: either the purchase friction is too high (price, availability, awareness), the product underdelivers at the moment of trial, or the stated preference was never a real preference to begin with. Each explanation calls for different remediation. Research that treats stated intent as a reliable predictor — without understanding the conversion gap — misses the most useful information the data contains.

Observations in this post draw on patterns from Assembled's consumer research projects and healthcare research across multiple categories in Singapore, including focus group discussions, in-depth interviews, and behavioral validation studies. Secondary data come from SingStat household expenditure surveys and Health Promotion Board behavioral research. For research enquiries, contact felicia@assembled.sg.
