Online Qualitative Research vs In-Person in Singapore

Assembled is a market research agency in Singapore with 600+ projects completed across Southeast Asia since 2016, a 100,000-member proprietary panel, and publications in MRS Research Live and ESOMAR Research World. This analysis of online versus in-person qualitative research draws on patterns from technology and education research projects and hundreds of focus group sessions moderated by founder Felicia Hu, who scopes, moderates, analyses, and presents every project herself. In Singapore's high-context culture, a participant who says "can consider" is saying no, and that refusal sounds different on a screen than it does across a table. Felicia, a bilingual moderator in English and Mandarin with fluency in Hokkien, Cantonese, and Singlish, was recently quoted in the South China Morning Post on how companies approach market research in Singapore.

I moderated a focus group on financial planning attitudes last month. Six participants, all professionals in their 30s. Three joined from their living rooms on Zoom. Three sat in the discussion room at our office on Beach Road. Same screener, same discussion guide, same 90-minute session. And the data that came back was not the same.

The three in the room told me about the guilt they feel when they overspend. One woman described hiding a Sephora bag from her husband in the boot of her car. The participants on Zoom? They gave me answers that were accurate, structured, well-considered. But the Sephora-bag confession never would have happened through a laptop camera. I'm fairly sure of that.

This matters because Singapore has become one of the most digitally connected populations on earth. IMDA's digital society data shows 99% of resident households now have internet access. SingStat's age-group data confirms that internet usage is essentially universal among those aged 15 to 59. The infrastructure for online research is perfect. The question is whether perfect infrastructure produces perfect data.

It doesn't. Not always. But it doesn't always fall short, either, and that's the part most methodology debates get wrong.

The convenience argument solved the wrong problem

When virtual focus groups became the default during 2020 and 2021, the industry framed it as a temporary fix. We'd go back to rooms when we could. Research from Archibald et al. in the International Journal of Qualitative Methods confirmed that Zoom was viable for qualitative data collection, that participants found it convenient, and that discussion quality held up reasonably well. What the early research didn't capture (and I think couldn't capture at that point) was how the medium would reshape participant behaviour over time once the novelty wore off.

Singapore participants adapted fast. According to IMDA's 2025 Digital Economy Report, Singapore's digital economy now contributes 18.6% of GDP. Employees average 2.8 remote working days per week, the highest in Asia. Video calls became muscle memory. And therein lies the problem for researchers.

When participants are deeply comfortable with the video call format, they default to their professional presentation mode. They self-edit more. They give you the boardroom version of their opinions, not the kitchen-table version. I first noticed this in an IDI series we ran for a healthcare client. Participants on Zoom sat straighter, used more complete sentences, and volunteered fewer digressions. The same demographic, recruited from the same panel, gave us messier and more honest responses in person.

Actually, let me qualify that. "More honest" is the wrong framing. The Zoom participants weren't lying. They were performing a slightly polished version of themselves, the way anyone does when they know they're on camera in their professional context. The in-person participants were performing too, but the social dynamics of a physical room create different kinds of performance. Ones that, in our experience, tend to produce richer data for certain research questions.

What the screen changes (and what it doesn't)

I've been building a mental model for this over the past two years, and I think it breaks into two dimensions. The first is topic sensitivity. The second is how much the research question depends on spontaneous, unstructured response versus considered, reflective response.

THE SCREEN EFFECT MATRIX
(axes: topic sensitivity × spontaneity of response needed)

The Confessional (high sensitivity, spontaneous response needed). In-person wins. Financial guilt, parenting anxieties, health stigma. Participants need physical proximity to lower their guard.
The Shield (high sensitivity, considered response needed). Online wins. Sexual health, addiction, workplace complaints. The screen provides distance that enables disclosure.
The Room (low sensitivity, spontaneous response needed). In-person wins. Product reactions, sensory testing, group brainstorming. Physical presence creates energy you can't fake on Zoom.
The Toss-Up (low sensitivity, considered response needed). Either works. Brand perceptions, media habits, purchase rationale. Quality depends on moderator skill, not medium.
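For readers who think in code, the matrix reduces to a simple two-key lookup. This is an illustrative sketch only; the function name and keys are hypothetical, not a tool Assembled actually uses.

```python
# Sketch of the screen-effect matrix as a lookup table.
# Quadrant names and recommendations come from the matrix above.

RECOMMENDATION = {
    # (topic_sensitivity, response_needed): (quadrant, modality)
    ("high", "spontaneous"): ("The Confessional", "in-person"),
    ("high", "considered"): ("The Shield", "online"),
    ("low", "spontaneous"): ("The Room", "in-person"),
    ("low", "considered"): ("The Toss-Up", "either"),
}

def recommend(topic_sensitivity: str, response_needed: str) -> tuple[str, str]:
    """Return (quadrant, recommended modality) for a research question."""
    return RECOMMENDATION[(topic_sensitivity, response_needed)]

# Example: a financial-guilt study needs raw emotional response to a
# sensitive topic, so it lands in The Confessional.
print(recommend("high", "spontaneous"))  # → ('The Confessional', 'in-person')
```

The point of the sketch is that only two judgments are needed before the quadrant, and therefore the default modality, falls out mechanically.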

Let me walk through each quadrant because the examples matter more than the labels (and I'm still refining whether "spontaneity" is the right axis name, or whether it's something closer to "embodied response").

The Confessional (high sensitivity, high spontaneity needed)

We ran a study on chronic disease management in Singapore where patients needed to describe the gap between what they told their doctors and what they actually did at home. In person, participants would lean forward, lower their voices, start a sentence with "Don't judge me, but..." and describe skipping medication for weeks. On Zoom, these same revelations came out more carefully packaged. The shame was still there but filtered through a screen that gave participants just enough distance to clean it up.

For research that needs raw emotional data, in-person is almost always better. The physical room creates a shared vulnerability that a grid of video tiles cannot replicate.

The Shield (high sensitivity, considered response needed)

Here's where it gets interesting, and where I had to revise my initial assumptions. For some sensitive topics, the screen is actually an advantage. We've seen this in mental health research, in studies on therapy-seeking behaviour, and in research on workplace discrimination. When the topic carries social stigma and what you need is the participant's considered perspective (rather than raw emotional response), the distance of a screen can function like a confessional booth. Participants feel protected enough to share.

A RAND Corporation study comparing online and in-person qualitative methods found that online modalities did not produce substantially different thematic findings, though the in-person data tended to be richer in word count. That richness distinction matters. Themes were preserved, but texture was sometimes lost.

The Room (low sensitivity, high spontaneity needed)

Try running a product testing session on Zoom. I mean, you can. We have. But when a participant picks up a skincare product, turns it over, frowns at the ingredient list, and puts it back down, all within three seconds, that micro-rejection tells you something no online survey or video call will capture. The physical room gives you access to unguarded body language, the way people interact with objects, and the energy that builds when six people in a room start riffing off each other's reactions.

Group dynamics are the biggest casualty of the screen. In a physical focus group, someone laughs, and that laughter changes the energy. Someone's offhand comment triggers a memory in the person next to them. That chain reaction, what we sometimes call "the cascade", produces some of the best qualitative insight. On Zoom, the slight lag, the mute-unmute friction, the fact that only one person can really talk at a time without chaos, all of that dampens the cascade.

The Toss-Up (low sensitivity, considered response needed)

Brand perception studies. Media consumption diaries. Purchase journey mapping. For research questions where you need participants to think, reflect, and articulate a reasoned perspective, online and in-person produce comparable data. The Greenbook research community's analysis confirms this: online groups can match in-person quality when the discussion guide is structured for reflection and the topic doesn't require spontaneous emotional disclosure.

For a well-written research brief with clear objectives and considered questions, the medium matters less than most people assume. Our guide to market research in Singapore covers this in more detail, but the short version is that question design drives data quality more than the room you're sitting in.

Singapore-specific factors that complicate the choice

Most of the published research on online versus in-person qualitative methods comes from Western contexts. Singapore introduces complications that researchers in London or New York don't face.

The language-switching problem

In a physical room, I can hear when a Mandarin-dominant participant is struggling with an English concept. I see the micro-pause, the glance at another participant, the slight body shift before they switch languages. On Zoom, those signals are harder to catch. The compression of video strips away the peripheral cues that a bilingual moderator relies on to know when to switch languages or rephrase a question.

This is not a small issue. Singapore's multicultural research context means that many focus groups involve participants who think in one language and speak in another. The gaps between those two languages (Mandarin thought patterns expressed in English syntax, Malay cultural concepts translated into Chinese business idiom) are where the real insight often lives. And those gaps are wider on screen.

The HDB factor

Singapore is small. Most participants join online sessions from HDB flats where privacy is limited. I've had participants whisper their answers because their mother-in-law was in the next room. I've had a respondent describe his feelings about a premium whisky brand while his kids ran through the background screaming about dinner. These interruptions don't just break the flow. They change what people are willing to say. In our research facility on Beach Road, participants are in a neutral space. Nobody's watching. Nobody's listening from the kitchen. That neutrality has value, especially for market research projects where honest consumer opinion is the entire point.

The "on time" culture

Singaporeans show up. This is one of the genuinely useful advantages of online research here. No-show rates for online sessions run about 5-8% in our panel data, compared to 12-15% for in-person (where traffic on the PIE or a delayed MRT can turn a committed participant into a cancellation). For projects where sample consistency matters more than depth, online's reliability advantage is real. According to IMDA's household infocomm usage data, broadband penetration is near-universal, which means technical failures are rarely the issue. Participants connect, they stay connected, they participate.

A practical decision framework (with honest caveats)

Here's how we actually make the call when scoping a project. I'm sharing this with the caveat that it's based on our experience across 600+ projects, not on a controlled experiment. Other agencies may weigh these factors differently, and they might be right to.

ONLINE VS IN-PERSON DECISION GUIDE

Topic type
  Choose in-person: Emotional, confessional, stigmatized in social settings. Parenting guilt, financial shame, health embarrassment.
  Choose online: Reflective, opinion-based, or stigmatized in personal settings. Brand perceptions, media habits, workplace issues, sexual health.

Data need
  Choose in-person: Spontaneous reactions, body language, product interaction, group energy, cascading discussion.
  Choose online: Considered opinions, structured responses, individual perspectives, diary-style reflection.

Participant demographic
  Choose in-person: Elderly (65+), less digitally comfortable, Mandarin/dialect-dominant, participants who benefit from a moderator's physical presence.
  Choose online: Working professionals with limited daytime availability, parents of young children, participants with mobility issues, geographically dispersed samples.

Budget reality
  Choose in-person: Higher. Facility rental ($300-500/session), F&B, travel incentives, moderator travel if outside the CBD.
  Choose online: Lower. No facility cost, lower incentives (participants save commute time), faster turnaround on recruitment.

Language needs
  Choose in-person: Multilingual sessions where code-switching is expected. Mandarin-English groups, dialect speakers, sessions requiring Singlish nuance.
  Choose online: Single-language sessions in English. Mandarin-only sessions also work well online if the moderator is fluent.
The table looks clean. Reality is messier. Most projects we scope don't sit neatly in one column. A skincare study might need product handling (in-person) for the testing phase and reflective journaling (online) for the diary phase. We increasingly design hybrid methodologies that use both modalities, not as a compromise, but because different phases of the same research question genuinely call for different environments.
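One way to picture that messiness: treat each row of the decision guide as a vote for one modality, and notice how often the votes split. The sketch below is hypothetical, not a formal Assembled scoring methodology; the factor names simply mirror the table rows, and the equal weighting is an assumption for illustration.

```python
# Hypothetical tally of the decision-guide factors. Each factor votes
# for "in-person" or "online"; a split vote hints that a hybrid design
# may fit better than forcing everything into one modality.
from collections import Counter

def suggest_modality(factor_votes: dict[str, str]) -> str:
    """factor_votes maps each guide row to 'in-person' or 'online'."""
    tally = Counter(factor_votes.values())
    in_person, online = tally["in-person"], tally["online"]
    if in_person and online:
        # Mixed signals often mean different phases of the study want
        # different environments, which is when hybrids earn their keep.
        return "consider hybrid"
    return "in-person" if in_person else "online"

# Example: a skincare study with product handling but reflective diaries.
votes = {
    "topic_type": "online",        # reflective, opinion-based
    "data_need": "in-person",      # product interaction
    "participant_demo": "online",  # busy professionals
    "budget": "online",
    "language": "online",          # English-only sessions
}
print(suggest_modality(votes))  # → consider hybrid
```

A single dissenting factor (here, the product-handling data need) is enough to surface the hybrid option, which mirrors how the skincare example in the text splits into an in-person testing phase and an online diary phase.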

What the data quality research actually shows

I should be transparent about what the published evidence says, because it doesn't entirely support my preference for in-person work on sensitive topics.

Greenbook's analysis of industry expert perspectives suggests that online focus groups "can be just as effective as in-person" for many research objectives. The RAND study I mentioned earlier found no substantial thematic differences between modalities. And ESOMAR's research guidelines treat online and in-person methodologies as equivalent in terms of ethical standards and quality expectations.

Where the divergence shows up is in what researchers call "richness." In-person transcripts tend to be longer. They contain more tangents, more unfinished thoughts, more moments where a participant contradicts themselves mid-sentence. These are the moments that produce insight, not just data. Whether those moments matter depends on your research question. For a brand tracking study, they probably don't. For understanding why consumers say one thing and do another, they're everything.

The moderator variable nobody talks about

Here's what I think gets underweighted in the online-vs-in-person debate. The moderator matters more than the medium. A skilled moderator on Zoom will extract better data than an average moderator in a room. The medium is the second-order variable. The first-order variable is whether the person asking the questions knows how to listen, when to push, when to sit in silence, and how to read cultural signals that aren't in the participant's words.

In Singapore's context, that means knowing that a participant's "I think it's okay lah" is not a neutral statement but a polite disagreement. It means catching the pause before a Mandarin speaker switches to English for a concept they can't quite express in their mother tongue. These skills transfer across modalities, but they're slightly harder to deploy through a screen. Slightly. Not impossibly.

Probe for online sessions: "I noticed you paused before answering that. Take your time. There's no wrong answer, and I'm genuinely curious about the hesitation itself, not just the answer."

Where this leaves us (and where I'm still uncertain)

My provisional conclusion (and I want to stress the word "provisional") is that the binary framing of online versus in-person is less useful than most methodology discussions assume. The better question is always: what does this specific research question need?

If you need to understand how a caregiver feels when they watch a parent refuse medication, put them in a room with other caregivers. The shared vulnerability produces data you can't get through a screen. If you need to understand how professionals choose between two insurance products, an online in-depth interview will give you everything you need at half the cost and twice the scheduling convenience.

The methodology cluster of our research (posts on how to analyse focus group data, choosing between IDIs and focus groups, and mobile ethnography approaches) keeps circling the same truth. The method is a tool. The insight comes from knowing which tool fits the job. And sometimes, it appears, the job needs both.

QUESTIONS WORTH EXPLORING

What should brands ask before choosing online or in-person research

Is online qualitative research as reliable as in-person in Singapore?
For most research objectives, online qualitative research produces comparable thematic findings to in-person methods. Published studies show no substantial differences in the themes that emerge from either modality. The difference appears in data "richness": in-person sessions tend to generate longer transcripts with more spontaneous disclosure and unstructured tangents, which can matter for research questions involving emotional or stigmatized topics. For focus groups exploring considered opinions or brand perceptions, online methods work well.
What topics work better for online focus groups?
Online focus groups tend to perform well for brand perception studies, media consumption analysis, purchase journey mapping, and topics where participants benefit from the psychological distance a screen provides. Research on workplace issues, some health conditions, and topics carrying social stigma can actually produce better disclosure online because participants feel more protected in their own environment. The key factor is whether the research needs spontaneous emotional response (better in-person) or considered reflective response (often equivalent online).
How much cheaper is online qualitative research compared to in-person?
Online qualitative research typically costs 30-40% less than equivalent in-person sessions in Singapore, driven by savings on facility rental ($300-500 per session), reduced participant incentives (participants save commute time, so lower incentives are acceptable), and faster recruitment timelines. The cost difference narrows for in-depth interviews (where facility costs are lower) and widens for multi-day studies where facility and logistics costs compound.
Can you moderate multilingual focus groups effectively on Zoom?
Multilingual moderation on video calls is possible but harder than in-person. The compression of video strips away peripheral cues that bilingual moderators rely on to detect when a participant is struggling with language. Code-switching moments (where participants shift between Mandarin and English, or introduce dialect terms) are easier to catch and explore in a physical room. For sessions where language-switching is expected, such as groups with older Mandarin-dominant or dialect-speaking participants, in-person moderation is recommended. English-only sessions and Mandarin-only sessions work well online with a fluent moderator.
When should brands use a hybrid approach combining online and in-person research?
Hybrid approaches work best when different phases of a research project have different data needs. A skincare study might use in-person sessions for product testing (where physical interaction matters) and online sessions for reflective journaling or follow-up interviews. A patient journey study might start with in-person focus groups (for emotional depth) and follow up with online IDIs (for convenience and wider geographic reach). The decision should be driven by what each phase of the research question specifically requires, not by budget alone.
Observations in this post draw on patterns from Assembled's qualitative methodology research in Singapore, including focus group discussions, in-depth interviews, and hybrid online-offline research designs. Secondary data from IMDA, SingStat, and published methodology research from ESOMAR. For research enquiries, contact felicia@assembled.sg.
RESEARCH ENQUIRY

Choosing the right modality for your next qualitative study in Singapore

Online and in-person qualitative methods answer different questions differently. If you're scoping a research project and need to determine which approach (or hybrid combination) will produce the data your decisions actually require, we design the methodology around the question, not the budget.

Request a quote →

Felicia Hu

Managing Director, Assembled

Felicia personally scopes, moderates, analyses, and presents every project. With 600+ studies completed since 2016 and publications in MRS Research Live and ESOMAR Research World, she brings a researcher's instinct to every conversation. A bilingual moderator in English and Mandarin with fluency in Hokkien, Cantonese, and Singlish.

Felicia Hu

Founder and Managing Director of Assembled, Singapore’s best-reviewed market research agency (700+ five-star Google reviews). 600+ projects since 2016 across skincare, financial services, F&B, healthcare, luxury goods, retail, aviation, and technology. Research World, MRS LIVE columnist. Quoted in South China Morning Post. ESOMAR standards. Bilingual fieldwork in English and Mandarin from a 100,000-member proprietary panel. More about Felicia → https://www.linkedin.com/in/feliciahuyanling/

https://assembled.sg/