Online Qualitative Research vs In-Person in Singapore
I moderated a focus group on financial planning attitudes last month. Six participants, all professionals in their 30s. Three joined from their living rooms on Zoom. Three sat in the discussion room at our office on Beach Road. Same screener, same discussion guide, same 90-minute session. And the data that came back was not the same.
The three in the room told me about the guilt they feel when they overspend. One woman described hiding a Sephora bag from her husband in the boot of her car. The participants on Zoom? They gave me answers that were accurate, structured, well-considered. But the Sephora-bag confession never would have happened through a laptop camera. I'm fairly sure of that.
This matters because Singapore has become one of the most digitally connected populations on earth. IMDA's digital society data shows 99% of resident households now have internet access. SingStat's age-group data confirms that internet usage is essentially universal among those aged 15 to 59. The infrastructure for online research is perfect. The question is whether perfect infrastructure produces perfect data.
It doesn't. Not always. But it doesn't always fall short, either, and that's the part most methodology debates get wrong.
The convenience argument solved the wrong problem
When virtual focus groups became the default during 2020 and 2021, the industry framed it as a temporary fix. We'd go back to rooms when we could. Research from Archibald et al. in the International Journal of Qualitative Methods confirmed that Zoom was viable for qualitative data collection, that participants found it convenient, and that discussion quality held up reasonably well. What the early research didn't capture (and I think couldn't capture at that point) was how the medium would reshape participant behaviour over time once the novelty wore off.
Singapore participants adapted fast. According to IMDA's 2025 Digital Economy Report, Singapore's digital economy now contributes 18.6% of GDP. Employees average 2.8 remote working days per week, the highest in Asia. Video calls became muscle memory. And therein lies the problem for researchers.
When participants are deeply comfortable with the video call format, they default to their professional presentation mode. They self-edit more. They give you the boardroom version of their opinions, not the kitchen-table version. I first noticed this in an IDI series we ran for a healthcare client. Participants on Zoom sat straighter, used more complete sentences, and volunteered fewer digressions. The same demographic, recruited from the same panel, gave us messier and more honest responses in person.
Actually, let me qualify that. "More honest" is the wrong framing. The Zoom participants weren't lying. They were performing a slightly polished version of themselves, the way anyone does when they know they're on camera in their professional context. The in-person participants were performing too, but the social dynamics of a physical room create different kinds of performance. Ones that, in our experience, tend to produce richer data for certain research questions.
What the screen changes (and what it doesn't)
I've been building a mental model for this over the past two years, and I think it breaks into two dimensions. The first is topic sensitivity. The second is how much the research question depends on spontaneous, unstructured response versus considered, reflective response.
THE SCREEN EFFECT MATRIX

                          Spontaneous response needed    Considered response needed
High topic sensitivity    The Confessional (in-person)   The Shield (online can win)
Low topic sensitivity     The Room (in-person)           The Toss-Up (either works)
Let me walk through each quadrant because the examples matter more than the labels (and I'm still refining whether "spontaneity" is the right axis name, or whether it's something closer to "embodied response").
The Confessional (high sensitivity, high spontaneity needed)
We ran a study on chronic disease management in Singapore where patients needed to describe the gap between what they told their doctors and what they actually did at home. In person, participants would lean forward, lower their voices, start a sentence with "Don't judge me, but..." and describe skipping medication for weeks. On Zoom, these same revelations came out more carefully packaged. The shame was still there but filtered through a screen that gave participants just enough distance to clean it up.
For research that needs raw emotional data, in-person is almost always better. The physical room creates a shared vulnerability that a grid of video tiles cannot replicate.
The Shield (high sensitivity, considered response needed)
Here's where it gets interesting, and where I had to revise my initial assumptions. For some sensitive topics, the screen is actually an advantage. We've seen this in mental health research, in studies on therapy-seeking behaviour, and in research on workplace discrimination. When the topic carries social stigma and what you need is the participant's considered perspective (rather than raw emotional response), the distance of a screen can function like a confessional booth. Participants feel protected enough to share.
A RAND Corporation study comparing online and in-person qualitative methods found that online modalities did not produce substantially different thematic findings, though the in-person data tended to be richer in word count. That richness distinction matters. Themes were preserved, but texture was sometimes lost.
The Room (low sensitivity, high spontaneity needed)
Try running a product testing session on Zoom. I mean, you can. We have. But when a participant picks up a skincare product, turns it over, frowns at the ingredient list, and puts it back down, all within three seconds, that micro-rejection tells you something no online survey or video call will capture. The physical room gives you access to unguarded body language, the way people interact with objects, and the energy that builds when six people in a room start riffing off each other's reactions.
Group dynamics are the biggest casualty of the screen. In a physical focus group, someone laughs, and that laughter changes the energy. Someone's offhand comment triggers a memory in the person next to them. That chain reaction, what we sometimes call "the cascade", produces some of the best qualitative insight. On Zoom, the slight lag, the mute-unmute friction, the fact that only one person can really talk at a time without chaos, all of that dampens the cascade.
The Toss-Up (low sensitivity, considered response needed)
Brand perception studies. Media consumption diaries. Purchase journey mapping. For research questions where you need participants to think, reflect, and articulate a reasoned perspective, online and in-person produce comparable data. The Greenbook research community's analysis confirms this: online groups can match in-person quality when the discussion guide is structured for reflection and the topic doesn't require spontaneous emotional disclosure.
For a well-written research brief with clear objectives and considered questions, the medium matters less than most people assume. Our guide to market research in Singapore covers this in more detail, but the short version is that question design drives data quality more than the room you're sitting in.
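For readers who prefer the matrix as explicit logic, the four quadrants above reduce to a simple lookup. This is an illustrative sketch only; the function name and labels are mine, not a scoping tool we actually use:

```python
def recommend_modality(topic_sensitivity: str, response_mode: str) -> str:
    """Map the two matrix dimensions to a quadrant and a default modality.

    topic_sensitivity: "high" or "low"
    response_mode: "spontaneous" or "considered"
    """
    quadrants = {
        ("high", "spontaneous"): ("The Confessional", "in-person"),
        ("high", "considered"):  ("The Shield", "online"),
        ("low",  "spontaneous"): ("The Room", "in-person"),
        ("low",  "considered"):  ("The Toss-Up", "either"),
    }
    name, modality = quadrants[(topic_sensitivity, response_mode)]
    return f"{name}: default to {modality}"

# A chronic-disease adherence study needs raw emotional data:
print(recommend_modality("high", "spontaneous"))
# → The Confessional: default to in-person
```

The defaults are starting points, not verdicts; as the rest of this piece argues, the moderator and the question design can override the quadrant.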
Singapore-specific factors that complicate the choice
Most of the published research on online versus in-person qualitative methods comes from Western contexts. Singapore introduces complications that researchers in London or New York don't face.
The language-switching problem
In a physical room, I can hear when a Mandarin-dominant participant is struggling with an English concept. I see the micro-pause, the glance at another participant, the slight body shift before they switch languages. On Zoom, those signals are harder to catch. The compression of video strips away the peripheral cues that a bilingual moderator relies on to know when to switch languages or rephrase a question.
This is not a small issue. Singapore's multicultural research context means that many focus groups involve participants who think in one language and speak in another. The gaps between those two languages (Mandarin thought patterns expressed in English syntax, Malay cultural concepts translated into Chinese business idiom) are where the real insight often lives. And those gaps are wider on screen.
The HDB factor
Singapore is small. Most participants join online sessions from HDB flats where privacy is limited. I've had participants whisper their answers because their mother-in-law was in the next room. I've had a respondent describing his feelings about a premium whisky brand while his kids ran through the background screaming about dinner. These interruptions don't just break the flow. They change what people are willing to say. In our research facility on Beach Road, participants are in a neutral space. Nobody's watching. Nobody's listening from the kitchen. That neutrality has value, especially for market research projects where honest consumer opinion is the entire point.
The "on time" culture
Singaporeans show up. This is one of the genuinely useful advantages of online research here. No-show rates for online sessions run about 5-8% in our panel data, compared to 12-15% for in-person (where traffic on the PIE or a delayed MRT can turn a committed participant into a cancellation). For projects where sample consistency matters more than depth, online's reliability advantage is real. According to IMDA's household infocomm usage data, broadband penetration is near-universal, which means technical failures are rarely the issue. Participants connect, they stay connected, they participate.
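Those no-show rates translate directly into recruitment maths. A rough sketch using the 5-8% online and 12-15% in-person figures from our panel data above (the function is illustrative, not our actual scoping calculator):

```python
import math

def recruits_needed(target_seats: int, no_show_rate: float) -> int:
    """Recruit enough participants so that, at the expected no-show
    rate, the session still fills the target number of seats."""
    return math.ceil(target_seats / (1 - no_show_rate))

# Six-seat focus group, at the top of each no-show range:
print(recruits_needed(6, 0.08))  # online    → 7
print(recruits_needed(6, 0.15))  # in-person → 8
```

One extra recruit per six-seat group sounds trivial until you multiply it across a twelve-group programme, which is where online's reliability advantage shows up in the budget.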
A practical decision framework (with honest caveats)
Here's how we actually make the call when scoping a project. I'm sharing this with the caveat that it's based on our experience across 600+ projects, not on a controlled experiment. Other agencies may weigh these factors differently, and they might be right to.
ONLINE VS IN-PERSON DECISION GUIDE

If the project needs...                            Lean towards
Raw emotional disclosure                           In-person
Considered views on a stigmatised topic            Online
Product handling and unguarded body language       In-person
Group dynamics and the cascade                     In-person
Bilingual moderation cues                          In-person
Scheduling reliability and sample consistency      Online
Lower cost per session                             Online
A neutral space away from the household            In-person
The table looks clean. Reality is messier. Most projects we scope don't sit neatly in one column. A skincare study might need product handling (in-person) for the testing phase and reflective journaling (online) for the diary phase. We increasingly design hybrid methodologies that use both modalities, not as a compromise, but because different phases of the same research question genuinely call for different environments.
What the data quality research actually shows
I should be transparent about what the published evidence says, because it doesn't entirely support my preference for in-person work on sensitive topics.
Greenbook's analysis of industry expert perspectives suggests that online focus groups "can be just as effective as in-person" for many research objectives. The RAND study I mentioned earlier found no substantial thematic differences between modalities. And ESOMAR's research guidelines treat online and in-person methodologies as equivalent in terms of ethical standards and quality expectations.
Where the divergence shows up is in what researchers call "richness." In-person transcripts tend to be longer. They contain more tangents, more unfinished thoughts, more moments where a participant contradicts themselves mid-sentence. These are the moments that produce insight, not just data. Whether those moments matter depends on your research question. For a brand tracking study, they probably don't. For understanding why consumers say one thing and do another, they're everything.
The moderator variable nobody talks about
Here's what I think gets underweighted in the online-vs-in-person debate. The moderator matters more than the medium. A skilled moderator on Zoom will extract better data than an average moderator in a room. The medium is the second-order variable. The first-order variable is whether the person asking the questions knows how to listen, when to push, when to sit in silence, and how to read cultural signals that aren't in the participant's words.
In Singapore's context, that means knowing that a participant's "I think it's okay lah" is not a neutral statement but a polite disagreement. It means catching the pause before a Mandarin speaker switches to English for a concept they can't quite express in their mother tongue. These skills transfer across modalities, but they're slightly harder to deploy through a screen. Slightly. Not impossibly.
Probe for online sessions: "I noticed you paused before answering that. Take your time. There's no wrong answer, and I'm genuinely curious about the hesitation itself, not just the answer."
Where this leaves us (and where I'm still uncertain)
My provisional conclusion, and I want to stress that word "provisional", is that the binary framing of online versus in-person is less useful than most methodology discussions assume. The better question is always: what does this specific research question need?
If you need to understand how a caregiver feels when they watch a parent refuse medication, put them in a room with other caregivers. The shared vulnerability produces data you can't get through a screen. If you need to understand how professionals choose between two insurance products, an online in-depth interview will give you everything you need at half the cost and twice the scheduling convenience.
The methodology cluster of our research (posts on how to analyse focus group data, choosing between IDIs and focus groups, and mobile ethnography approaches) keeps circling the same truth. The method is a tool. The insight comes from knowing which tool fits the job. And sometimes, it appears, the job needs both.
What should brands ask before choosing online or in-person research?

- Is online qualitative research as reliable as in-person in Singapore?
- What topics work better for online focus groups?
- How much cheaper is online qualitative research compared to in-person?
- Can you moderate multilingual focus groups effectively on Zoom?
- When should brands use a hybrid approach combining online and in-person research?
Choosing the right modality for your next qualitative study in Singapore
Online and in-person qualitative methods answer different questions differently. If you're scoping a research project and need to determine which approach (or hybrid combination) will produce the data your decisions actually require, we design the methodology around the question, not the budget.