Mobile Ethnography in Singapore: When Traditional Research Methods Fall Short

Assembled is a market research agency in Singapore with 600+ projects completed across Southeast Asia since 2016, a 100,000-member proprietary panel, and publications in MRS Research Live and ESOMAR Research World. This mobile ethnography methodology draws on patterns from research projects moderated by founder Felicia Hu, who scopes, moderates, analyses, and presents every project herself. Felicia moderates bilingually in English and Mandarin, with fluency in Hokkien, Cantonese, and Singlish, and was recently quoted in the South China Morning Post on cultural nuance in market research: in Singapore's high-context culture, a participant who says "can consider" is saying no.

Why Traditional Research Asks You to Lie (and Mobile Ethnography Doesn't)

I was watching a focus group last week where a participant described her morning skincare routine: cleanser, toner, essence, serum, moisturizer, SPF. Thorough. Disciplined. Ten minutes. Then, during the break, I watched her walk out to the bathroom and splash water on her face for maybe 20 seconds, pat dry, and apply moisturizer. That was it. No essence, no serum, no toner.

That gap between what people say and what they do isn't deception—it's how human memory works. Traditional research methods ask people to remember behavior. Mobile ethnography asks them to document it in real-time. The difference in data quality can be dramatic (though I'm still refining how consistent this gap is across different product categories and consumer segments).

When consumers recall their behavior in focus groups or surveys, memory reconstructs rather than retrieves. The timeline compresses. Details blur. Social desirability reshapes the narrative. What participants believe happened diverges from what actually happened. Mobile ethnography uses smartphones to capture behavior as it occurs. Participants photograph purchases, video their routines, log decisions in the moment. The gap between memory and reality closes.

The Infrastructure Exists. The Question Is How to Use It Well

According to IMDA's household technology adoption data, smartphone penetration in Singapore exceeds 90%. Nearly everyone carries a documentation device. The Enterprise Singapore market research resources also emphasize mobile-first methodologies for capturing behavior in context. The infrastructure exists. The question is how to use it well.

Mobile ethnography works best for three types of research questions. High-frequency, low-salience behaviors (how do people actually use your product day-to-day?). Context-dependent decisions (what triggers a snack purchase? what makes someone open a delivery app?). Journeys that unfold over time (major purchases involve research and consideration across weeks or months; mobile ethnography tracks how these actually unfold).

For each, the method is the same: participants download an app or join a WhatsApp-based platform and receive daily or situation-triggered prompts asking them to document specific behaviors. They photograph meals, video their routines, log whenever they make a purchase. The specificity matters—vague tasks produce thin data.
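To make the mechanics concrete, here is a minimal Python sketch of how a daily prompt schedule might be generated for a study like this. The function name, study length, and task wording are illustrative assumptions, not any platform's API:

```python
from datetime import date, timedelta

def build_prompt_schedule(start: date, days: int, daily_tasks: list[str]) -> list[dict]:
    """Generate one dated prompt per task per day of the study."""
    schedule = []
    for offset in range(days):
        day = start + timedelta(days=offset)
        for task in daily_tasks:
            schedule.append({"date": day.isoformat(), "prompt": task})
    return schedule

# Illustrative 7-day photo-diary tasks (wording invented for this sketch)
tasks = [
    "Photograph your complete skincare routine this morning.",
    "Tell us what you were thinking if you skipped a step.",
]
schedule = build_prompt_schedule(date(2024, 3, 4), days=7, daily_tasks=tasks)
print(len(schedule))  # 14 prompts: 2 tasks x 7 days
```

A real deployment would push each prompt through the app or WhatsApp platform at a set time; the point here is only that specific, pre-planned tasks beat ad-hoc requests.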

How Mobile Ethnography Actually Works in Practice

Studies typically run 3-14 days. Shorter periods miss patterns. Longer periods cause fatigue and declining compliance (though I've seen exceptions where highly engaged participants sustain quality submissions for 21 days). A week often hits the sweet spot for consumer behavior research.

The work is in the task design. "Photograph your bathroom cabinet" produces limited insight. "Photograph your bathroom cabinet, then tell us: which products do you use daily? Which are aspirational (you own but rarely use)? Which are medical necessities?" produces insight. The difference is whether the prompt surfaces thinking or just documentation.

Photo diary task (skincare): "Photograph your complete skincare routine each morning for 7 days. We're looking for: products used, actual sequence, time spent, and whether anything gets skipped. Tell us what you're thinking when you skip something."

Researchers review photos, videos, and text logs to identify patterns, contradictions, and insights that wouldn't surface in traditional methods. Some platforms offer AI-assisted initial coding (or so the vendors claim; I want to stress-test that before relying on it), but human interpretation remains essential.

When Mobile Ethnography Beats Traditional Methods

For understanding daily habits and routines, mobile ethnography wins because it captures real-time behavior instead of recalled behavior. Memory gaps disappear. For mapping decision triggers in context, it wins because participants document the environment, the moment, the emotional state, and the outcome. For validating claimed versus actual behavior, it wins because documented evidence is harder to reinterpret.

But mobile ethnography has real limitations. It captures individual behavior only (no group discussion). Follow-up probing is limited (you can ask clarifying questions via app but you don't get the conversational richness of an in-depth interview). It requires sustained participant effort (fatigue and drop-off are real concerns).

| Research Need | Traditional Methods | Mobile Ethnography |
| --- | --- | --- |
| Daily habits and routines | Memory gaps | Real-time capture |
| Emotional responses to concepts | In-person depth | Limited probing |
| Decision triggers in context | Context absent | Captures environment |
| Idea generation through discussion | Focus groups excel | Individual only |
| Validate say-do gap | Gap persists | Evidence-based |
| Track behavior over time | Diary studies possible | Continuous capture |

Designing Tasks That Produce Actual Insight

Task type determines output. Photo diary tasks ("photograph every meal for 7 days") reveal actual food choices versus claimed diet. Video capture tasks ("record your morning skincare routine") show products used, sequence, and time spent. Triggered log tasks ("log whenever you consider ordering food delivery") surface decision triggers, context, and outcome. Receipt capture ("photograph all grocery receipts this week") reveals actual purchase behavior versus intent. Environment scans ("show us inside your bathroom cabinet") reveal product inventory reality.

The specificity matters. A task that asks "How do you use this product?" gets generic answers. A task that asks "Show us your product. Tell us how long you've owned it. When did you last use it? Do you think you'll use it again this week?" gets grounded responses (though I'm still testing optimal probe intensity—more detailed prompts drive better insight but also higher fatigue).

The Practical Challenges (and How to Solve Them)

Participant fatigue is real. Tasks that are too frequent or too demanding produce declining compliance. Solution: keep tasks simple. Offer escalating incentives for full completion. Pilot the tasks with a small group first and gauge burden.
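One way to gauge burden in that pilot is to track compliance day by day and watch for a declining trend. A minimal sketch, with invented participant IDs and submission data:

```python
def daily_compliance(submissions: dict[str, set[str]], participants: set[str]) -> dict[str, float]:
    """Share of participants who submitted anything, per study day."""
    return {day: len(done & participants) / len(participants)
            for day, done in sorted(submissions.items())}

# Illustrative pilot data: which participants submitted on each day
participants = {"p1", "p2", "p3", "p4"}
submissions = {
    "day1": {"p1", "p2", "p3", "p4"},
    "day2": {"p1", "p2", "p3"},
    "day3": {"p1", "p2"},
}
rates = daily_compliance(submissions, participants)
# day1 1.0, day2 0.75, day3 0.5: a declining trend worth acting on
```

If the curve drops this steeply by day three of a pilot, the tasks are too demanding for a full week.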

Privacy concerns exist. Some participants are uncomfortable documenting their lives. Solution: explain data handling clearly. Allow opt-outs for specific tasks. Build trust before requesting sensitive documentation (bathroom cabinets, financial receipts, medication routines).

Analysis complexity is significant. Hundreds of photos and videos require systematic review. Solution: code rigorously. Use multiple researchers. Don't cherry-pick examples that confirm hypotheses. Let patterns emerge from the data, not your expectations.
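For the systematic review itself, tallying code frequencies across all submissions is a simple guard against cherry-picking: patterns come from totals, not hand-picked examples. A sketch with invented code labels:

```python
from collections import Counter

def code_frequencies(coded_submissions: list[list[str]]) -> Counter:
    """Count how often each analysis code appears across ALL submissions."""
    return Counter(code for codes in coded_submissions for code in codes)

# Illustrative coded submissions (labels invented for this sketch)
coded = [
    ["skipped_step", "time_pressure"],
    ["full_routine"],
    ["skipped_step", "travel"],
]
freq = code_frequencies(coded)
print(freq["skipped_step"])  # 2: the most frequent code in this toy data
```

The codebook and the coding of each photo or log still require human judgment; the counting is just the discipline that keeps the write-up honest.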


Questions Worth Exploring

What Should You Ask Before Choosing Mobile Ethnography?

What specific behaviors do we need to observe?
Be explicit. "How often do people use our product?" is vague. "Do people use our product in the morning or evening? Do they use it standalone or with other products? How long between application and benefit perception?" is actionable. If you can't describe the behavior precisely, mobile ethnography might not be the right method—you might need focus groups for exploratory research first.
Could we learn this through recall-based methods?
If the behavior is infrequent (major purchases, rare events), participants can recall it accurately. If it's daily, habitual, or context-dependent, recall fails. That's when mobile ethnography's real-time capture becomes essential. Think about your specific behavior before committing to the method.
What's the minimum documentation burden that produces usable data?
Pilot the tasks first. Ask 5-10 people to complete your proposed tasks for 2 days. Track where they get confused, where they drop off, where they provide thin responses. Then revise before launching the full study. This prevents wasting time and budget on poorly designed data collection.
Have we piloted the tasks?
Do not skip the pilot. Untested tasks often produce bad data. Ambiguous prompts lead to inconsistent responses. Overly burdensome tasks lead to drop-off. A two-day pilot with 10 participants costs minimal time and reveals problems before they become expensive.
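As one illustrative check for thin responses during that pilot, a word-count floor is a crude but workable first filter. The threshold, task names, and response text here are invented:

```python
def flag_thin_responses(responses: dict[str, str], min_words: int = 15) -> list[str]:
    """Return task IDs whose text responses fall below a word-count floor,
    a rough proxy for 'thin' data worth revising before full launch."""
    return [task for task, text in responses.items()
            if len(text.split()) < min_words]

# Illustrative pilot responses
responses = {
    "cabinet_photo": "Here is my cabinet",
    "routine_video": ("I filmed my full morning routine and talked through "
                      "each product, why I use it, and what I skip when I am rushed"),
}
print(flag_thin_responses(responses))  # ['cabinet_photo']
```

A flagged task usually means the prompt was ambiguous or didn't ask for thinking, not that the participant was lazy; revise the prompt, not the panel.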
What patterns emerge that contradict our hypotheses?
During analysis, you'll find behavior that contradicts what you expected. This is where mobile ethnography's value lies. Don't explain it away. Investigate it. Ask follow-up questions: Why did this happen? What were the circumstances? The contradictions often hold the most valuable insights.
Where does documented behavior diverge from what we expected?
Mobile ethnography excels at revealing the say-do gap. When actual behavior differs from claimed behavior, you've found a gap in the market (or a problem with your positioning). Use this moment to reframe your research questions: What's driving the actual behavior? And how should we position our product or service to match reality instead of aspiration?
Observations in this post draw on patterns from Assembled's 600+ qualitative research projects across Southeast Asia. For research enquiries, contact felicia@assembled.sg.
RESEARCH ENQUIRY

Designing mobile ethnography that captures what your customers actually do

Mobile ethnography closes the gap between what consumers tell you and what they actually do. We design studies that capture real-time decisions in real contexts.

Request a quote →

Felicia Hu, Managing Director

600+ qualitative research projects across Singapore and Southeast Asia since 2016. Published in Research Live (MRS UK) and Research World (ESOMAR). Quoted in the South China Morning Post. Bilingual moderation in English and Mandarin. NVPC Company of Good Fellow.
