Market research has always had two lanes. If you wanted numbers, you ran a survey: big sample, fixed questions, clean comparisons. If you wanted understanding, you ran qual: small sample, flexible conversations, deeper insight.
Different tools for different jobs. That wasn’t a limitation. It was clarity.
What’s Actually New
AI has blurred that line. Now you can run something that feels like hundreds of interviews at once. Instead of asking:
“On a scale of 1–7…”
You ask:
“Tell me about the last time your phone plan frustrated you.”
Then you follow up. And follow up again. It’s more engaging. People actually say things. And you end up with a massive amount of rich, messy, human data, plus tools that can analyze it quickly. That’s real progress.
Why This Is Genuinely Useful
Traditional surveys are terrible at getting people to explain themselves. You ask an open-ended question and get “fine” or “idk.” Conversational formats fix that. They pull people in and keep them talking. You get actual thoughts instead of placeholders. More importantly, you get context.
Surveys are great at telling you how many people think something. They’re not great at telling you what that belief actually means in someone’s life.
Conversational approaches surface the stories, the nuance, the unexpected stuff you didn’t think to ask. And they make qual more accessible. Faster turnaround, lower cost, easier to scale. For a lot of teams, that’s a big step forward.
Where People Get It Wrong
This is where things go sideways.
1. You still need real qual upfront
If you don’t understand the space, the language, and the assumptions going in, a conversational survey won’t magically fix that. Human-led qual still plays a critical role at the beginning. You can’t skip it and expect the system to figure everything out mid-flight.
2. Open text is not measurement
This is the biggest mistake I see. Conversational responses are not designed for statistical measurement. Yes, AI can organize the data. It can identify themes, count mentions, and surface patterns. That’s all valuable. But it’s not the same as structured survey data.
In a traditional survey, everyone answers the exact same question with the same options. That consistency is what allows you to say, “40% chose cost.”
In a conversational flow, that consistency disappears. Some people are asked directly about cost. Others mention it on their own. Others never get there at all.
So when you say, "40% mentioned cost," you're describing a pattern in language, not a measured preference. Those are different things, and they shouldn't be reported as if they were interchangeable.
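The denominator problem is easiest to see in code. Here's a minimal sketch with hypothetical data showing why "40% chose cost" and "40% mentioned cost" are different measurements:

```python
# Illustrative sketch (hypothetical data): structured choice vs. open-text mention.

# Structured survey: every respondent answered the same closed question,
# so everyone had the same opportunity to choose "cost".
structured = ["cost", "coverage", "cost", "speed", "cost"]
pct_chose_cost = structured.count("cost") / len(structured)
# 3/5 = 60%: a measured preference share.

# Conversational survey: transcripts vary, so a mention count sits on
# an unstable denominator -- not everyone was even asked about cost.
transcripts = [
    {"asked_about_cost": True,  "mentioned_cost": True},
    {"asked_about_cost": False, "mentioned_cost": True},   # volunteered it
    {"asked_about_cost": False, "mentioned_cost": False},  # never got there
    {"asked_about_cost": True,  "mentioned_cost": False},
    {"asked_about_cost": False, "mentioned_cost": False},
]
mention_rate = sum(t["mentioned_cost"] for t in transcripts) / len(transcripts)
# 2/5 = 40% "mentioned cost" -- but only 2 of 5 were directly asked,
# so this is a pattern in language, not a comparable preference share.
```

The two numbers can even coincide, yet they answer different questions: one measures a choice under identical conditions, the other counts how often a topic happened to come up.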
Don’t try to fake it
There’s a growing temptation to skip structured questions and let AI reconstruct the numbers from open text.
That doesn’t work.
You can use open text to explore ideas and add depth. But you cannot replace a multiple-choice question with a conversational prompt and expect comparable, benchmarkable results. If you need defensible numbers, you still need structured questions. There’s no shortcut.
Where tools like Blix fit
AI text analysis is incredibly powerful. Platforms like Blix can process massive volumes of open-ended data: 100,000 responses per month across dozens of markets and languages. That used to take an army of coders. Now it takes minutes.
But there’s an important detail: this works best when everyone is answering the same prompt. Fixed open-ended questions, not fully dynamic conversations.
You can absolutely analyze conversational data with tools like this. You just have to be clear about what you’re getting. It’s structured insight. It’s not a statistically valid measurement.
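To make the fixed-prompt point concrete, here's a minimal illustration of theme coding, not Blix's actual method, just a keyword-based sketch. The theme names and keywords are hypothetical. Because every respondent answered the same prompt, the theme counts at least share one denominator:

```python
# Minimal keyword-based theme coding for a fixed open-ended question.
# THEMES and keywords are illustrative assumptions, not a real taxonomy.
THEMES = {
    "cost": ["price", "expensive", "bill", "cost"],
    "coverage": ["signal", "coverage", "dead zone"],
    "support": ["support", "customer service", "hold"],
}

def code_response(text: str) -> set[str]:
    """Return the set of themes whose keywords appear in a response."""
    lowered = text.lower()
    return {theme for theme, keywords in THEMES.items()
            if any(kw in lowered for kw in keywords)}

responses = [
    "The bill keeps going up, it's too expensive.",
    "I lose signal at home, total dead zone.",
    "Spent an hour on hold with customer service about my bill.",
]
coded = [code_response(r) for r in responses]
# A response can carry multiple themes (the third hits both support and cost),
# which is exactly why theme counts are structured insight, not percentages
# of a single-choice question.
```

Real platforms use far more sophisticated models than keyword matching, but the caveat is the same: the output is organized language, not a statistically valid measurement.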
3. Bias doesn’t disappear, it scales
Conversational systems feel neutral, but they're not. The way questions are phrased and the way follow-ups are triggered both introduce bias. And when you scale it, you don't dilute that bias. You multiply it.
A slightly leading prompt becomes a consistent distortion across your dataset.
4. AI analysis can create false confidence
When you’re dealing with thousands of responses, it’s tempting to trust the summary and move on. But if no one reads the raw data, errors slip through. Weak signals get over-interpreted. Clean narratives form around shaky foundations. AI speeds things up. It doesn’t guarantee you’re right.
5. You’re asking more from respondents than you think
Conversational surveys feel easier, but they’re often more demanding. You’re asking people to reflect, explain, and articulate their thoughts. That takes effort. If you’re going to do that, respect it. Keep things tight. Set expectations. Pay people appropriately.
Conversational surveys are great for exploratory work, figuring out how people think, what matters to them, and how they talk about it.
They’re not great when precision matters. If you need to track change over time, benchmark results, or make decisions where small differences matter, you need consistency more than conversation.
The best approach is usually hybrid: structured questions for measurement, conversational elements for depth.
One Last Thing
You want to use conversational surveys with real human respondents, like those from Ola Surveys’ Survey Diem panel.
There are plenty of reasons to be skeptical of LLM-generated survey data, but the thought of a bot interviewing another bot should make your brain hurt and leave you wondering whether we're really talking about research at all.
Conversational surveys aren't the future of all research. They are the future of a specific kind of research: fast, exploratory research at scale, or blended research that pairs structured survey questions with conversational elements.
For strictly qual research, is qual at scale overkill? Sometimes. But more often than not, the more data you have, the better.
Remember also that traditional qual is a good fit when you can only find a couple dozen qualified participants. Conversational surveys and other qual-at-scale approaches tend to make more sense when you can access several hundred qualified participants. If you only have a handful of conversation opportunities, you probably want to stick with skilled human moderators.
Most survey analysis focuses on descriptive analysis, with diagnostic analysis used to explain key drivers.
What are the four types of survey methods?
Common survey methods include:
Online surveys
Phone surveys
Paper surveys
In-person interviews
Online surveys are the most popular type used today due to speed, reach, and ease of analysis.
How do you analyze open-ended survey questions at scale?
Manual verbatim coding becomes inefficient and inconsistent as response volume grows. Software-based analysis platforms, such as Blix, support scalable qualitative analysis by automatically organizing, categorizing, and summarizing text responses across large datasets.
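Once responses are tagged with themes, whether by human coders, a model, or a platform like Blix, summarizing a large dataset reduces to aggregation. A rough sketch with hypothetical tags (not any platform's real API):

```python
# Aggregating coded open-ends at scale: each response carries a set of
# theme tags from an upstream coding step (tags here are made up).
from collections import Counter

coded = [
    {"cost"}, {"coverage"}, {"cost", "support"},
    {"cost"}, set(), {"coverage", "cost"},
]

# Count how many responses each theme appears in.
theme_counts = Counter(theme for tags in coded for theme in tags)
total = len(coded)

# Report mentions against the number of responses coded -- framed as
# "x of n responses", not as a preference percentage.
summary = {theme: f"{count}/{total} responses"
           for theme, count in theme_counts.most_common()}
# "cost" appears in 4 of 6 responses; the empty set is a response
# where no theme was detected, which still belongs in the denominator.
```

The consistency caveat from earlier applies here too: this kind of summary is only comparable across respondents when everyone answered the same fixed prompt.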
Cam Wall
Research Industry Professional
Cam Wall is a senior executive with 20 years of experience leading research operations, data science teams, product innovation, and enterprise strategic initiatives in market research. He spent six years on the executive leadership team of a $75M insights organization and currently serves as the founder and CEO of Ola Surveys, where he seeks to improve the quality of survey data through better incentives and enhanced vetting of research participants.