
March 23, 2026
Conjoint analysis is a statistical technique used in market research to understand how people make decisions between competing products or services.
Instead of asking respondents what they prefer directly, it presents them with realistic combinations of product features and asks them to choose between options (much like they would when shopping in real life).
For example, think about choosing between two streaming plans: one that’s cheaper with ads and another that’s more expensive but ad-free with offline viewing.
By analyzing these choices, researchers can see which features influence decisions the most. This strategy helps research teams move beyond stated preferences to understand what truly drives purchase behavior.
In this article, we’ll explain what conjoint analysis is, how teams apply it in real-world market research, the main types of conjoint analysis, and when to use each method.
In a conjoint analysis study, researchers show respondents multiple versions of a product or service, each with a different combination of features. These features might include aspects like price, delivery time, warranty, or brand reputation.
Respondents are then asked to choose the option they would be most likely to buy or use from each set.
Each choice reveals how much value someone places on individual features by forcing them to make trade-offs. This doesn’t happen just once; respondents are shown several rounds of choices where the feature combinations change each time.
For example, let’s say someone is comparing two smartphones across different rounds, where one model is cheaper and the other offers a longer battery life.
By using conjoint analysis to look at patterns across many of these comparisons, researchers can determine which features matter most and how much people are willing to trade one benefit for another.
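To make the pattern-counting idea concrete, here is a minimal Python sketch (not the estimation method a commercial conjoint platform would use) with hypothetical choice tasks. It tallies how often each feature level wins when shown, and treats the spread of win rates within an attribute as a rough importance signal:

```python
from collections import defaultdict

def importance_from_choices(tasks):
    """Rough attribute importance from choice tasks.

    Each task is (options, chosen_index), where options is a list of
    dicts mapping attribute -> level. For every level we count how often
    it appeared and how often its option was chosen; the spread of win
    rates across an attribute's levels indicates how much leverage that
    attribute had on the decision.
    """
    shown = defaultdict(int)
    won = defaultdict(int)
    for options, chosen in tasks:
        for i, option in enumerate(options):
            for attr, level in option.items():
                shown[(attr, level)] += 1
                if i == chosen:
                    won[(attr, level)] += 1
    win_rate = {key: won[key] / shown[key] for key in shown}
    importance = {}
    for attr in {attr for attr, _ in win_rate}:
        rates = [rate for (a, _), rate in win_rate.items() if a == attr]
        importance[attr] = max(rates) - min(rates)
    return importance

# Hypothetical smartphone tasks: this respondent always picks the cheaper phone.
tasks = [
    ([{"price": "$500", "battery": "10h"}, {"price": "$800", "battery": "20h"}], 0),
    ([{"price": "$500", "battery": "20h"}, {"price": "$800", "battery": "10h"}], 0),
]
print(importance_from_choices(tasks))
```

Here price fully explains the choices while battery life explains nothing, so price gets the maximum spread. Real studies estimate part-worth utilities with statistical models (covered later in this article); simple counting like this only gives a directional read.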
Before running a conjoint analysis study, research teams first need to decide which product features should be included in the comparisons. This discovery phase involves identifying the attributes that may influence how customers make decisions, such as price, delivery time, durability, or brand reputation.
Selecting the right features at the discovery stage can significantly affect the quality and usefulness of your conjoint analysis results. If the attributes included in the study don’t reflect what actually influences customer decisions, the findings may lead to misleading importance scores and ultimately poor product or pricing decisions.
This is where preliminary research with open-ended questions can become especially valuable.
By analyzing reviews, open-ended survey responses, support tickets, or product feedback, teams can uncover recurring themes that point to what customers truly care about before designing a conjoint study.
AI text analysis tools like Blix can help researchers organize large volumes of qualitative feedback into themes and quantify them in minutes, making it easy to identify which attributes should be tested in the first place.
Choice-based conjoint asks respondents to choose between multiple product options that each include different combinations of features. Instead of rating products individually, they must pick the one they would most likely buy.
For example, respondents might be shown two or three laptop profiles, each pairing a different price with a different battery life and warranty, and asked: "Which laptop would you choose?"
In each round, respondents select the option they prefer. This process is repeated with different combinations of features to understand which attributes influence their decisions the most.
Use when: Simulating real purchase decisions where customers must choose between competing products.
Full-profile conjoint asks respondents to evaluate one complete product profile at a time. Each product includes several features, such as price, warranty, or delivery time, and respondents are asked to rate or rank how appealing each option is on its own.
For instance, a respondent might be shown a single winter jacket profile, with its price, warranty, and shipping time listed, and asked: "How likely would you be to purchase this winter jacket?"
Respondents might rate this option a 7 out of 10, then evaluate another jacket profile separately, such as one that costs more but includes faster shipping or a longer warranty. This method allows researchers to understand how different combinations of features affect overall perception, even when products are not being compared side by side.
Use when: You want to understand how several features interact together to influence perception or appeal.
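With full-profile ratings, a rough first pass is to average the scores each attribute level received across all the profiles it appeared in. A minimal sketch with hypothetical jacket data (real studies would fit a regression-based part-worth model instead):

```python
from collections import defaultdict

def level_averages(ratings):
    """Average appeal score per attribute level across rated profiles.

    ratings: list of (profile, score) pairs, where profile maps
    attribute -> level and score is the respondent's appeal rating.
    """
    scores = defaultdict(list)
    for profile, score in ratings:
        for attr, level in profile.items():
            scores[(attr, level)].append(score)
    return {key: sum(vals) / len(vals) for key, vals in scores.items()}

# Hypothetical winter-jacket profiles rated on a 10-point scale.
ratings = [
    ({"price": "$120", "shipping": "5-day"}, 7),
    ({"price": "$160", "shipping": "2-day"}, 6),
    ({"price": "$120", "shipping": "2-day"}, 9),
    ({"price": "$160", "shipping": "5-day"}, 4),
]
averages = level_averages(ratings)
print(averages)
```

In this toy data the cheaper price averages 8.0 versus 5.0 for the expensive one, while faster shipping averages 7.5 versus 5.5, suggesting price moved the ratings more.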
Adaptive conjoint analysis adjusts the questions respondents see based on how they answered previous ones. As the survey progresses, it focuses more on the features that appear to matter most to each individual, rather than asking them to evaluate every possible combination.
For example, let’s say you're customizing a new car. In the first round, you might be asked to choose between a lower-priced base model and a higher-priced model with luxury upgrades.
If you consistently choose lower-priced options in early questions, the survey may begin to show you more comparisons involving price versus other features, like fuel efficiency or warranty coverage, and fewer questions about luxury upgrades. This helps narrow down what influences your decisions without requiring you to evaluate dozens of feature combinations.
Use when: Your product or service includes many attributes and you want to avoid overwhelming respondents with too many choices at once.
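The adaptive idea, asking next about whatever is least resolved, can be illustrated with a deliberately simplified selection rule. Commercial adaptive engines use more sophisticated utility-balance criteria, and the win rates below are hypothetical:

```python
def next_attribute_to_probe(win_rates):
    """Pick the attribute whose preference is least resolved so far.

    win_rates: attribute -> share of rounds in which the respondent chose
    the option that was better on that attribute. Rates near 0 or 1 are
    already settled; rates near 0.5 are still uncertain, so the adaptive
    survey asks about those attributes next.
    """
    return min(win_rates, key=lambda attr: abs(win_rates[attr] - 0.5))

# Hypothetical state after a few car-customization rounds: the price
# preference is nearly settled, so the survey probes fuel efficiency next.
probe = next_attribute_to_probe({"price": 0.95,
                                 "fuel efficiency": 0.55,
                                 "warranty": 0.80})
print(probe)
```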
Menu-based conjoint allows respondents to build their own preferred product or service by selecting from a list of available features or upgrades. Instead of choosing between predefined options, they create the combination that best fits their needs.
For example, respondents might be asked to build their own internet plan by selecting the features and upgrades they want from a menu of options.
Respondents select the features they would include in their plan, allowing researchers to see which upgrades are most commonly chosen and how different combinations affect perceived value.
Use when: Testing customizable products or services where customers can select different features or add-ons.
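Analysis of menu-based data often starts with take rates: the share of respondents who added each feature to their bundle. A minimal sketch with hypothetical internet-plan selections:

```python
from collections import Counter

def feature_take_rates(menus):
    """Share of respondents who added each feature to their own bundle.

    menus: one collection of selected features per respondent.
    """
    counts = Counter(feature for selected in menus for feature in set(selected))
    return {feature: count / len(menus) for feature, count in counts.items()}

# Hypothetical build-your-own internet plans from four respondents.
menus = [
    {"faster speed", "wifi router"},
    {"faster speed"},
    {"wifi router", "static IP"},
    {"faster speed", "wifi router"},
]
rates = feature_take_rates(menus)
print(rates)
```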
Self-explicated conjoint asks respondents to evaluate product features one at a time instead of choosing between complete product options. Rather than comparing full bundles of features, respondents indicate which features they prefer, how much they prefer certain variations, and how important each feature is to their overall decision.
For example, think about shopping for a new mattress.
First, you might be asked which firmness levels you would consider, such as soft, medium, or firm.
Then, you would select the firmness you prefer most and least.
Next, you might rate how desirable each remaining firmness level is compared to your top choice.
Finally, you would be asked to distribute 100 points across different features, such as firmness, price, and warranty, based on how important they are to your purchase decision.
This helps researchers understand which features matter most and which feature levels are preferred, without requiring respondents to compare full product combinations.
Use when: You need quick directional insights or are conducting early-stage research.
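Scoring in the self-explicated model is typically a weighted sum: each attribute's importance points multiplied by the desirability of the level the product offers. A minimal sketch with hypothetical mattress numbers:

```python
def self_explicated_utility(importance, desirability, product):
    """Importance-weighted score for one product configuration.

    importance: attribute -> points from the 100-point allocation task
    desirability: (attribute, level) -> 0-1 rating relative to the
        respondent's most preferred level of that attribute
    product: attribute -> level
    """
    return sum(importance[attr] * desirability[(attr, level)]
               for attr, level in product.items())

# Hypothetical mattress respondent: firmness matters more than price.
importance = {"firmness": 60, "price": 40}
desirability = {("firmness", "medium"): 1.0, ("firmness", "firm"): 0.5,
                ("price", "$700"): 1.0, ("price", "$1000"): 0.2}

# Medium mattress at the higher price: 60*1.0 + 40*0.2
score = self_explicated_utility(importance, desirability,
                                {"firmness": "medium", "price": "$1000"})
print(score)
```

Because each attribute is judged in isolation, this method cannot capture interactions between features, which is why it is best suited to quick directional reads.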
MaxDiff (Best–Worst Scaling) asks respondents to select the most important and least important feature from a small set of options. Instead of rating each feature individually, they must make trade-offs by choosing what matters most, and what matters least, in each round.
For example, respondents might be shown a short list of gym membership features and asked: "Which of these matters most to you, and which matters least?"
Respondents repeat this process across multiple sets with different combinations of features. This helps researchers create a clear ranking of which features consistently matter more than others.
Use when: You want to prioritize features, benefits, or messages before building full product or pricing combinations.
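The simplest MaxDiff analysis is counting: a feature's score is the number of times it was picked as best minus the times it was picked as worst, divided by how often it was shown. A sketch with hypothetical gym-membership rounds (production studies usually also fit a logit-based model):

```python
from collections import defaultdict

def maxdiff_scores(tasks):
    """Counting scores: (times best - times worst) / times shown.

    tasks: list of (shown_features, best_pick, worst_pick) tuples.
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for features, best_pick, worst_pick in tasks:
        for feature in features:
            shown[feature] += 1
        best[best_pick] += 1
        worst[worst_pick] += 1
    return {f: (best[f] - worst[f]) / shown[f] for f in shown}

# Hypothetical gym-membership rounds from one respondent.
tasks = [
    (["24/7 access", "group classes", "sauna"], "24/7 access", "sauna"),
    (["24/7 access", "pool", "sauna"], "24/7 access", "sauna"),
    (["pool", "group classes", "sauna"], "pool", "sauna"),
]
scores = maxdiff_scores(tasks)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```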
The two-attribute tradeoff asks respondents to compare two features at a time to indicate which they prefer. By isolating just two attributes in each comparison, this approach focuses on how people make decisions when weighing one benefit against another.
To illustrate, when booking a hotel, respondents might be shown two features at a time, such as price and location, and asked which they would prefer. In another round, they might compare a different pair of features.
This allows researchers to understand how customers prioritize specific features when forced to choose between them.
Use when: You want to understand specific trade-offs, such as whether customers value faster delivery more than a lower price.
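Two-attribute trade-off data can be summarized with a simple win tally per feature; the hotel comparisons below are hypothetical, and a fuller treatment would fit a paired-comparison model such as Bradley-Terry:

```python
from collections import Counter

def pairwise_win_rates(comparisons):
    """Win rate per feature from two-attribute trade-off questions.

    comparisons: list of (winner, loser) feature pairs.
    """
    wins = Counter(winner for winner, _ in comparisons)
    appearances = Counter(f for pair in comparisons for f in pair)
    return {f: wins[f] / appearances[f] for f in appearances}

# Hypothetical hotel trade-offs chosen by one respondent.
comparisons = [
    ("free breakfast", "late checkout"),
    ("city-center location", "free breakfast"),
    ("city-center location", "late checkout"),
]
prefs = pairwise_win_rates(comparisons)
print(prefs)
```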
After respondents complete a conjoint survey, statistical modeling methods are used to analyze their choices and estimate how much value they place on each feature.
These methods examine patterns in the trade-offs respondents made across multiple comparison tasks to determine which attributes influenced their decisions and by how much.
Hierarchical Bayes (HB) estimates preferences at the individual respondent level by analyzing how each person answered multiple choice tasks throughout the survey.
For example, if two respondents both completed a laptop conjoint study, Respondent A might consistently choose the cheaper option while Respondent B consistently picks the model with the longer battery life.
HB modeling can estimate that Respondent A places more value on price, while Respondent B places more value on battery performance, even if their individual choices were occasionally inconsistent.
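True HB estimation requires Bayesian MCMC machinery and is normally run in specialized software. Purely to illustrate the idea of individual-level output, here is a much simpler per-respondent calculation on hypothetical laptop tasks; this is explicitly not HB itself:

```python
def cheaper_pick_rates(answers):
    """Per-respondent share of tasks where the cheaper option was chosen.

    answers: respondent_id -> list of (prices, chosen_index) tasks.
    NOTE: this is NOT Hierarchical Bayes, which pools information across
    respondents via Bayesian estimation; it only illustrates the idea of
    producing a separate estimate for every individual.
    """
    return {rid: sum(1 for prices, chosen in tasks
                     if prices[chosen] == min(prices)) / len(tasks)
            for rid, tasks in answers.items()}

# Hypothetical laptop tasks: A always picks the cheaper laptop, B rarely does.
answers = {
    "A": [([500, 800], 0), ([600, 700], 0), ([650, 900], 0)],
    "B": [([500, 800], 1), ([600, 700], 1), ([650, 900], 0)],
}
rates_by_person = cheaper_pick_rates(answers)
print(rates_by_person)
```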
Multinomial logit (MNL) estimates preferences at the overall group level by looking at trends across all respondents rather than individual patterns.
For example, in a hotel booking conjoint study, the analysis might show that price influenced choices more than any other attribute across the full sample.
This model gives an overall view of what features tend to matter most across the entire sample.
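Once an MNL model has produced utilities, each option's predicted choice share follows the logit formula P(i) = exp(u_i) / Σ_j exp(u_j). A minimal sketch with hypothetical utilities for three hotel offers:

```python
import math

def logit_shares(utilities):
    """Multinomial-logit choice probabilities from option utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical total utilities for three hotel offers, each the sum of
# its estimated attribute part-worths.
shares = logit_shares([1.2, 0.4, -0.5])
print([round(s, 3) for s in shares])
```

The same formula powers market simulators: change one attribute, recompute utilities, and see how predicted shares shift.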
Latent class analysis identifies groups of respondents with similar preference patterns based on how they made trade-offs in the survey.
For instance, in a smartphone study, the model might reveal one segment of price-sensitive buyers and another segment that prioritizes performance regardless of price.
This model helps researchers segment customers based on shared decision-making patterns rather than treating all respondents as one average group.
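A real latent class model assigns respondents to segments probabilistically while estimating each segment's preferences. As a simplified illustration of the segmentation idea only, respondents can be hard-grouped by the attribute they weight most (the importance scores below are hypothetical):

```python
def segment_by_top_attribute(importances):
    """Hard-assign each respondent to the attribute they weight most.

    importances: respondent_id -> {attribute: importance score}.
    A real latent class model estimates soft, probabilistic segments;
    this hard grouping only illustrates the output shape.
    """
    segments = {}
    for rid, weights in importances.items():
        top = max(weights, key=weights.get)
        segments.setdefault(top, []).append(rid)
    return segments

# Hypothetical smartphone importance scores for three respondents.
importances = {
    "r1": {"price": 0.7, "performance": 0.3},
    "r2": {"price": 0.2, "performance": 0.8},
    "r3": {"price": 0.6, "performance": 0.4},
}
segments = segment_by_top_attribute(importances)
print(segments)
```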
Conjoint analysis can be applied across various business functions, with common applications including product design, pricing strategy, marketing, and service offerings.
While conjoint analysis can provide valuable insights into customer decision-making, there are several challenges to consider, including complex study design, respondent fatigue when too many choice tasks are shown, and the statistical expertise required to analyze the results.
Because of these challenges, many research teams rely on specialized platforms to help design surveys, analyze results, and interpret findings for conjoint studies.
The following tools are commonly used for conducting conjoint analysis in market research.
By analyzing open-ended customer responses, researchers can identify the features that should be tested and better understand the trade-offs customers make during the study.
Before designing a conjoint study, researchers first need to decide which product features should be tested. One of the best ways to identify these features is by analyzing open-ended feedback collected during earlier research.
Open-text responses often reveal what customers actually care about. Instead of guessing which attributes to include in the conjoint design, researchers can use this feedback to surface the themes that matter most.
Blix can analyze open-ended responses before you build a conjoint study, surfacing the themes that should inform which attributes you test.
Open-text responses can also help explain the results of your conjoint study. It’s useful to know that customers prefer a lower price over longer battery life, but it’s even more valuable to understand why.
Maybe most of them spend their day in the office and always have access to a charger. Maybe they already rely on portable chargers. Or maybe the price difference simply felt too high.
A simple approach is to include a follow-up question after each conjoint task, such as:
"Can you briefly explain why you chose that option?"
Once you collect these responses, a text analysis tool like Blix can analyze the feedback at scale and help you understand the “why” behind your conjoint analysis results.
Conjoint analysis helps teams understand how customers make trade-offs between product or service features in realistic decision-making scenarios.
Choosing the right conjoint method depends on your research goals, such as pricing optimization, feature prioritization, or product design.
However, while conjoint analysis shows what respondents choose, it doesn’t always explain why they made those decisions. Open-ended survey responses can help identify which features should be included in the study from the start, and provide additional context into the motivations or concerns that influenced respondent selections. Tools like Blix can analyze this feedback to support both study design and interpretation of conjoint results.
The four main types are choice-based, full-profile, adaptive, and menu-based conjoint analysis.
Most survey analysis focuses on descriptive analysis, with diagnostic analysis used to explain key drivers.
Common survey methods include online surveys, phone interviews, in-person interviews, and mail questionnaires.
Online surveys are the most popular method used today due to their speed, reach, and ease of analysis.
Manual verbatim coding becomes inefficient and inconsistent as response volume grows. Software-based analysis platforms, such as Blix, support scalable qualitative analysis by automatically organizing, categorizing, and summarizing text responses across large datasets.