Conjoint Analysis - Best Practice Implementation FAQ

Please be aware that the opinions and suggestions given here are exactly that -- our opinions and suggestions. Conjoint analysis is a fairly complicated model, and while there is no single correct approach, this FAQ should be read as broad guidance that tries to cover as many scenarios as possible. Many of the suggestions and methods here are OUR opinions and may contradict other research conducted for specific scenarios.

How do I determine how many concepts I should display per screen?

In general, for internet-based (self-administered) surveys, asking users to choose between more than 3 or 4 concepts is not reasonable. The cognitive load placed on users far outweighs the additional information gained for the utility calculation.

The number of concepts you choose also depends on the number of attributes being measured. For example, if you are measuring more than 3 attributes, users have to understand and comprehend each product (which is comprised of 3 or more attributes) and then make a decision between the different concepts.

In general, if you have 3 or fewer attributes, you can go with 3 concepts per task. However, if you have more than 3 attributes, we would suggest that you do not put more than 2 concepts in each task.

How are the tasks (profiles) created and displayed? Are they pre-defined, or does the system create concepts and present them to the user?

The concepts are randomly created and displayed to the user. You can use the "Prohibited Pairs" feature to make sure certain pairs of levels are never part of the same concept. But the model generally is to keep the creation of the concepts random, so as to explore as varied a set of possibilities as possible and gain insight into individual utilities.

The algorithm for computing a profile is as follows:

The system will choose random levels for each attribute to create the first concept.

Subsequent concepts (for each task) will be created such that the levels are not repeated. In cases where this is not possible, the levels will be repeated.

Under no circumstances will the system display two identical profiles.

The system will also take the user-defined "Prohibited Pairs" into account to make sure prohibited level combinations never appear in the profiles that are created.
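The steps above can be sketched in code. This is a minimal illustration, not the actual system's implementation: the attribute names, levels, prohibited pair, and the 100-attempt retry limit are all hypothetical choices for the sketch.

```python
import random

# Hypothetical attribute/level setup -- illustrative only.
attributes = {
    "Brand": ["A", "B", "C"],
    "Size": ["Small", "Large"],
    "Price": ["$1.99", "$2.99", "$3.99"],
}
# Prohibited pairs: these level combinations never share a concept.
prohibited = {("Brand", "A", "Price", "$1.99")}

def violates(profile):
    """True when the profile contains a prohibited level pair."""
    for attr1, lvl1, attr2, lvl2 in prohibited:
        if profile.get(attr1) == lvl1 and profile.get(attr2) == lvl2:
            return True
    return False

def random_profile():
    """Step 1: pick a random level for each attribute, honoring prohibitions."""
    while True:
        p = {a: random.choice(levels) for a, levels in attributes.items()}
        if not violates(p):
            return p

def build_task(n_concepts):
    """Steps 2-4: later concepts avoid levels already shown in this task,
    never duplicate an earlier concept, and never break a prohibited pair."""
    task = [random_profile()]
    while len(task) < n_concepts:
        for _ in range(100):  # try to avoid repeating levels within the task
            candidate = {}
            for attr, levels in attributes.items():
                used = {c[attr] for c in task}
                unused = [lvl for lvl in levels if lvl not in used]
                candidate[attr] = random.choice(unused or levels)
            if not violates(candidate) and candidate not in task:
                break
        else:
            # Not achievable without a prohibited pair: repeat levels instead.
            candidate = random_profile()
            if candidate in task:
                continue
        task.append(candidate)
    return task
```

A call like `build_task(3)` returns three distinct, non-prohibited concepts for one choice task.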

Can there be an over-reliance on a particular attribute like "Price" or "Cost?"

Yes. In almost all conjoint studies, Price or Cost will be the primary determining factor. If previous conjoint studies have shown an over-reliance on Price/Cost factors, it may also make sense to conduct a study WITHOUT price as one of the components as a test run. While this approach will not work for price-sensitivity testing, it will eliminate the "Price-Fixation" attitude and give you deeper insight into the other attributes.

Another mechanism for addressing "Price-Fixation" (where users show a tendency to rely heavily on price, i.e. the cheapest price always wins) is to use price bands -- i.e. where the levels in the Cost/Price attribute are defined as ranges, e.g. ($1.99 - $2.99). This approach may help in "de-focusing" cost and allow users to pay equal attention to other factors.

How do I determine how many tasks I should have the user complete?

Our experience has shown that there is a precipitous drop-out rate after about 15 tasks. Unless there is a strong personal incentive for the end-users to complete the survey, we would suggest keeping the number of tasks under 15, especially in cases where users are volunteering to take surveys. Please keep in mind that conjoint product selection is a little more involved than simply "answering a survey question" -- users have to comprehend each of the attributes/concepts and then make a choice. This is considerably more involved than, say, choosing "Male/Female" on a gender question.

On the lower side, we would suggest 6-8 tasks as the minimum for a conjoint model with 3 attributes. The more attributes you have, the more tasks users have to complete.

It is obviously a balancing act between the number of tasks, the concepts per task, and the total number of attributes/levels that need to be displayed.

Two factors determine the overall utility:

Concepts Per Task

Total # of Tasks

The system provides the "Concept Simulator" -- with the concept simulator you can see the TOTAL number of times a particular level will be displayed (given the total number of respondents). See Concept Simulator in the References section below.
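The simulator's internals aren't documented here, but a back-of-envelope version of the number it reports -- assuming each level of an attribute is equally likely to be drawn into a concept -- is:

```python
def expected_exposures(n_respondents, tasks_per_respondent,
                       concepts_per_task, levels_in_attribute):
    """Expected number of times each level of one attribute is displayed,
    assuming levels are drawn uniformly at random into every concept."""
    total_concepts = n_respondents * tasks_per_respondent * concepts_per_task
    return total_concepts / levels_in_attribute

# e.g. 200 respondents, 10 tasks each, 3 concepts per task, a 4-level attribute:
# 200 * 10 * 3 / 4 = 1500 expected exposures per level
```

This makes the trade-off in the list above concrete: fewer tasks or fewer concepts per task directly reduces how often each level is seen, which is what limits the reliability of the utility estimates.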

What about the "None" option? How does having the "none" option affect data calculation and reliability?

If the "None" option is enabled, the utility calculation takes into account that NONE of the options was selected. The utility calculation fundamentally relies on the number of times a particular level is displayed to the user compared to the number of times that level is part of a chosen concept. If the "none" option is selected, the utilities for all the levels in the displayed concepts (none of which were selected) will go down.
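The chosen-versus-displayed logic described above can be illustrated with a simplified counting sketch. Note this is only a proxy: production conjoint tools typically estimate utilities with logit or hierarchical Bayes models rather than raw ratios, and the data layout here is hypothetical.

```python
from collections import defaultdict

def count_utilities(tasks):
    """tasks: list of (concepts, chosen_index) pairs, where concepts is a
    list of {attribute: level} dicts and chosen_index is None when the
    respondent picked the "None" option.
    Returns a times-chosen / times-shown ratio per (attribute, level)."""
    shown = defaultdict(int)
    chosen = defaultdict(int)
    for concepts, pick in tasks:
        for i, concept in enumerate(concepts):
            for attr, level in concept.items():
                shown[(attr, level)] += 1          # level was displayed
                if pick is not None and i == pick:
                    chosen[(attr, level)] += 1     # level was in the winner
    return {key: chosen[key] / shown[key] for key in shown}
```

Because a "None" answer increments every displayed level's `shown` count without incrementing any `chosen` count, it pulls down the ratio for all levels on that screen -- exactly the effect described above.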

What are the other implications of enabling the "none" option?

From a practical standpoint, in some cases where we've seen an over-emphasis on price/cost or on a single level, the None option is selected whenever the emphasized level is not present in any of the random concepts. In such cases, it may make sense to disable the "none" option and force users to choose the best option that they are presented with.

How many levels can I have within each attribute? How many attributes can I have? Any guidelines?

From a technical standpoint, the system does NOT impose any limitations. You can have unlimited attributes and unlimited levels within each attribute.

However, from a practical standpoint, it is unreasonable to have more than 4-6 attributes and about 3-4 levels per attribute. Our suggestion would be to keep the number of attributes under 5 and aim for about 3 levels for each attribute.

I have a lot of Attributes that I'd like to test out (more than 5-6) - what can I do?

As mentioned above, the system does not limit the number of attributes. However, from a practical presentation standpoint, it really does not make sense to present choices with more than 5-6 attributes because of the cognitive load involved. If you do have a case where you'd like to test 10-20 attributes, we would suggest running it as a two-part project:

Create a screening/profiling survey and use a simple "Multiple Choice (Multiple Select)" question to determine the viability of attributes -- "Pick 6 of 20" etc.

Use TURF Analysis to pick the Top 5 or 6 attributes with the highest reach.

Then, as a secondary wave, run the conjoint study on the highest-reach attributes.
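A minimal sketch of the reach calculation behind step 2, assuming TURF here simply means finding the attribute combination that covers the largest share of respondents (exhaustive search, which is fine for small k; the input format is hypothetical):

```python
from itertools import combinations

def turf_top_k(respondent_picks, k):
    """respondent_picks: list of sets, each holding the attributes one
    respondent selected in the screening survey. Returns the k-attribute
    combination with the highest reach (share of respondents covered by
    at least one of their picks) and that reach."""
    all_attrs = set().union(*respondent_picks)
    best, best_reach = None, -1.0
    for combo in combinations(sorted(all_attrs), k):
        covered = sum(1 for picks in respondent_picks if picks & set(combo))
        reach = covered / len(respondent_picks)
        if reach > best_reach:
            best, best_reach = combo, reach
    return best, best_reach
```

For example, `turf_top_k(picks, 6)` on the "Pick 6 of 20" screening data would return the 6 attributes to carry into the second-wave conjoint study.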
