Speaker: John Oppenhimer, Market Researcher & Community Manager, QuestionPro
Top questions & answers from this webinar
Q: How do you determine how many times a task should be repeated for Max-Diff and Conjoint? What about the number of items per task?
Answer: For MaxDiff exercises, it depends on the number of items that you want to compare against each other. In general, the best practice is to make sure that each item is presented and compared against other items at least three times. This keeps the ranking data from being skewed and gives the system enough comparisons to rank each item relative to the rest.
For Conjoint exercises, if the tested concepts are composed of 4 or more attributes, then you're better off comparing just 2 concepts per task, and having no more than 6-8 tasks in the exercise. If concepts consist of only 2-3 attributes, then you have the flexibility to compare 3 or 4 concepts per task while still keeping a maximum of 6-8 tasks, or you can continue to compare just 2 concepts per task but run as many as 10-15 tasks per exercise. You want to find the balance between having enough tasks for the system to break down the comparisons and not taxing your respondents to the point where they 'tire out' and stop providing you with the highest-quality insights.
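The "each item at least three times" rule for MaxDiff implies a minimum number of tasks for a given design. As a rough sketch (assuming a balanced design that spreads items evenly across tasks; the function name and defaults are illustrative, not part of the QuestionPro platform):

```python
import math

def maxdiff_task_count(n_items: int, items_per_task: int, min_exposures: int = 3) -> int:
    """Minimum number of tasks needed so that each item is shown at least
    `min_exposures` times, assuming items are spread evenly across tasks."""
    return math.ceil(min_exposures * n_items / items_per_task)

# e.g. 12 items shown 4 at a time, each seen at least 3 times:
print(maxdiff_task_count(12, 4))  # 3 * 12 / 4 = 9 tasks
```

This is only a lower bound; a real balanced design also tries to equalize how often each *pair* of items appears together.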
Q: Is the anchored MaxDiff feature up yet? How will it work?
Answer: Anchored MaxDiff is now available in the surveys platform. It works like this: once respondents finish a MaxDiff exercise, they are presented with all (or a selected number) of the items and asked, in a multi-select question, whether each item is 'appealing' or a 'must have' (depending on what the researcher wants to learn; either way, it produces binary data). This additional step supplements the main MaxDiff exercise by revealing the absolute importance of the items: MaxDiff alone measures only relative preference, giving you a ranking hierarchy of the items without telling you whether they are 'good' or 'bad' in absolute terms.
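As a minimal sketch of how the anchor question's binary data can be layered on top of the relative MaxDiff ranking: the item names, shares, and the 50% 'must have' cut-off below are all hypothetical assumptions, not QuestionPro's documented method:

```python
def anchor_maxdiff(relative_scores, must_have_share, threshold=0.5):
    """Attach an absolute 'must have' / 'nice to have' verdict (from the
    binary anchor question) to each item in the relative MaxDiff ranking."""
    return {
        item: ("must have" if must_have_share[item] >= threshold else "nice to have")
        for item in relative_scores
    }

# Hypothetical results: relative preference shares plus the fraction of
# respondents who flagged each item as a 'must have' in the anchor question.
scores = {"Item A": 32.0, "Item B": 25.0, "Item C": 18.0, "Item D": 15.0, "Item E": 10.0}
anchors = {"Item A": 0.81, "Item B": 0.74, "Item C": 0.35, "Item D": 0.28, "Item E": 0.12}

for item, verdict in anchor_maxdiff(scores, anchors).items():
    print(f"{item}: {scores[item]:.1f}% share, {verdict}")
```

Note how "Item C" can rank third on relative preference yet still fail the anchor test: that is exactly the absolute-vs-relative distinction the anchored exercise is meant to surface.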
Q: For the industry I am in, it is very challenging to obtain a large data set (limited respondents). Is there an analysis comparable to Conjoint for a limited set of respondents?
Answer: While it is better to run a Conjoint exercise with at least 300 respondents, you can try running one with fewer. As a best practice, make sure to run enough tasks so that enough attribute combinations can be compared. You'll also want to supplement the exercise with general survey questions (e.g., "Among the following options under Attribute X, please select all / the top 3 / the top 5 that you find appealing"). With analytics from both the Conjoint exercise and the other survey questions about these attribute items, your story will come full circle on what your audience (respondents) values in the products/services you offer.
Q: Can you please explain how the denominator is calculated for the MaxDiff output that you showed?
Answer: Based on the percentages at which each item was selected as most-preferred and least-preferred, the Share of Preference scores simulate, out of 100%, the proportion of the audience that would pick each item as their 'most preferred' if all item options were presented to every respondent at once. The denominator is the total across all items, which is what forces the scores to sum to 100%.
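The exact computation depends on how the platform estimates per-item utilities from the best/worst choices, but a common logit-style normalization illustrates where the denominator comes from. This is a sketch with made-up utilities, an assumption rather than QuestionPro's documented formula:

```python
import math

def share_of_preference(utilities):
    """Convert per-item utilities into shares that sum to 100%.
    The denominator is the sum of exp(utility) over ALL items, which is
    what makes the scores behave like a 100% 'pie' split."""
    expu = {item: math.exp(u) for item, u in utilities.items()}
    total = sum(expu.values())  # the denominator
    return {item: 100 * v / total for item, v in expu.items()}

# Hypothetical utilities estimated from most/least-preferred selections:
shares = share_of_preference({"A": 1.2, "B": 0.4, "C": -0.8})
print({k: round(v, 1) for k, v in shares.items()})
```

Because every item's exponentiated utility appears in the same denominator, raising one item's utility necessarily shrinks every other item's share, which is the "out of 100%" behavior described above.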
Q: How is this better than asking for a ranking (assuming the list of attributes isn't long)?
Answer: These exercises help you understand relative preference by having items compared against each other. While ranking questions do ask respondents to order items from most to least preferred, the data is not as interpretable, especially when results are polarizing, and you cannot see how many people selected each item as their best/worst (it ends up being computed as a statistical average). MaxDiff also simulates your target audience's decision-making when they shop in-store or online and have to compare and choose their most/least preferred options. For good practice, though, you'll want to supplement these exercises with basic survey questions to measure whether an item is popular in absolute terms, since these exercises measure only relative preference.
Q: When you want to get consumer preferences, which one do you use?
Answer: If you are just comparing like items against each other and no other factors come into play, you'll want to use MaxDiff. For example, say we're opening a small ice-cream parlor and, out of 30 flavors, need to narrow the menu down to 10 while also maximizing profit. MaxDiff will simulate purchase decision-making, tell you which ice cream flavors are the most popular, and minimize disappointment among your customers when they see the options they can choose from.
If you're trying to determine what products/services to sell in your next product line, but have multiple factors to consider (price, size of product, special offer that each product would provide), then you'll want to run a Conjoint exercise to see what combination of attributes would be the most appealing to your target audience.