Interpolate product attributes

Solution 1:

A somewhat unconventional way to get what you desire is to build a global ranking of all your products using your 10k draws.

Use each draw as a source of binary contests between the 10 products, and sum the results of these contests over all draws.

This will give you a final "leaderboard" for your 10 products. From it you have relative utility across all consumers, or you can assign each product an absolute value based on its number of wins (and, optionally, the "strength" of the alternative in each contest).
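For concreteness, here is a minimal sketch of the contest step. It assumes `draws` is a 10,000 x 10 array of simulated utilities (one row per preference draw, one column per product); the random array below is just a stand-in for your own draws.

```python
import numpy as np

rng = np.random.default_rng(0)
draws = rng.normal(size=(10_000, 10))   # stand-in for your 10k utility draws

n_products = draws.shape[1]

# Pairwise win probabilities: win_prob[i, j] = fraction of draws in which
# product i has higher utility than product j.
win_prob = np.zeros((n_products, n_products))
for i in range(n_products):
    for j in range(n_products):
        if i != j:
            win_prob[i, j] = np.mean(draws[:, i] > draws[:, j])

# Total expected wins per product, and the resulting leaderboard.
wins = win_prob.sum(axis=1)
leaderboard = np.argsort(-wins)
print(leaderboard, np.round(wins[leaderboard], 1))
```

Keeping the full `win_prob` matrix (rather than just the win counts) is handy later, when you want to weight contests for a new product.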

When you want to test a new product with a different attribute profile, find its sparse(st) representation as a weighted vector sum of your existing sample products; you can then run the contest again with the win probabilities weighted by the contribution weights of the component attribute vectors.

The advantage of this is that simulating the contest is efficient, and the global ranking, combined with representing new products as sparse vector sums of existing data, lends itself to interpretation, which is useful when you're weighing strategies to beat a competitor's product attributes.

To find a sparse (descriptive) representation of your new product, solve Ax = y, where y is the new product's attribute vector, A is the matrix whose columns are the attribute vectors of your existing products, and x is the vector of contribution weights from those existing products. You want to minimize the number of non-zero entries in x. Check out D. L. Donoho's work on the fast homotopy method (it behaves a bit like a power iteration) for solving the l0/l1 minimization quickly to find sparse representations.
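As a rough illustration (this is not the homotopy algorithm itself), here is a sketch that uses scikit-learn's Lasso as an off-the-shelf l1 proxy to recover sparse contribution weights; `products` and `new_product` are made-up stand-ins for your own attribute data.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
products = rng.normal(size=(10, 6))      # rows: existing products' attribute vectors
new_product = 0.6 * products[2] + 0.4 * products[7]   # toy new attribute profile

# Solve new_product ≈ products.T @ x with an l1 penalty to push x sparse.
# Each column of products.T is one existing product, so x holds the
# contribution weight of each existing product.
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10_000)
lasso.fit(products.T, new_product)
weights = lasso.coef_

print(np.round(weights, 3))   # in this toy case, most weight lands on products 2 and 7
```

The l1 penalty is a standard convex surrogate for the l0 (count of non-zeros) objective, which is why a generic Lasso solver gives you a usable sparse weight vector even without a specialized homotopy implementation.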

When you have this (or a weighted average of sparse representations) you can reason usefully about the performance of your new product based on the model set up by your existing preference draws.
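Putting the two pieces together, one simple heuristic reading of the weighted contest is to average the component products' win probabilities by their contribution weights. This sketch reuses `win_prob` from the leaderboard snippet and `weights` from the Lasso snippet.

```python
import numpy as np

# Normalize the sparse contribution weights so they sum to one.
w = np.abs(weights) / np.abs(weights).sum()

# Approximate the new product's chance of beating each existing product j
# as the contribution-weighted average of its components' win probabilities.
new_vs_each = w @ win_prob
new_total_wins = new_vs_each.sum()

print(np.round(new_vs_each, 3), round(new_total_wins, 2))
```

This places the new product on the same leaderboard scale as the existing ten, so you can see roughly where it would rank before running any new draws.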

The advantage of sparseness as a representation is that it supports this kind of reasoning. Also, the more features or products you have, the better, since a new product is then more likely to be sparsely representable by them. So you can scale to big matrices and still get really useful results with a quick algorithm.