A response to “A Guide to Recommender Systems”
Richard MacManus published a post on ReadWriteWeb on Monday (which was re-published in the NYTimes) titled “A Guide to Recommender Systems”; this post is a reply.
I’m excited to see a conversation around recommender systems that delves deeper into the different approaches to recommending products. Some folks hold the misconception that all recommendation engines are created equal; they most definitely are not. Although Richard’s analysis may have oversimplified the problem by identifying only four approaches to recommendations (personalized, social, item, and combination), he does affirm the critical point that there are very different ways to derive high-quality recommendations.
I’m also happy to see Amazon highlighted for its leadership (“King of Recommendations”), and that the post attributes this to Amazon’s combination of technologies. As the former head of Personalization R&D at Amazon, I got the opportunity to work with the world-class team that spearheaded the personalized recommendation space and outright rejected the notion that personalization is commoditized. In fact, Amazon made it a core competency, investing millions of dollars to build a broad diversity of recommendation types. That investment is paying off. One can argue about many things, but in looking at their earnings, it’s hard to argue with Amazon’s results.
As Richard further notes, most recommendation vendors today focus on one or two specific methods of recommendation rather than follow Amazon’s example. Why is that? Especially given Amazon’s success, this seems quite surprising! The reason is simple: building a high-quality recommendation engine is rocket science. (One of my team members at Amazon actually got his degree in rocket science ☺; shout out to Mr. Rauser.) As VP of Software and Data Mining at Overstock.com, where I worked after Amazon, I got to see firsthand the results of some of these “uni-dimensional” approaches. Needless to say, that experience opened my eyes to the opportunity; hence, I founded richrelevance.
Furthermore, recommendations are not one size fits all (just as shopping is not, and human beings certainly are not!). Amazon uses a combination of approaches different from those that work at Overstock; Google, as discussed in Richard’s post, uses everything from location to search history to personalize search results. Net-net, no single-algorithm approach can hope to keep up with today’s ever-changing consumer mindset. That’s why our engineering team embraced these lessons in building the next generation of recommendations. Instead of forcing retailers and consumers into a single bucket, we built a system that adapts to the retailer and to each customer in real time. To keep pace with ever-changing consumer behavior, we implemented an adaptive type of artificial intelligence called Bayesian Ensemble Learning.
Ensemble Learning has been around since 1979 but until now has not been commercially applied. We analyze vast amounts of shopping behavior, but we keep each part of the analysis separate, which prevents us from creating an unmanageable, tangled mess of data. Most recommendation engines try to incorporate different types of recommendations by packing all the data into one large, overly complicated algorithm: attempting to combine social recommendations with personal, or item with social, or all of them at once. As the saying goes, “garbage in, garbage out.” Ensemble Learning instead relies on multiple algorithms to do each part of the analysis well, methodically combining those analyses only at the final step. The result is a high-quality set of recommendations that are valuable and sensible to the end consumer.
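To make the idea concrete, here is a minimal sketch of ensemble-style blending: each specialized recommender scores items in isolation, and the scores are combined only at the final step with a weighted sum. The model names, items, scores, and weights are all illustrative assumptions, not a description of any actual system.

```python
# Sketch: keep each analysis separate, combine only at the end.
def blend_scores(scores_by_model, weights):
    """Weighted sum of per-model item scores (one dict per model)."""
    combined = {}
    for model, item_scores in scores_by_model.items():
        w = weights.get(model, 0.0)
        for item, score in item_scores.items():
            combined[item] = combined.get(item, 0.0) + w * score
    return combined

# Each model analyzes one signal in isolation (numbers are made up).
scores = {
    "item_similarity":  {"shoes": 0.9, "socks": 0.4},
    "personal_history": {"shoes": 0.2, "socks": 0.8},
    "social":           {"shoes": 0.5, "socks": 0.5},
}
weights = {"item_similarity": 0.5, "personal_history": 0.3, "social": 0.2}

ranked = sorted(blend_scores(scores, weights).items(),
                key=lambda kv: kv[1], reverse=True)
```

Because each model stays independent, a bad signal in one (say, sparse social data) degrades only its own scores rather than poisoning the whole pipeline.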
Furthermore, Ensemble Learning can effectively incorporate user feedback: which types of recommendations entice incremental behavior and which do not. While this may seem like an easy problem to solve, it is incredibly difficult to do well; teams at Amazon and Overstock have struggled with it for many years.
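One simple way to picture feedback-driven weighting (a hypothetical sketch, not RichRelevance's actual method) is to track each recommender's click-through success with a Beta posterior and let the blending weights follow the posterior means, so models whose recommendations entice clicks gain influence over time:

```python
# Sketch: Bayesian feedback weighting via Beta-Bernoulli updates.
class FeedbackWeights:
    def __init__(self, models):
        # Beta(1, 1) prior: no opinion until feedback arrives.
        self.alpha = {m: 1.0 for m in models}
        self.beta = {m: 1.0 for m in models}

    def record(self, model, clicked):
        """Update the model's posterior with one click/no-click event."""
        if clicked:
            self.alpha[model] += 1
        else:
            self.beta[model] += 1

    def weights(self):
        """Normalize posterior means into blending weights."""
        means = {m: self.alpha[m] / (self.alpha[m] + self.beta[m])
                 for m in self.alpha}
        total = sum(means.values())
        return {m: v / total for m, v in means.items()}
```

The hard part in practice, as noted above, is attributing *incremental* behavior: a click a shopper would have made anyway should not be counted as a win for the recommender.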
On a final note, one thing Richard did not bring to the table in this discussion is the concept of merchandising and control. What if the retailer cares more about margin than revenue? Or values repeat purchase above all else? What if inventory is running low on a particularly popular (and so likely often recommended) pair of shoes? How do we handle the transition of seasonal goods, or liquidations? No matter how perfect its algorithms, without merchandiser and marketing influence a recommendation engine is useless; worse, it may do more damage than good. Imagine looking at a Playboy magazine while shopping for baby books. Trust me, it has happened (not with us!) and it’s not pretty. (Yes, our solution empowers marketers and merchandisers with unprecedented levels of control. Our customers like our controls so much, we even patented them!)
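A merchandising layer can be thought of as business rules applied on top of raw recommendation scores before anything is shown. The sketch below is purely illustrative (rule names, thresholds, and boost factors are invented for this example, not a description of any vendor's product):

```python
# Sketch: merchandiser rules re-rank or filter raw recommendations.
def apply_rules(candidates, rules):
    """candidates: dicts with name, score, margin, inventory, category."""
    results = []
    for c in candidates:
        if c["category"] in rules["blocked_categories"]:
            continue  # e.g., never show adult items next to baby books
        if c["inventory"] < rules["min_inventory"]:
            continue  # don't promote items that are about to sell out
        # Retailer who cares about margin boosts high-margin items.
        score = c["score"] * (1 + rules["margin_boost"] * c["margin"])
        results.append((c["name"], score))
    return sorted(results, key=lambda kv: kv[1], reverse=True)

candidates = [
    {"name": "shoes", "score": 0.9, "margin": 0.1,
     "inventory": 2, "category": "footwear"},
    {"name": "boots", "score": 0.7, "margin": 0.5,
     "inventory": 50, "category": "footwear"},
    {"name": "magazine", "score": 0.8, "margin": 0.3,
     "inventory": 100, "category": "adult"},
]
rules = {"blocked_categories": {"adult"},
         "min_inventory": 5, "margin_boost": 1.0}

ranked = apply_rules(candidates, rules)
```

Here the popular low-stock shoes and the category-blocked magazine are filtered out, and the high-margin boots win despite a lower raw score.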
I very much look forward to the evolving conversation!
David Selinger
CEO, RichRelevance
Update 1/29/2009: It looks like Richard posted another set of comments highlighting some more of the challenges—yes, it is Rocket Science! http://www.readwriteweb.com/archives/5_problems_of_recommender_systems.php