Forget home runs and double plays: the hottest competition this weekend involved nearest-neighbor weights and root mean squared error, with two teams vying for the Netflix Prize in its final hours.
Almost three years ago, Netflix offered $1 million to whoever could improve the accuracy of its movie recommendation system by 10 percent or more. Roughly 5500 teams participated, but none succeeded until a month ago, when four teams banded together under the name BellKor’s Pragmatic Chaos. Their improvement of 10.05 percent set off a 30-day window for other competitors to do better.
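That 10 percent figure comes from the contest's scoring metric, root mean squared error (RMSE) on a held-out set of ratings: a submission "improves by X percent" when its RMSE is X percent lower than the baseline system's. A minimal sketch of that arithmetic, using the widely reported RMSE of Netflix's Cinematch baseline on the contest's quiz set (0.9514; treat the exact figure as an assumption here):

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual ratings."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

def improvement(candidate_rmse, baseline_rmse):
    """Percent improvement of a candidate system over the baseline."""
    return 100.0 * (baseline_rmse - candidate_rmse) / baseline_rmse

# With Cinematch's reported quiz-set RMSE of 0.9514, the 10 percent
# target corresponds to an RMSE of roughly 0.8563:
print(improvement(0.8563, 0.9514))  # just about 10 percent
```

So the winning teams weren't guessing which movies you'd like in some fuzzy sense; they were shaving hundredths of a point off a single error number.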
On the final day, another team, The Ensemble, posted a better score, and the two sides traded the lead until The Ensemble finished with a 10.10 percent improvement, edging out BellKor's 10.09 with just four minutes until the deadline.
Either way, a 10 percent improvement over the old recommendation system is significant, but will it matter in practice? BellKor's system is designed to predict how you'll rate a recommended movie, but it's hard to tell from all the jargon how it will improve discovery and push users into new waters. Whenever Netflix implements the new system, I hope the company will be candid about its effects.
One thing’s certain: Incentive-based crowdsourcing worked out swimmingly for Netflix, so maybe the company should consider farming out some more of its dirty work for a prize. Let’s pool some crack Web designers to improve the Netflix site’s browsing experience, or bring some middle managers together to get the postal service delivering movies faster. Even better, let’s offer some lawyers and businessmen a hefty sum to work on securing more movies for Instant Watch.
If none of this works out for the better, the publicity alone will foot the bill.