Edwin Chen's Blog

Improving Twitter search with real-time human computation

(This is a post from the Twitter Engineering Blog that I wrote with Alpa Jain.)

One of the magical things about Twitter is that it opens a window to the world in real-time. An event happens, and just seconds later, it’s shared for people across the planet to see.

Consider, for example, what happened when Flight 1549 crashed in the Hudson.


When Osama bin Laden was killed.


Or when Mitt Romney mentioned binders during the presidential debates.


When each of these events happened, people instantly came to Twitter – and, in particular, Twitter search – to discover what was happening.

From a search and advertising perspective, however, these sudden events pose several challenges:

  1. The queries people perform have never before been seen, so it’s impossible to know beforehand what they mean. How would you know that #bindersfullofwomen refers to politics, and not office accessories, or that people searching for “Horses and Bayonets” are interested in the debates?
  2. Since these spikes in search queries are so short-lived, there’s only a short window of opportunity to learn what they mean.

So an event happens, people instantly come to Twitter to search for the event, and we need to teach our systems what these queries mean as quickly as we can, because in just a few hours those searches will be gone.

How do we do this? We’ll describe a novel real-time human computation engine we built that allows us to find search queries as soon as they’re trending, send these queries to real humans to be judged, and finally incorporate these human annotations into our backend models.

Overview

Before we delve into the details, here’s an overview of how the system works.

(1) First, we monitor which search queries are currently popular.

Behind the scenes: we run a Storm topology that tracks statistics on search queries.

For example: the query “Big Bird” may be averaging zero searches a day, but at 6pm on October 3, we suddenly see a spike in searches from the US.

(2) Next, as soon as we discover a new popular search query, we send it to our human evaluation systems, where judges are asked a variety of questions about the query.

Behind the scenes: when the Storm topology detects that a query has reached sufficient popularity, it connects to a Thrift API that dispatches the query to Amazon’s Mechanical Turk service, and then polls Mechanical Turk for a response.

For example: as soon as we notice “Big Bird” spiking, we may ask judges on Mechanical Turk to categorize the query, or provide other information (e.g., whether there are likely to be interesting pictures of the query, or whether the query is about a person or an event) that helps us serve relevant tweets and ads.

(3) Finally, after a response from a judge is received, we push the information to our backend systems, so that the next time a user searches for a query, our machine learning models will make use of the additional information. For example, suppose our human judges tell us that “Big Bird” is related to politics; the next time someone performs this search, we know to surface ads by @barackobama or @mittromney, not ads about Dora the Explorer.

Let’s now explore the first two steps above in more detail.

Monitoring for popular queries

Storm is a distributed system for real-time computation. In contrast to batch systems like Hadoop, which often introduce delays of hours or more, Storm allows us to run online data processing algorithms to discover search spikes as soon as they happen.

In brief, running a job on Storm involves creating a Storm topology that describes the processing steps that must occur, and deploying this topology to a Storm cluster. A topology itself consists of three things:

  • Tuple streams of data. In our case, these may be tuples of (search query, timestamp).
  • Spouts that produce these tuple streams. In our case, we attach spouts to our search logs, which get written to every time a search occurs.
  • Bolts that process tuple streams. In our case, we use bolts for operations like updating total query counts, filtering out non-English queries, and checking whether an ad is currently being served up for the query.
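
Putting these pieces together, here’s a rough sketch of how such a topology might be wired up (the spout and bolt classes are hypothetical stand-ins for implementations of Storm’s spout and bolt interfaces, not Twitter’s actual code):

Popular Query Topology, Sketch
import backtype.storm.topology.TopologyBuilder
import backtype.storm.tuple.Fields

val builder = new TopologyBuilder

// SearchLogSpout, QueryCountBolt, and HumanEvalDispatchBolt are assumed
// implementations of Storm's IRichSpout/IRichBolt interfaces.
builder.setSpout("search-logs", new SearchLogSpout)

// Group by query so that all occurrences of a given query are counted by
// the same bolt task.
builder.setBolt("query-counter", new QueryCountBolt, 4)
       .fieldsGrouping("search-logs", new Fields("query"))

builder.setBolt("human-eval-dispatcher", new HumanEvalDispatchBolt)
       .shuffleGrouping("query-counter")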

Here’s a step-by-step walkthrough of how our popular query topology works:

  1. Whenever you perform a search on Twitter, the search request gets logged to a Kafka queue.
  2. The Storm topology attaches a spout to this Kafka queue, and the spout emits a tuple containing the query and other metadata (e.g., the time the query was issued and its location) to a bolt for processing.
  3. This bolt updates the count of the number of times we’ve seen this query, checks whether the query is “currently popular” (using various statistics like time-decayed counts, the geographic distribution of the query, and the last time this query was sent for annotations), and dispatches it to our human computation pipeline if so.
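
For a concrete flavor, here’s a minimal sketch of the kind of time-decayed counting such a bolt might perform (the class, half-life, and threshold are illustrative assumptions, not Twitter’s actual implementation):

Time-Decayed Counts, Sketch
class DecayedQueryCounter(halfLifeSeconds: Double) {
  // For each query, the decayed count and the timestamp of its last update.
  private val counts = scala.collection.mutable.Map[String, (Double, Long)]()

  // Decay the old count toward zero, then add one for the new observation.
  def observe(query: String, timestampSeconds: Long): Double = {
    val (oldCount, lastSeen) = counts.getOrElse(query, (0.0, timestampSeconds))
    val elapsed = math.max(timestampSeconds - lastSeen, 0L)
    val decayed = oldCount * math.pow(0.5, elapsed / halfLifeSeconds)
    val newCount = decayed + 1
    counts(query) = (newCount, timestampSeconds)
    newCount
  }
}

// Inside the bolt, a query whose decayed count crosses a threshold (and that
// hasn't been judged recently) would be dispatched for human evaluation.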

One interesting feature of our popularity algorithm is that we often rejudge queries that have been annotated before, since the intent of a search can change. For example, perhaps people normally search for “Clint Eastwood” because they’re interested in his movies, but during the Republican National Convention users may have wanted to see tweets that were more political in nature.

Human evaluation of popular search queries

At Twitter, we use human computation for a variety of tasks. (See also Clockwork Raven, an open-source project we built that makes launching tasks easier.) For example, we often run experiments to measure ad relevance and search quality, we use it to gather data to train and evaluate our machine learning models, and in this section we’ll describe how we use it to boost our understanding of popular search queries.

So suppose that our Storm topology has detected that the query “Big Bird” is suddenly spiking. Since the query may remain popular for only a few hours, we send it off to live humans, who can help us quickly understand what it means; this dispatch is performed via a Thrift service that allows us to design our tasks in a web frontend, and later programmatically submit them to Mechanical Turk using any of the different languages we use across Twitter.

On Mechanical Turk, judges are asked several questions about the query that help us serve better ads. Without going into the exact questions, here’s the flavor of a few possibilities:

  • What category does the query belong to? For example, “Stanford” may typically be an education-related query, but perhaps there’s a football game between Stanford and Berkeley at the moment, in which case the current search intent would be sports.
  • Does the query refer to a person? If so, who, and what is their Twitter handle if they have one? For example, the query “Happy Birthday Harry” may be trending, but it’s hard to know beforehand which of the numerous celebrities named Harry it’s referring to. Is it One Direction’s Harry Styles, in which case the searcher is likely to be interested in teen pop? Harry Potter, in which case the searcher is likely to be interested in fantasy novels? Or someone else entirely?

Turkers in the machine

Since humans are core to this system, let’s describe how our workforce was designed to give us fast, reliable results.

To complete all our tasks, we use a small custom pool of Mechanical Turk judges to ensure high quality. Other typical possibilities in the crowdsourcing world are to use a static set of in-house judges, to use the standard worker filters that Amazon provides, or to go through an outside company like Crowdflower. We’ve experimented with these other solutions, and while they have their own benefits, we found that a custom pool fit our needs best for a few reasons:

  • In-house judges can provide high-quality work as well, but they usually work standard hours (for example, 9 to 5 if they work onsite, or a relatively fixed and limited set of hours if they work from home), it can be difficult to communicate with them and schedule them for work, and it’s hard to scale the hiring of more judges.
  • Using Crowdflower or Amazon’s standard filters makes it easy to scale the workforce, but their trust algorithms aren’t perfect, so an endless problem is that spammy workers get through and submit very poor quality judgments. Two methods of combating low quality are to seed gold standard examples (items for which you already know the true response) throughout your task, or to use statistical analysis to determine which workers are the good ones; but gold standards can be time-consuming and expensive to create, and we often run free-response, researchy tasks for which these solutions don’t work. Another problem is that these filters give you a fluid, constantly changing set of workers, which makes them hard to train.

In contrast:

  • Our custom pool of judges works virtually all day. For many of them, this is a full-time job, and they’re geographically distributed, so our tasks complete quickly at all hours; we can easily ask for thousands of judgments before lunch, and have them finished by the time we get back, which makes iterating on our experiments much easier.
  • We have several forums, mailing lists, and even live chatrooms set up, all of which makes it easy for judges to ask us questions and to respond to feedback. Our judges will even give us suggestions on how to improve our tasks; for example, when we run categorization tasks, they’ll often report helpful categories that we should add.
  • Since we only launch tasks on demand, and Amazon provides a ready source of workers if we ever need more, our judges are never idly twiddling their thumbs waiting for tasks or completing busywork, and our jobs are rarely backlogged.
  • Because our judges are culled from the best of Mechanical Turk, they’re experts at the kinds of tasks we send, and can often provide higher quality at a faster rate than what even in-house judges provide. For example, they’ll often use the forums and chatrooms to collaborate amongst themselves to give us the best judgments, and they’re already familiar with the Firefox and Chrome scripts that help them be the most efficient at their work.

All the benefits described above are especially valuable in this real-time search annotation case:

  • Having highly trusted workers means we don’t need to wait for multiple annotations on a single search query to confirm validity, so we can send responses to our backend as soon as a single judge responds. This entire pipeline is designed for real-time, after all, so the lower the latency on the human evaluation part, the better.
  • The static nature of our custom pool means that the judges are already familiar with our questions, and don’t need to be trained again.
  • Because our workers aren’t limited to a fixed schedule or location, they can work anywhere, anytime – which is a requirement for this system, since global event spikes on Twitter are not beholden to a 9-to-5.
  • And with the multiple easy avenues of communication we have set up, it’s easy for us to answer questions that might arise when we add new questions or modify existing ones.

Singing telegram summary

Let’s end with an example of the kind of top-quality work our workers provide: a crowdsourced singing summary we used to celebrate the project’s launch.

This video was created entirely by our workers, from the crowdsourced lyrics, to the crowdsourced graphics, and even the piano playing and singing. A special thank you to our amazing Turker, workasaurusrex, the musician and silky smooth crooner who brought the masterpiece together.

Thanks

Thanks to everyone on the Revenue and Storm teams, as well as our Turkers, for helping us launch this project.

Edge Prediction in a Social Graph: My Solution to Facebook’s User Recommendation Contest on Kaggle

A couple weeks ago, Facebook launched a link prediction contest on Kaggle, with the goal of recommending missing edges in a social graph. I love investigating social networks, so I dug around a little, and since I did well enough to score one of the coveted prizes, I’ll share my approach here.

(For some background, the contest provided a training dataset of edges, a test set of nodes, and contestants were asked to predict missing outbound edges on the test set, using mean average precision as the evaluation metric.)

Exploration

What does the network look like? I wanted to play around with the data a bit first just to get a rough feel, so I made an app to interact with the network around each node.

Here’s a sample:

1 Untrimmed Network

(Go ahead, click on the picture to play with the app yourself. It’s pretty fun.)

The node in black is a selected node from the training set, and we perform a breadth-first walk of the graph out to a maximum distance of 3 to uncover the local network. Nodes are sized according to their distance from the center, and colored according to a chosen metric (a personalized PageRank in this case; more on this later).

We can see that the central node is friends with three other users (in red), two of whom have fairly large, disjoint networks.

There are quite a few dangling nodes (nodes at distance 3 with only one connection to the rest of the local network), though, so let’s remove these to reveal the core structure:

1 Trimmed Network

And here’s an embedded version you can manipulate inline:

Since the default view doesn’t encode the distinction between following and follower relationships, we can mouse over each node to see who it follows and who it’s followed by. Here, for example, is the following/follower network of one of the central node’s friends:

1 - Friend1

The moused-over node is highlighted in black, its friends (users who follow the node and are followed back in turn) are colored in purple, its followees in teal, and its followers in orange. We can also see that the node shares a friend with the central user (triadic closure, holla!).

Here’s another network, this time of the friend at the bottom:

1 - Friend2

Interestingly, while the first friend had several only-followers (in orange), the second friend has none (which suggests, perhaps, a node-level feature that measures how follow-hungry a user is…).

And here’s one more node, a little further out (maybe a celebrity, given it has nothing but followers?):

1 - Celebrity

The Quiet One

Let’s take a look at another graph, one whose local network is a little smaller:

4 Network

A Social Butterfly

And one more, whose local network is a little larger:

2 Network

2 Network - Friend

Again, I encourage everyone to play around with the app here, and I’ll come back to the question of coloring each node later.

Distributions

Next, let’s take a more quantitative look at the graph.

Here’s the distribution of the number of followers of each node in the training set (cut off at 50 followers for a better fit – the maximum number of followers is 552), as well as the number of users each node is following (again, cut off at 50 – the maximum here is 1566).

Training Followers

Training Followees

Nothing terribly surprising, but that alone is good to verify. (For people tempted to mutter about power laws, I’ll hold you off with the bitter coldness of baby Gauss’s tears.)

Similarly, here are the same two graphs, but limited to the nodes in the test set alone:

Test Followers

Test Followees

Notice that there are relatively more test set users with 0 followees than in the full training set, and relatively fewer test set users with 0 followers. This information could be used to better simulate a validation set for model selection, though I didn’t end up doing this myself.

Preliminary Probes

Finally, let’s move on to the models themselves.

In order to quickly get up and running on a couple prediction algorithms, I started with some unsupervised approaches. For example, after building a new validation set* to test performance offline, I tried:

  • Recommending users who follow you (but you don’t follow in return)
  • Recommending users similar to you (when representing users as sets of their followers, and using cosine similarity and Jaccard similarity as the similarity metrics)
  • Recommending users based on a personalized PageRank score
  • Recommending users that the people you follow also follow

And so on, combining the votes of these algorithms in a fairly ad hoc way (e.g., by taking the majority vote or by ordering by the number of followers).

This actually worked quite well, but I’d been planning from the beginning to move on to a model-based machine learning approach, so I did that next.

*My validation set was formed by deleting random edges from the full training set. A slightly better approach, as mentioned above, might have been to more accurately simulate the distribution of the official test set, but I didn’t end up trying this out myself.
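
Since the contest’s evaluation metric was mean average precision, offline scoring of a validation set might look roughly like this (a minimal Scala sketch with names of my own choosing; the official metric also caps the number of predictions per node, which this sketch ignores):

Mean Average Precision, Sketch
// Average precision for one source node: precision at each rank where we
// hit a held-out edge, averaged over the held-out edges.
def averagePrecision(recommended: List[Int], actual: Set[Int]): Double =
  if (actual.isEmpty) 0.0
  else {
    var hits = 0
    val precisions = recommended.zipWithIndex.collect {
      case (node, rank) if actual.contains(node) =>
        hits += 1
        hits.toDouble / (rank + 1)
    }
    precisions.sum / actual.size
  }

// Mean over all test nodes.
def meanAveragePrecision(predictions: Map[Int, List[Int]],
                         truths: Map[Int, Set[Int]]): Double =
  predictions.map { case (user, recs) =>
    averagePrecision(recs, truths.getOrElse(user, Set.empty[Int]))
  }.sum / predictions.size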

Candidate Selection

In order to run a machine learning algorithm to recommend edges (which would take two nodes, a source and a candidate destination, and generate a score measuring the likelihood that the source would follow the destination), it’s necessary to prune the set of candidates to run the algorithm on.

I used two approaches for this filtering step, both based on random walks on the graph.

Personalized PageRank

The first approach was to calculate a personalized PageRank around each source node.

Briefly, a personalized PageRank is like standard PageRank, except that when randomly teleporting to a new node, the surfer always teleports back to the given source node being personalized (rather than to a node chosen uniformly at random, as in the classic PageRank algorithm).

That is, the random surfer in the personalized PageRank model works as follows:

  • He starts at the source node $X$ that we want to calculate a personalized PageRank around.
  • At step $i$: with probability $p$, the surfer moves to a neighboring node chosen uniformly at random; with probability $1-p$, the surfer instead teleports back to the original source node $X$.
  • The limiting probability that the surfer is at node $N$ is then the personalized PageRank score of node $N$ around $X$.
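
In equation form: if $e_X$ is the indicator vector of the source node and $P$ is the row-normalized transition matrix of the graph, the personalized PageRank vector $\pi_X$ is the fixed point of

$\pi_X = (1 - p)\,e_X + p\,\pi_X P$

which is exactly the update iterated by the code below (where $p$ is called alpha) and by the matrix version later in this post.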

Here’s some Scala code that computes approximate personalized PageRank scores and takes the highest-scoring nodes as the candidates to feed into the machine learning model:

Personalized PageRank
/**
 * Calculate a personalized PageRank around the given user, and return 
 * a list of the nodes with the highest personalized PageRank scores.
 *
 * @return A list of (node, probability of landing at this node after
 *         running a personalized PageRank for K iterations) pairs.
 */
def pageRank(user: Int): List[(Int, Double)] = {
  // This map holds the probability of landing at each node, up to the
  // current iteration.
  val probs = scala.collection.mutable.Map[Int, Double]()
  probs(user) = 1.0 // We start at this user.

  val pageRankProbs = pageRankHelper(user, probs, NumPagerankIterations)
  pageRankProbs.toList
               .sortBy { -_._2 }
               .filter { case (node, score) =>
                  !getFollowings(user).contains(node) && node != user
                }
               .take(MaxNodesToKeep)
}

/**
 * Simulates running a personalized PageRank for one iteration.
 *
 * Parameters:
 * start - the start node to calculate the personalized PageRank around
 * probs - a map from nodes to the probability of being at that node at 
 *         the start of the current iteration
 * numIterations - the number of iterations remaining
 * alpha - with probability alpha, we follow a neighbor; with probability
 *         1 - alpha, we teleport back to the start node
 *
 * @return A map of node -> probability of landing at that node after the
 *         specified number of iterations.
 */
def pageRankHelper(start: Int, probs: scala.collection.mutable.Map[Int, Double],
                   numIterations: Int, alpha: Double = 0.5)
                  : scala.collection.mutable.Map[Int, Double] = {
  if (numIterations <= 0) {
    probs
  } else {
    // Holds the updated set of probabilities, after this iteration.
    val probsPropagated = scala.collection.mutable.Map[Int, Double]()

    // With probability 1 - alpha, we teleport back to the start node.
    probsPropagated(start) = 1 - alpha

    // Propagate the previous probabilities...
    probs.foreach { case (node, prob) =>
      val forwards = getFollowings(node)
      val backwards = getFollowers(node)

      // With probability alpha, we move to a neighbor (a following or a
      // follower), and each node distributes its current probability
      // equally among its neighbors.
      val probToPropagate = alpha * prob / (forwards.size + backwards.size)
      (forwards.toList ++ backwards.toList).foreach { neighbor =>
        if (!probsPropagated.contains(neighbor)) {
          probsPropagated(neighbor) = 0
        }
        probsPropagated(neighbor) += probToPropagate
      }
    }

    pageRankHelper(start, probsPropagated, numIterations - 1, alpha)
  }
}

Propagation Score

Another approach I used, based on a proposal by another contestant on the Kaggle forums, works as follows:

  • Start at a specified user node and give it some score.
  • In the first iteration, this user propagates its score equally to its neighbors.
  • In the second iteration, each user duplicates and keeps half of its score S. It then propagates S equally to its neighbors.
  • In subsequent iterations, the process is repeated, except that neighbors reached via a backwards link don’t duplicate and keep half of their score. (The idea is that we want the score to reach followees and not followers.)

Here’s some Scala code to calculate these propagation scores:

Propagation Score
/**
 * Calculate propagation scores around the current user.
 *
 * In the first propagation round, we
 *
 * - Give the starting node N an initial score S.
 * - Propagate the score equally to each of N's neighbors (followers 
 *   and followings).
 * - Each first-level neighbor then duplicates and keeps half of its score
 *   and then propagates the original again to its neighbors.
 *
 * In further rounds, neighbors then repeat the process, except that neighbors 
 * traveled to via a backwards/follower link don't keep half of their score.
 *
 * @return a sorted list of (node, propagation score) pairs.
 */
def propagate(user: Int): List[(Int, Double)] = {
  val scores = scala.collection.mutable.Map[Int, Double]()

  // We propagate the score equally to all neighbors.
  val scoreToPropagate = 1.0 / (getFollowings(user).size + getFollowers(user).size)

  (getFollowings(user).toList ++ getFollowers(user).toList).foreach { x =>
    // Propagate the score...
    continuePropagation(scores, x, scoreToPropagate, 1)
    // ...and make sure it keeps half of it for itself.
    scores(x) = scores.getOrElse(x, 0.0) + scoreToPropagate / 2
  }

  scores.toList.sortBy { -_._2 }
               .filter { nodeAndScore =>
                 val node = nodeAndScore._1
                 !getFollowings(user).contains(node) && node != user
                }
                .take(MaxNodesToKeep)
}

/**
 * In further rounds, neighbors repeat the process above, except that neighbors
 * traveled to via a backwards/follower link don't keep half of their score.
 */
def continuePropagation(scores: scala.collection.mutable.Map[Int, Double],
                        user: Int, score: Double, currIteration: Int): Unit = {
  if (currIteration < NumIterations && score > 0) {
    val scoreToPropagate = score / (getFollowings(user).size + getFollowers(user).size)

    getFollowings(user).foreach { x =>
      // Propagate the score...
      continuePropagation(scores, x, scoreToPropagate, currIteration + 1)
      // ...and make sure it keeps half of it for itself.
      scores(x) = scores.getOrElse(x, 0.0) + scoreToPropagate / 2
    }

    getFollowers(user).foreach { x =>
      // Propagate the score...
      continuePropagation(scores, x, scoreToPropagate, currIteration + 1)
      // ...but backward links (except for the starting node's immediate
      // neighbors) don't keep any score for themselves.
    }
  }
}

I played around with tweaking some parameters in both approaches (e.g., weighting followers and followees differently), but the natural defaults (as used in the code above) ended up performing the best.

Features

After pruning the set of candidate destination nodes to a more feasible level, I fed pairs of (source, destination) nodes into a machine learning model. From each pair, I extracted around 30 features in total.

As mentioned above, one feature that worked quite well on its own was whether the destination node already follows the source.

I also used a wide set of similarity-based features, for example, the Jaccard similarity between the source and destination when both are represented as sets of their followers, when both are represented as sets of their followees, or when one is represented as a set of followers while the other is represented as a set of followees.

Similarity Metrics
abstract class SimilarityMetric[T] {
  def apply(set1: Set[T], set2: Set[T]): Double
}

object JaccardSimilarity extends SimilarityMetric[Int] {
  /**
   * Returns the Jaccard similarity between two sets, 0 if both are empty.
   */
  def apply(set1: Set[Int], set2: Set[Int]): Double = {
    val union = set1.union(set2).size

    if (union == 0) {
      0
    } else {
      (set1 & set2).size.toDouble / union
    }
  }
}

object CosineSimilarity extends SimilarityMetric[Int] {
  /**
   * Returns the cosine similarity between two sets, 0 if either is empty.
   */
  def apply(set1: Set[Int], set2: Set[Int]): Double = {
    if (set1.isEmpty || set2.isEmpty) {
      0
    } else {
      (set1 & set2).size.toDouble / math.sqrt(set1.size * set2.size)
    }
  }
}

// ************
// * FEATURES *
// ************

/**
 * Returns the similarity between user1 and user2 when both are represented as
 * sets of followers.
 */
def similarityByFollowers(user1: Int, user2: Int)
                         (implicit similarity: SimilarityMetric[Int]): Double = {
  similarity.apply(getFollowersWithout(user1, user2),
                   getFollowersWithout(user2, user1))
}

// etc.

Along the same lines, I also computed a similarity score between the destination node and the source node’s followees, and several variations thereof.

Extended Similarity Scores
/**
 * Iterate over each of user1's followings, compute their similarity with
 * user2 when both are represented as sets of followers, and return the 
 * sum of these similarities.
 */
def followerBasedSimilarityToFollowings(user1: Int, user2: Int)
  (implicit similarity: SimilarityMetric[Int]): Double = {
    getFollowingsWithout(user1, user2)
                        .map { similarityByFollowers(_, user2)(similarity) }
                        .sum
}

Other features included the number of followers and followees of each node, the ratio of these, the personalized PageRank and propagation scores themselves, the number of followers in common, and triangle/closure-type features (e.g., whether the source node is friends with a node X who in turn is a friend of the destination node).
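
As an illustration, the closure-type feature might be sketched like this (reusing the getFollowings helper from the code above; my actual feature definitions were a bit richer):

Triadic Closure Feature, Sketch
// Two users are friends if each follows the other.
def isFriend(a: Int, b: Int): Boolean =
  getFollowings(a).contains(b) && getFollowings(b).contains(a)

// Number of nodes X such that the source is friends with X and X is
// friends with the destination.
def commonFriendCount(source: Int, dest: Int): Int =
  getFollowings(source).count { x => isFriend(source, x) && isFriend(x, dest) }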

If I had had more time, I would probably have tried weighted and more regularized versions of some of these features as well (e.g., downweighting nodes with large numbers of followers when computing cosine similarity scores based on followees, or shrinking the scores of nodes we have little information about).

Feature Understanding

But what are these features actually doing? Let’s use the same app I built before to take a look.

Here’s the local network of node 317 (different from the node above), where each node is colored by its personalized PageRank (higher scores are in darker red):

317 - Personalized PageRank

If we look at the following vs. follower relationships of the central node (recall that purple is friends, teal is followings, orange is followers):

317 - Personalized PageRank

…we can see that, as expected (because edges that represented both following and follower were double-weighted in my PageRank calculation), the darkest red nodes are those that are friends with the central node, while those in a following-only or follower-only relationship have a lower score.

How does the propagation score compare to personalized PageRank? Here, I colored each node according to the log ratio of its propagation score and personalized PageRank:

317 - Log Ratio

Comparing this coloring with the local follow/follower network:

317 - Local Network of Node

…we can see that followed nodes (in teal) receive a higher propagation weight than friend nodes (in purple), while follower nodes (in orange) receive almost no propagation score at all.

Going back to node 1, let’s look at a different metric. Here, each node is colored according to its Jaccard similarity with the source, when nodes are represented by the set of their followers:

1 - Similarity by Followers

We can see that, while the PageRank and propagation metrics tended to favor nodes close to the central node, the Jaccard similarity feature helps us explore nodes that are further out.

However, if we look at the high-scoring nodes more closely, we see that they often have only a single connection to the rest of the network:

1 - Single Connection

In other words, their high Jaccard similarity is due to the fact that they don’t have many connections to begin with. This suggests that some regularization or shrinking is in order.

So here’s a regularized version of Jaccard similarity, where we downweight nodes with few connections:

1 - Regularized

We can see that the outlier nodes are much more muted this time around.
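
My exact shrinkage formula isn’t reproduced here, but one plausible variant multiplies the raw Jaccard score by a confidence factor that grows with the amount of evidence:

Regularized Jaccard, Sketch
// k is a smoothing constant (an assumption, to be tuned on a validation
// set). Small unions get damped toward zero; large unions are barely touched.
def regularizedJaccard(set1: Set[Int], set2: Set[Int], k: Double = 5.0): Double = {
  val union = set1.union(set2).size
  if (union == 0) 0.0
  else ((set1 & set2).size.toDouble / union) * (union / (union + k))
}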

For a starker difference, compare the following two graphs of the Jaccard similarity metric around node 317 (the first graph is an unregularized version, the second is regularized):

317 - Unregularized

317 - Regularized

Notice, in particular, how the popular node in the top left and the popular nodes at the bottom have a much higher score when we regularize.

And again, there are other networks and features I haven’t mentioned here, so play around and discover them on the app itself.

Models

For the machine learning algorithms on top of my features, I experimented with two types of models: logistic regression (using both L1 and L2 regularization) and random forests. (If I had more time, I would probably have done some more parameter tuning and maybe tried gradient boosted trees as well.)

So what is a random forest? I wrote an old (layman’s) post on it here, but since nobody ever clicks on these links, let’s copy it over:

Suppose you’re very indecisive, so whenever you want to watch a movie, you ask your friend Willow if she thinks you’ll like it. In order to answer, Willow first needs to figure out what movies you like, so you give her a bunch of movies and tell her whether you liked each one or not (i.e., you give her a labeled training set). Then, when you ask her if she thinks you’ll like movie X or not, she plays a 20 questions-like game with IMDB, asking questions like “Is X a romantic movie?”, “Does Johnny Depp star in X?”, and so on. She asks more informative questions first (i.e., she maximizes the information gain of each question), and gives you a yes/no answer at the end.

Thus, Willow is a decision tree for your movie preferences.

But Willow is only human, so she doesn’t always generalize your preferences very well (i.e., she overfits). In order to get more accurate recommendations, you’d like to ask a bunch of your friends, and watch movie X if most of them say they think you’ll like it. That is, instead of asking only Willow, you want to ask Woody, Apple, and Cartman as well, and they vote on whether you’ll like a movie (i.e., you build an ensemble classifier, aka a forest in this case).

Now you don’t want each of your friends to do the same thing and give you the same answer, so you first give each of them slightly different data. After all, you’re not absolutely sure of your preferences yourself – you told Willow you loved Titanic, but maybe you were just happy that day because it was your birthday, so maybe some of your friends shouldn’t use the fact that you liked Titanic in making their recommendations. Or maybe you told her you loved Cinderella, but actually you *really really* loved it, so some of your friends should give Cinderella more weight. So instead of giving your friends the same data you gave Willow, you give them slightly perturbed versions. You don’t change your love/hate decisions, you just say you love/hate some movies a little more or less (you give each of your friends a bootstrapped version of your original training data). For example, whereas you told Willow that you liked Black Swan and Harry Potter and disliked Avatar, you tell Woody that you liked Black Swan so much you watched it twice, you disliked Avatar, and don’t mention Harry Potter at all.

By using this ensemble, you hope that while each of your friends gives somewhat idiosyncratic recommendations (Willow thinks you like vampire movies more than you do, Woody thinks you like Pixar movies, and Cartman thinks you just hate everything), the errors get canceled out in the majority. Thus, your friends now form a bagged (bootstrap aggregated) forest of your movie preferences.

There’s still one problem with your data, however. While you loved both Titanic and Inception, it wasn’t because you like movies that star Leonardo DiCaprio. Maybe you liked both movies for other reasons. Thus, you don’t want your friends to all base their recommendations on whether Leo is in a movie or not. So when each friend asks IMDB a question, only a random subset of the possible questions is allowed (i.e., when you’re building a decision tree, at each node you use some randomness in selecting the attribute to split on, say by randomly selecting an attribute or by selecting an attribute from a random subset). This means your friends aren’t allowed to ask whether Leonardo DiCaprio is in the movie whenever they want. So whereas previously you injected randomness at the data level, by perturbing your movie preferences slightly, now you’re injecting randomness at the model level, by making your friends ask different questions at different times.

And so your friends now form a random forest.

Moving on, I essentially trained scikit-learn’s classifiers on an equal split of true and false edges (sampled from the output of my pruning step, in order to match the distribution I’d get when applying my algorithm to the official test set), and compared performance on the validation set I made, with a small amount of parameter tuning:

Random Forest
from sklearn.ensemble import RandomForestClassifier

########################################
# STEP 1: Read in the training examples.
########################################
truths = [] # A truth is 1 (for a known true edge) or 0 (for a false edge).
training_examples = [] # Each training example is an array of features.
for line in open(TRAINING_SET_WITH_FEATURES_FILENAME):
  values = [float(x) for x in line.split(",")]
  truth = values[0]
  training_example_features = values[1:]

  truths.append(truth)
  training_examples.append(training_example_features)

#############################
# STEP 2: Train a classifier.
#############################
rf = RandomForestClassifier(n_estimators = 500, compute_importances = True, oob_score = True)
rf = rf.fit(training_examples, truths)

So let’s look at the variable importance scores as determined by one of my random forest models, which (unsurprisingly) consistently outperformed logistic regression.

Random Forest Importance Scores

The random forest classifier here is one of my earlier models (using a slightly smaller subset of my full suite of features), where the targeting step consisted of taking the top 25 nodes with the highest propagation scores.

We can see that the most important variables are:

  • Personalized PageRank scores. (I put in both normalized and unnormalized versions, where the normalized versions consisted of taking all the candidates for a particular source node and scaling them so that the maximum personalized PageRank score was 1; a sketch of this normalization follows below.)
  • Whether the destination node already follows the source.
  • How similar the source node is to the people the destination node is following, when each node is represented as a set of followers. (Note that this is more or less measuring how likely the destination is to follow the source, which we already saw is a good predictor of whether the source is likely to follow the destination.) Plus several variations on this theme (e.g., how similar the destination node is to the source node’s followers, when each node is represented as a set of followees).
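
The normalization mentioned in the first bullet is straightforward; here’s a sketch:

Score Normalization, Sketch
// Scale one source node's candidate scores so the maximum score is 1.
def normalizeScores(scores: List[(Int, Double)]): List[(Int, Double)] =
  if (scores.isEmpty) scores
  else {
    val maxScore = scores.map(_._2).max
    scores.map { case (node, score) => (node, score / maxScore) }
  }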

Model Comparison

How do all of these models compare to each other? Is the random forest model universally better than the logistic regression model, or are there some sets of users for which the logistic regression model actually performs better?

To enable these kinds of comparisons, I made a small module that allows you to select two models and then visualize their sliced performance.

PageRank vs. Is Followed By

(Go ahead, play around.)

Above, I grouped all test nodes into buckets based on (the logarithm of) their number of followers, and compared the mean average precision of two algorithms: one that recommends nodes to follow using a personalized PageRank alone, and one that recommends nodes that are following the source user but are not followed back in return.

We see that except for the case of 0 followers (where the “is followed by” algorithm can do nothing), the personalized PageRank algorithm gets increasingly better in comparison: at first, the two algorithms have roughly equal performance, but as the source node gets more followers, the personalized PageRank algorithm dominates.

And here’s an embedded version you can interact with directly:

Admittedly, building a slicer like this is probably overkill for a Kaggle competition, where the set of variables is fairly limited. But imagine having something similar for a real world model, where new algorithms are tried out every week and we can slice the performance by almost any dimension we can imagine (by geography, to make sure we don’t improve Australia at the expense of the UK; by user interests, to see where we could improve the performance of topic inference; by number of user logins, to make sure we don’t sacrifice the performance on new users for the gain of the core).

Mathematicians do it with Matrices

Let’s switch directions slightly and think about how we could rewrite our computations in a different, matrix-oriented style. (I didn’t do this in the competition – this is more a preview of another post I’m writing.)

Personalized PageRank in Scalding

Personalized PageRank, for example, is an obvious fit for a matrix rewrite. Here’s how it would look in Scalding’s new Matrix library:

(For those who don’t know, Scalding is a Hadoop framework that Twitter released at the beginning of the year; see my post on building a big data recommendation engine in Scalding for an introduction.)

Personalized PageRank, Matrix Style
// ***********************************************
// STEP 1. Load the adjacency graph into a matrix.
// ***********************************************

val following = Tsv(GraphFilename, ('user1, 'user2, 'weight))

// Binary matrix where cell (u1, u2) means that u1 follows u2.
val followingMatrix =
  following.toMatrix[Int,Int,Double]('user1, 'user2, 'weight)

// Binary matrix where cell (u1, u2) means that u1 is followed by u2.  
val followerMatrix = followingMatrix.transpose

// Note: we could also form this adjacency matrix differently, by placing
// different weights on the following vs. follower edges.
val undirectedAdjacencyMatrix =
  (followingMatrix + followerMatrix).rowL1Normalize

// Create a diagonal users matrix (to be used in the "teleportation back
// home" step).
val usersMatrix =
  following.unique('user1)
           .map('user1 -> ('user2, 'weight)) { user1: Int => (user1, 1) }
           .toMatrix[Int, Int, Double]('user1, 'user2, 'weight)

// ***************************************************
// STEP 2. Compute the personalized PageRank scores.
// See http://nlp.stanford.edu/projects/pagerank.shtml
// for more information on personalized PageRank.
// ***************************************************

// Compute personalized PageRank by running for three iterations,
// and output the top candidates.
val pprScores = personalizedPageRank(usersMatrix, undirectedAdjacencyMatrix, usersMatrix, 0.5, 3)
pprScores.topRowElems(numCandidates).write(Tsv(OutputFilename))

/**
 * Performs a personalized PageRank iteration. The ith row contains the
 * personalized PageRank probabilities around node i.
 *
 * Note the interpretation: 
 *   - with probability 1 - alpha, we go back to where we started.
 *   - with probability alpha, we go to a neighbor.
 *
 * Parameters:
 *   
 *   startMatrix - a (usually diagonal) matrix, where the ith row specifies
 *                 where the ith node teleports back to.
 *   adjacencyMatrix
 *   prevMatrix - a matrix whose ith row contains the personalized PageRank
 *                probabilities around the ith node.
 *   alpha - the probability of moving to a neighbor (as opposed to
 *           teleporting back to the start).
 *   numIterations - the number of personalized PageRank iterations to run. 
 */
def personalizedPageRank(startMatrix: Matrix[Int, Int, Double],
                         adjacencyMatrix: Matrix[Int, Int, Double],
                         prevMatrix: Matrix[Int, Int, Double],
                         alpha: Double,
                         numIterations: Int): Matrix[Int, Int, Double] = {
    if (numIterations <= 0) {
      prevMatrix
    } else {
      val updatedMatrix = startMatrix * (1 - alpha) +
                          (prevMatrix * adjacencyMatrix) * alpha
      personalizedPageRank(startMatrix, adjacencyMatrix, updatedMatrix, alpha, numIterations - 1)
    }
}

Not only is this matrix formulation a more natural way of expressing the algorithm, but since Scalding (by way of Cascading) supports both local and distributed modes, this code runs just as easily on a Hadoop cluster of thousands of machines (assuming our social network is orders of magnitude larger than the one in the contest) as on a sample of data in a laptop. Big data, big matrix style, BOOM.

Cosine Similarity as L2-Normalized Multiplication

Here’s another example. Calculating cosine similarity between all users is a natural fit for a matrix formulation since, after all, the cosine similarity between two vectors is just their L2-normalized dot product:

Cosine Similarity, Matrix Style
// A matrix where the cell (i, j) is 1 iff user i is followed by user j.
val followerMatrix = ...

// A matrix where cell (i, j) holds the cosine similarity between
// user i and user j, when both are represented as sets of their followers.
val followerBasedSimilarityMatrix =
  followerMatrix.rowL2Normalize * followerMatrix.rowL2Normalize.transpose

A Similarity Extension

But let’s go one step further.

To change examples for ease of exposition: suppose you’ve bought a bunch of books on Amazon, and Amazon wants to recommend a new book you’ll like. Since Amazon knows similarities between all pairs of books, one natural way to generate this recommendation is to:

  1. Take every book B.
  2. Calculate the similarity between B and each book you bought.
  3. Sum up all these similarities to get your recommendation score for B.

In other words, the recommendation score for book B on user U is:

DidUserBuy(U, Book 1) * SimilarityBetween(Book B, Book 1) +
DidUserBuy(U, Book 2) * SimilarityBetween(Book B, Book 2) + … +
DidUserBuy(U, Book n) * SimilarityBetween(Book B, Book n)

This, too, is a dot product! So it can also be rewritten as a matrix multiplication:

// A matrix where cell (i, j) holds the similarity between books i and j.
val bookSimilarityMatrix = ...

// A matrix where cell (i, j) is 1 if user i has bought book j, 
// and 0 otherwise.
val userPurchaseMatrix = ...

// A matrix where cell (i, j) holds the recommendation score of
// book j to user i.
val recommendationMatrix = userPurchaseMatrix * bookSimilarityMatrix

Of course, there’s a natural analogy between this score and the feature I described above, where I compute a similarity score between a destination node and a source node’s followees (when all nodes are represented as sets of followers):

/**
 * Iterate over each of user1's followings, compute their similarity
 * with user2 when both are represented as sets of followers, and return
 * the sum of these similarities.
 */
def followerBasedSimilarityToFollowings(user1: Int, user2: Int)
    (implicit similarity: SimilarityMetric[Int]): Double = {
  getFollowingsWithout(user1, user2)
                      .map { similarityByFollowers(_, user2)(similarity) }
                      .sum
}

/**
 * The matrix version of the above function.
 *
 * Why are these the same? Note that the above function simply computes:
 *   DoesUserFollow(User A, User 1) * Similarity(User 1, User B) + 
 *     DoesUserFollow(User A, User 2) * Similarity(User 2, User B) + ... + 
 *     DoesUserFollow(User A, User n) * Similarity(User n, User B)
 */
val followingMatrix = ...
val followerMatrix = followingMatrix.transpose
val followerBasedSimilarityMatrix =
  followerMatrix.rowL2Normalize * followerMatrix.rowL2Normalize.transpose

val followerBasedSimilarityToFollowingsMatrix =
  followingMatrix * followerBasedSimilarityMatrix

For people comfortable expressing their computations in a vector manner, writing computations as matrix manipulations often makes experimenting with different algorithms much more fluid. Imagine, for example, that you want to switch from L1 normalization to L2 normalization, or that you want to express your objects as binary sets rather than weighted vectors. Both of these become simple one-line changes when you have vectors and matrices as first-class objects, but are much more tedious when you don’t (especially in the MapReduce land this matrix library was designed for!).
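
For example, reusing the follower matrix from the snippets above, swapping the normalization really is a one-line difference:

Normalization Swap, Matrix Style
// Cosine similarity (L2) vs. an L1-normalized variant: only the
// normalization call changes.
val cosineSimilarityMatrix =
  followerMatrix.rowL2Normalize * followerMatrix.rowL2Normalize.transpose
val l1SimilarityMatrix =
  followerMatrix.rowL1Normalize * followerMatrix.rowL1Normalize.transpose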

Finish Line

By now, I think I’ve spent more time writing this post than on the contest itself, so let’s wrap up.

I often get asked what kinds of tools I like to use, so for this competition my kit consisted of:

  • Scala, for code that needed to be fast (e.g., extracting features) or that I was going to run repeatedly (e.g., scoring my validation set).
  • Python, for my machine learning models, because scikit-learn is awesome.
  • Ruby, for quick one-off scripts.
  • R, for some data analysis and simple plotting.
  • Coffeescript and d3, for the interactive visualizations.

Finally, I put up a Github repository containing some code, and below are a couple other posts I’ve written that people who like this entry might also enjoy.

Soda vs. Pop with Twitter

One of the great things about Twitter is that it’s a global conversation anyone can join anytime. Eavesdropping on the world, what what!

Of course, it gets even better when you can mine all this chatter to study the way humans live and interact.

For example, how do people in New York City differ from those in Silicon Valley? We tend to think they’re more financially driven and restless with the world – is this true, and if so, how much more?

Or how does language change as you travel to different regions? Recall the classic soda vs. pop vs. coke question: some people use the word “soda” to describe their soft drinks, others use “pop”, and still others use “coke”. Who says what where?

Let’s take a look.

United States

To make this map, I sampled geo-tagged tweets containing the words “soda”, “pop”, or “coke”, applied some state-of-the-art NLP technology to ensure the tweets were soft drink related (e.g., the tweets had to contain “drink soda” or “drink a pop”), and tried to filter out coke tweets that were specifically about the Coke brand (e.g., Coke Zero).
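
That “state-of-the-art NLP technology” amounts to pattern matching; here’s a hypothetical sketch of the kind of filters involved (the exact patterns and brand list are my assumptions):

Soft Drink Filter, Sketch
// Keep tweets that mention actually drinking the beverage...
val drinkPattern = """(?i)\bdrink(ing|s)?\s+(a\s+)?(soda|pop|coke)\b""".r

// ...and drop tweets about the Coca-Cola brand itself.
val brandPattern = """(?i)\bcoke\s+zero\b""".r

def isSoftDrinkTweet(text: String): Boolean =
  drinkPattern.findFirstIn(text).isDefined && brandPattern.findFirstIn(text).isEmpty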

It’s a little cluttered, though, so let’s clean it up by aggregating nearby tweets.

United States Binned

Here, I bucketed all tweets within a 0.333 latitude/longitude radius, calculated the term distribution within each bucket, and colored each bucket with the word furthest from its overall mean. I also sized each point according to the (log-transformed) number of tweets in the bucket.
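
In code, the bucketing might look something like this (a sketch; I’m treating the 0.333 “radius” as a square grid cell, and the coloring step is omitted):

Geo Bucketing, Sketch
case class GeoTweet(lat: Double, lng: Double, term: String)

// Snap each geo-tagged tweet to a 0.333-degree grid cell and compute the
// term distribution within each cell.
def bucketTweets(tweets: Seq[GeoTweet], cellSize: Double = 0.333)
    : Map[(Int, Int), Map[String, Int]] =
  tweets.groupBy { t =>
    ((t.lat / cellSize).floor.toInt, (t.lng / cellSize).floor.toInt)
  }.map { case (cell, ts) =>
    cell -> ts.groupBy(_.term).map { case (term, xs) => term -> xs.size }
  }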

We can see that:

  • The South is pretty Coke-heavy.
  • Soda belongs to the Northeast and far West.
  • Pop gets the Midwest, except for some interesting spots of blue around Wisconsin and the Illinois-Missouri border.

For comparison, here’s another map based on a survey at popvssoda.com.

Pop vs. Soda Map

We can see similar patterns, though interestingly, our map has less Coke in the Southeast and less pop in the Northwest.

Finally, here’s a world map of the terms, bucketed again. Notice that “pop” seems to be prevalent only in parts of the United States and Canada.

World

As some astute readers noted, though, the seeming dominance of coke is probably due to the difficulty in distinguishing the generic use of coke for soft drinks in general from the particular use of coke for referring to the Coca-Cola brand.

So let’s instead look at a world map of a couple other soft drink terms (“fizzy drink”, “mineral”, and “tonic”):

Fizzy Drink vs. Mineral vs. Tonic

Notice that:

  • “Fizzy drink” shows up for the UK, New Zealand, and Maine.
  • “Tonic” appears in Massachusetts.
  • While South Africa gets “fizzy drink”, Nigeria gets “mineral”.

I’ve been getting a lot of questions lately about interesting things you can do with the Twitter API, so this was just one small project I’ve worked on to illustrate. This paper contains another awesome application of Twitter data to geographic language variation, and just for fun, here are a few other cute mini-projects:

What do people eat during the Super Bowl? (wings and beer, apparently)

Superbowl Snacks

What do people want for Christmas, compared to what they actually get?

Christmas

What do guys and girls really say?

Shit Guys and Girls Say

When were people losing and gaining power during Hurricane Sandy? (click the image to interact)

Sandy Power Outages

How does information of a geographic-specific nature spread? (click the image to see a dynamic visualization of when and where tweets related to surviving Hurricane Sandy were shared)

Hurricane Sandy Retweets

Can we use Twitter to measure presidential votes? (yes!)

Electoral Map

Making the Most of Mechanical Turk: Tips and Best Practices

(Update: we recently open-sourced Clockwork Raven, one of the human evaluation tools on top of Mechanical Turk that we built at Twitter.)

Big data’s all the rage, but sometimes a couple thousand human-generated labels can be pretty effective as well. And since I’ve been using Amazon’s Mechanical Turk system a lot recently, I figured I’d share some of the things I’ve learned.

What is MTurk?

Mechanical Turk is a crowdsourcing system developed by Amazon that connects you to a relatively cheap source of human labor on the fly.

For example, suppose you have 10,000 websites that you want to classify as spam or not. To get these classifications, you (the Requester):

  1. Create a CSV file containing the links and any other information.
  2. Log onto MTurk and create a HIT (Human Intelligence Task) describing the job (possibly by using Amazon’s WYSIWYG editor or writing your own HTML, which can refer to columns in your CSV). [There’s also an MTurk API, if you don’t want to use the terrible UI.]
  3. Within hours of starting the task, your judgments will be completed by Turkers around the world for pennies each.

More Example Tasks

So what can you use MTurk for? Here are three of my favorite uses:

Lines Mutation

Sheep

Blurry Text

And here are some more practical tasks, from HITs running right now:

  • Categorize the sentiment of a tweet towards Panera Bread

Panera

  • Copy text from a business card

Business Card

  • Judge entity relatedness

Angelina Jolie

Increasing the quality of your judgments

So what will the quality of your judgments look like?

If you don’t do anything special, then your output will contain a lot of garbage. I’ve thrown out entire tasks because of scammers who spend less than 5 seconds on each judgment (Amazon records the time each worker spends) and submit random clicks as output (e.g., labeling Nike as a food category).

Luckily, Amazon provides a few worker filters:

  • You can require that only Turkers who have achieved at least a (say) 99% approval rate over at least 10,000 past judgments are allowed to work on your tasks. (If you see bad judgments from a worker, you can reject them and get your money back.)
  • About a year ago, Amazon launched a “categorization masters” and “photo masters” program, which allows only masters to work on your HITs. According to a chat with a member of the MTurk team, Amazon assigns these master badges by creating special tasks (anonymously, and for which Amazon already knows the answer) and measuring the quality of each worker’s response to these tasks.
  • You can also create a custom filter and handpick who gets allowed to work for you, or set up a qualification test that workers are required to take before working on your tasks.

I’ve used different combinations of the first two filters, and gotten excellent results – compared to in-house judges I’ve worked with in person and paid \$20-30 an hour, the judgments on Mechanical Turk have been just as good and sometimes even better. (I often ask my judges to explain their judgments, which makes it easy to detect high quality workers.) For example, here are some typical responses I’ve received when asking judges to determine which of two products a given Twitter user might be more interested in:

The user is a female obsessed with Twilight Movies and Rob Pattinson. She tweets and follows both subjects. Movie tickets would be interesting to her.

He doesn’t seem to play video games, and he doesn’t seem technical enough to care about running Windows on a Mac. Neither of these products are a good fit for him.

In fact, I’ll frequently also get emails from Turkers giving me suggestions on how to improve my tasks or asking how they can do them better. (Amazon allows workers to email you. The only way for the requester to initiate a conversation, though, is by paying the worker a small bonus for excellent work, and including a message with the bonus.) Here are excerpts from some emails I’ve received:

I just wanted to check in to be sure that once I figured things out that I was doing your hits the way you intended them to be done. I want to be sure that you are getting the data that you need from the work. Please do not hesitate to let me know if there is anything that I can do to improve the way I am working your HITs. This is my full time job while I stay at home with my kids, so I like to check with the requesters to be sure that I am putting out the work that they are looking for. Any suggestion is welcome.

Frankly, lingerie, makeup, and feminine hygiene are the only male-exclusionary topics I can think of, and it feels knee-jerk sexist to mark any sports-related site for men. That said, should I hew more closely to gender stereotypes or be politically correct? (from a HIT where I was gathering gender classification data)

I do think a few more categories are needed but keeping the number down overall is good - 50 or 60 to choose from can be overwhelming and not worth the time. I may have mentioned I never used the Photography one (and I did a lot of those) so that is a good candidate for elimination.

That said, despite the approval rate filters and masters badges, I do occasionally get a couple scammers in the mix (or even just judges who don’t produce as excellent work). So one suggestion is to run an initial task with these filters applied, find the workers with the best quality, and from then on use a custom pool containing these Turkers alone.

How much to pay

So how much should you pay your workers?

New Turkers and Turkers who don’t meet the strict filters can be paid less, but most of my high-quality workers expect to make about \$8-14 an hour. (You can only specify how much you pay per judgment, but Amazon will tell you how long each item ends up taking on average.) For example, here’s what several Turkers said when I asked them directly how much they make:

Most of the work I do is either writing or editing. When editing work is available, I make \$15-20 per hour. I’m a slower writer than an editor, so I average \$10-12 per hour with writing. I also judge sentiments of messages and average about \$8 per hour with that type of work. I would like to average a minimum of no less than \$8 per hour.

A big factor in deciding to do a task or not comes from the time investment involved. The two big time sinks are either googling/searching/having to go to another site, or having to write something as part of your reply. If I remember correctly, a) your tasks did require looking at another page but either the link was right there OR, better yet, you had that page embedded in the HIT itself so clicking out of the window wasn’t necessary (turkers get very excited about this), and b) the quality of the pay rate was such that it easily outweighed the time it took to leave an explanatory comment.

For me at least, those things can’t be underestimated. Sure, your tasks may be a little time-consuming, but I figure a good task is one I can make 10 to 12 cents a minute on. Your task might take longer but I’m definitely still coming out ahead.

From my own experience, I work hardest and best for a requester that pays well and doesn’t reject (or at least seems to have a reason for a rejection when it happens). If a requester is going to accept the majority of my work, I as a worker feel that obligates me to provide them with the best quality possible. Similarly, although I’m conscientious with all tasks, I’m especially so with a high-paying one: it would be easy to take advantage of a high-pay, low-reject requester - which would ultimately lead them to either lower the pay or change the acceptance criteria. I don’t want that!! That’s the kind of requester I want around. I’m grateful for high pay and fair policies and that kind of requester gets an above-and-beyond effort from me.

For the pay, I have worked on master’s hits that have ranged from \$6-\$16 per hour. Averaging them out works out to around \$9, which isn’t a bad wage. I have two requesters that I work for that don’t use the master qualification but instead have closed qualifications that they’ve assigned to their best workers. Those tasks pay between \$12 and \$15 per hour, so no matter what I’m working on I will stop what I’m doing to work on them. The best paying hits are always done very quickly, so most of the time if you check out mturk and look at the tasks available you won’t get a very good idea of average pay because the terrible paying hits will sit on the board until they expire.

Obviously, this is self-reported, so there’s a strong possibility that the Turkers are artificially inflating their numbers. But this does match what I’ve been told by a manager on the MTurk team, as well as what Turkers self-report on TurkerNation.

A good suggestion regarding pay is to start at the lower end of the scale, around \$6-8 per hour, and increase that until you get both the quality and speed you want.
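
To make the arithmetic concrete: since you set pay per judgment, if your items take around 30 seconds each on average, a \$9/hour target works out to roughly 7-8 cents per judgment (\$9 / 3600 seconds × 30 seconds ≈ \$0.075).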

Other design tips

Interestingly, according to what Turkers (see the excerpts above) and my Amazon contact say, as well as other research I’ve seen (e.g., this paper), pay is not at the absolute forefront of Turkers’ minds when they decide what to work on. Instead, they focus more on requesters they’ve already established a good relationship with, HITs with many items (so they can quickly settle into a rhythm), HITs they know they’ll be paid for (so they’re not worried about rejections), and HITs that they generally enjoy doing more.

So here are a couple suggestions:

  • If your task is hard and there’s no clearly correct answer, even good Turkers might be worried that you’ll reject their judgments (and so they might skip over your HIT). So make it clear in your instructions that you won’t reject any judgments, or that you won’t reject any judgments with an honest effort.
  • Make your instructions collapsible, or link to them in a separate site. Scrolling is kind of annoying on Mechanical Turk (I know – I’ve tried working on HITs myself), so you should minimize the amount workers have to scroll. Ideally, everything fits on a single screen. Plus, the less workers have to scroll, the faster your HITs will get done. For example, here are excerpts from emails I received from two different Turkers when I first started out:

I have a suggestion that would really make things go a little quicker. Is there anyway you could script the twitter link to automatically open in a new tab? It amazes me how much it can slow you down to have to right click and open it manually in another tab, and when you forget, you have to take a few more steps to get back to where you were.

It would be amazing if the Twitter account could be on the same page instead of having to click to get to another screen - the work would go *exponentially* faster! Overall, I’m enjoying them - and I’m not the only one. Despite your stringent requirements these are disappearing pretty quickly.

  • Introduce yourself on TurkerNation, a forum where Turkers and Requesters go to talk about Mechanical Turk. This helps establish your reputation as a good requester who listens to feedback, which will make good Turkers want to work for you. (More on this below.)
  • Approve judgments quickly: Turkers want money now instead of money later. For example, one worker told me:

Quick approval is important, too. Watching that money pile up is a serious motivator; I’ll sometimes choose a lower-paying task that approves in close to real time over a higher-paying one that won’t pay out for several days.

When using my trusted set of workers, I let Amazon auto-approve all judgments within a couple hours.

Reputation

Reputation is pretty important. Turkers love requesters who take the time to respond to emails and incorporate suggestions. Excerpts from emails I’ve received:

I LOVE it when requesters care enough to ask the opinion of us lowly turkers and am more than willing to take a few minutes to help them with anything. I look forward to seeing what you cook up!

Thanks for taking the time to try to make your hits better in both pay and design. It’s great to see a requester that actually cares, when most don’t. If you have any other questions for me, feel free to ask. I hope to work for you again soon.

I’ve gotten great suggestions from a lot of Turkers (sometimes, when launching a new type of experiment, I’ll do a quick trial run in order to get some fast feedback before spending more time on the HIT design), and I suspect it’s partly because I’ve taken the time to connect with my workers.

So, as suggested above, one way of quickly garnering some goodwill when you’re first getting started is to make a post introducing yourself on TurkerNation. (There’s a sub-forum devoted to this exact purpose, in fact.)

This is useful because workers will often start new threads recommending particular requesters and encouraging other Turkers to work for them. In the amusing thread praising me, for example, one worker mentioned that she’d been hesitant to work on my HITs until she saw the post confirming I was a good requester.

Also, many Turkers mention that they always refer to Turkopticon, a Firefox extension that displays ratings of requesters by other Turkers, before accepting work from a requester they haven’t worked for before.

This is what Turkopticon looks like:

Jim Young

ProductRNR

Here are some comments about Turkopticon on TurkerNation:

I think that it is well worth taking the time to check reputation of requesters via TurkOpticon and/or in this forum. Checking first substantially minimizes your risk of rejection, of being blocked, and of being paid sub-human wages.

Blindly doing hits for requesters that were never heard of before got me with a pretty bad approval rate when I first started turking. After that, I rigorously inspect every requester that doesn’t have any ratings on Turkopticon. Actually, because of that little add-on I’ve been able to maintain a steady 98-99% approval rate ever since I began using it.

Waiting Time

So how long does it take to get judgments? I’ve restricted the available worker pool pretty strongly to ensure high quality, and it’s still only taken a few hours to get a thousand judgments.

That’s pretty awesome. I’ve worked a lot with human evaluation systems before, but always using a small in-house set of judges – and what with constraints on when those judges were available, how much they were able to work each week, and other tasks taking higher priority, it’d invariably take at least a few days before I’d receive any useful data back.

Getting thousands of judgments in a couple hours means I can launch an MTurk task when I leave for work in the morning and have it done before lunch, which makes experimenting with a lot of different ideas much faster and easier.

Scale

So how many judgments can you actually get before you run out of workers? I’m still a small fish in the MTurk system, but I’m told by my MTurk contact at Amazon that there are companies getting over a million judgments each month.

I also asked my pool of workers how much they’re available to work, in case I would need to scale up to more judgments later on, and here are some samples from what they said:

Typically, I work a total of 20-25 hours per week for a small select group of requesters. I could put in at least 20 hours per week for you alone if you were to make a custom qualification for me. If I know that I can continue to do exemplary work beyond 20 hours, I would be willing to put forth more hours of work. I want to make sure that you are getting the quality of work that you need.

On a day when I don’t have those other assignments, I’d guess I’m turking 5 to 7 hours a day (including weekends). I like to look for a large batch of HITs (preferably in the thousands) so that I can settle into a groove of being able to do them fairly quickly and once I find something like that I can happily settle in for several hours at a time.

I spend more time than the average person on mturk. I log on at about 5:30 AM and am constantly checking for work throughout the day. If the work is available, I will spend until 9PM working. Granted, I do have to take some breaks throughout the day to take care of my 3 year old, but for the most part, I am doing my best to earn while the hits are posted. If I take any time off, it is on the weekend (if I reach my earning goals for the week).

Of course, how much I can work varies. My main source of income is transcription for a market research company and mturk fills in my downtime. If I have an audio file from them, that gets my attention. If not, I’m on mturk. As a single mother working from home, I love the flexibility.

End

I’ll end with a couple other notes.

  • How do other companies use human evaluation systems? Google and Bing use human judgments in their search metrics, though I think they use an in-house set of judges rather than Mechanical Turk. I’ve heard Aardvark and Quora used Mechanical Turk to seed answers when they first launched their sites. There’s also a nice set of case studies here (search for the “On-Demand Workforce” section); in particular, Knewton’s use of MTurk for performance and QA testing is pretty interesting.
  • I’ve described one way of finding good workers, namely, using the filters Amazon provides. Another way could be to build a reputation system yourself, perhaps using an EM-style algorithm to determine judge quality (see the sketch after this list).
  • Crowdflower is another crowdsourcing system. There are a couple differences with MTurk:
    • Crowdflower’s worker pool consists of about 20 different sources, including Mechanical Turk, as well as sources like TrialPay (people can opt to complete a MTurk task to receive some kind of TrialPay deal).
    • Crowdflower offers both a self-serve platform (like MTurk), as well as a more enterprise-centric solution (where you work directly with a Crowdflower employee). The enterprise offering is pretty nice, since that means Crowdflower will take care of the lower-level details for you (like actually designing and creating the job), and they can offer suggestions for designing the HIT based on their experience.
    • Crowdflower provides the option of adding gold standard judgments to your task (items where you provide a golden answer, which are then randomly shown to workers; these are then used to monitor judges) and they try to automatically determine judge quality and item accuracy for you (e.g., by having each item judged by three different workers).
  • An excellent crowdsourcing resource is CrowdScope. I also like the Deneme blog (though it hasn’t been updated in a while) for a lot of fun experiments. Panos Ipeirotis’ blog has good information as well.
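
To make the reputation-system idea concrete, here’s a small Python sketch of an EM-style loop in the spirit of Dawid and Skene: alternate between estimating each item’s answer by an accuracy-weighted vote, and re-estimating each judge’s accuracy from how often they agree with those answers. The data format and the initial accuracy guess are conventions I made up for this sketch, not anything MTurk provides.

from collections import defaultdict

def estimate_judge_quality(judgments, num_iters=20):
    # `judgments` is a list of (judge, item, label) triples (a made-up format).
    # Returns estimated per-judge accuracies and per-item consensus answers.
    judges = set(judge for (judge, item, label) in judgments)
    accuracy = dict((judge, 0.8) for judge in judges)  # optimistic initial guess

    for _ in range(num_iters):
        # E-step: estimate each item's answer by an accuracy-weighted vote.
        votes = defaultdict(lambda: defaultdict(float))
        for judge, item, label in judgments:
            votes[item][label] += accuracy[judge]
        answers = dict((item, max(labels, key=labels.get))
                       for item, labels in votes.items())

        # M-step: re-estimate each judge's accuracy as the fraction of their
        # judgments that agree with the current consensus answers.
        num_correct = defaultdict(float)
        num_total = defaultdict(float)
        for judge, item, label in judgments:
            num_total[judge] += 1
            if answers[item] == label:
                num_correct[judge] += 1
        accuracy = dict((judge, num_correct[judge] / num_total[judge])
                        for judge in judges)

    return accuracy, answers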

Infinite Mixture Models with Nonparametric Bayes and the Dirichlet Process

Imagine you’re a budding chef. A data-curious one, of course, so you start by taking a set of foods (pizza, salad, spaghetti, etc.) and ask 10 friends how much of each they ate in the past day.

Your goal: to find natural groups of foodies, so that you can better cater to each cluster’s tastes. For example, your fratboy friends might love wings and beer, your anime friends might love soba and sushi, your hipster friends probably dig tofu, and so on.

So how can you use the data you’ve gathered to discover different kinds of groups?

Clustering Example

One way is to use a standard clustering algorithm like k-means or Gaussian mixture modeling (see this previous post for a brief introduction). The problem is that these both assume a fixed number of clusters, which they need to be told to find. There are a couple methods for selecting the number of clusters to learn (e.g., the gap and prediction strength statistics), but the problem is a more fundamental one: most real-world data simply doesn’t have a fixed number of clusters.

That is, suppose we’ve asked 10 of our friends what they ate in the past day, and we want to find groups of eating preferences. There’s really an infinite number of foodie types (carnivore, vegan, snacker, Italian, healthy, fast food, heavy eaters, light eaters, and so on), but with only 10 friends, we simply don’t have enough data to detect them all. (Indeed, we’re limited to 10 clusters!) So whereas k-means starts with the incorrect assumption that there’s a fixed, finite number of clusters that our points come from, no matter how much data we feed it, what we’d really like is a method positing an infinite number of hidden clusters that naturally arise as we ask more friends about their food habits. (For example, with only 2 data points, we might not be able to tell the difference between vegans and vegetarians, but with 200 data points, we probably could.)

Luckily for us, this is precisely the purview of nonparametric Bayes.*

*Nonparametric Bayes refers to a class of techniques that allow some parameters to change with the data. In our case, for example, instead of fixing the number of clusters to be discovered, we allow it to grow as more data comes in.

A Generative Story

Let’s describe a generative model for finding clusters in any set of data. We assume an infinite set of latent groups, where each group is described by some set of parameters. For example, each group could be a Gaussian with a specified mean $\mu_i$ and standard deviation $\sigma_i$, and these group parameters themselves are assumed to come from some base distribution $G_0$. Data is then generated in the following manner:

  • Select a cluster.
  • Sample from that cluster to generate a new point.

(Note the resemblance to a finite mixture model.)

For example, suppose we ask 10 friends how many calories of pizza, salad, and rice they ate yesterday. Our groups could be:

  • A Gaussian centered at (pizza = 5000, salad = 100, rice = 500) (i.e., a pizza lovers group).
  • A Gaussian centered at (pizza = 100, salad = 3000, rice = 1000) (maybe a vegan group).
  • A Gaussian centered at (pizza = 100, salad = 100, rice = 10000) (definitely Asian).

When deciding what to eat when she woke up yesterday, Alice could have thought girl, I’m in the mood for pizza and her food consumption yesterday would have been a sample from the pizza Gaussian. Similarly, Bob could have spent the day in Chinatown, thereby sampling from the Asian Gaussian for his day’s meals. And so on.
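
Just to make the story concrete, here’s a tiny Python sketch of this generative process for the three groups above (the means, the shared standard deviation, and the number of friends are all made-up assumptions for illustration):

import random

# Made-up group parameters: mean calories of (pizza, salad, rice) per group.
GROUPS = [
    ("pizza lovers", (5000, 100, 500)),
    ("vegans",       (100, 3000, 1000)),
    ("asian food",   (100, 100, 10000)),
]
STDDEV = 100.0  # an assumed standard deviation, shared across groups

def generate_friend():
    name, mean = random.choice(GROUPS)  # step 1: select a cluster
    # Step 2: sample from that cluster to generate a new point.
    point = tuple(random.gauss(m, STDDEV) for m in mean)
    return name, point

for _ in range(10):
    print(generate_friend())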

The big question, then, is: how do we assign each friend to a group?

Assigning Groups

Chinese Restaurant Process

One way to assign friends to groups is to use a Chinese Restaurant Process. This works as follows: Imagine a restaurant where all your friends went to eat yesterday…

  • Initially the restaurant is empty.
  • The first person to enter (Alice) sits down at a table (selects a group). She then orders food for the table (i.e., she selects parameters for the group); everyone else who joins the table will then be limited to eating from the food she ordered.
  • The second person to enter (Bob) sits down at a table. Which table does he sit at? With probability $\alpha / (1 + \alpha)$ he sits down at a new table (i.e., selects a new group) and orders food for the table; with probability $1 / (1 + \alpha)$ he sits with Alice and eats from the food she’s already ordered (i.e., he’s in the same group as Alice).
  • The (n+1)-st person sits down at a new table with probability $\alpha / (n + \alpha)$, and at table k with probability $n_k / (n + \alpha)$, where $n_k$ is the number of people currently sitting at table k.

Note a couple things:

  • The more people (data points) there are at a table (cluster), the more likely it is that people (new data points) will join it. In other words, our groups satisfy a rich get richer property.
  • There’s always a small probability that someone joins an entirely new table (i.e., a new group is formed).
  • The probability of a new group depends on $\alpha$, so we can think of $\alpha$ as a dispersion parameter: the lower $\alpha$ is, the more tightly clustered our data points are; the higher it is, the more clusters we see in any finite set of points.

(Also notice the resemblance between table selection probabilities and a Dirichlet distribution…)

Just to summarize, given n data points, the Chinese Restaurant Process specifies a distribution over partitions (table assignments) of these points. We can also generate parameters for each partition/table from a base distribution $G_0$ (for example, each table could represent a Gaussian whose mean and standard deviation are sampled from $G_0$), though to be clear, this is not part of the CRP itself.

Code

Since code makes everything better, here’s some Ruby to simulate a CRP:

Chinese Restaurant Process chinese_restaurant_process.rb
# Generate table assignments for `num_customers` customers, according to
# a Chinese Restaurant Process with dispersion parameter `alpha`.
#
# returns an array of integer table assignments
def chinese_restaurant_process(num_customers, alpha)
 return [] if num_customers <= 0

 table_assignments = [1] # first customer sits at table 1
 next_open_table = 2 # index of the next empty table

 # Now generate table assignments for the rest of the customers.
 1.upto(num_customers - 1) do |i|
   if rand < alpha.to_f / (alpha + i)
     # Customer sits at new table.
     table_assignments << next_open_table
     next_open_table += 1
   else
     # Customer sits at an existing table.
     # He chooses which table to sit at by giving equal weight to each
     # customer already sitting at a table. 
     which_table = table_assignments[rand(table_assignments.size)]
     table_assignments << which_table
   end
 end

 table_assignments
end

And here’s some sample output:

Chinese Restaurant Process chinese_restaurant_process.rb
> chinese_restaurant_process(num_customers = 10, alpha = 1)
1, 2, 3, 4, 3, 3, 2, 1, 4, 3 # table assignments from run 1
1, 1, 1, 1, 1, 1, 2, 2, 1, 3 # table assignments from run 2
1, 2, 2, 1, 3, 3, 2, 1, 3, 4 # table assignments from run 3

> chinese_restaurant_process(num_customers = 10, alpha = 3)
1, 2, 1, 1, 3, 1, 2, 3, 4, 5
1, 2, 3, 3, 4, 3, 4, 4, 5, 5
1, 1, 2, 3, 1, 4, 4, 3, 1, 1

> chinese_restaurant_process(num_customers = 10, alpha = 5)
1, 2, 1, 3, 4, 5, 6, 7, 1, 8
1, 2, 3, 3, 4, 5, 6, 5, 6, 7
1, 2, 3, 4, 5, 6, 2, 7, 2, 1

Notice that as we increase $\alpha$, the number of distinct tables increases as well.
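
In fact, we can compute the expected number of occupied tables exactly: the $(i+1)$-st customer starts a new table with probability $\alpha / (\alpha + i)$, so after $n$ customers the expected number of tables is $\sum_{i=0}^{n-1} \alpha / (\alpha + i) \approx \alpha \log(1 + n / \alpha)$. That is, the number of tables grows logarithmically with the number of customers.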

Polya Urn Model

Another method for assigning friends to groups is to follow the Polya Urn Model. This is basically the same model as the Chinese Restaurant Process, just with a different metaphor.

  • We start with an urn containing $\alpha G_0(x)$ balls of “color” $x$, for each possible value of $x$. ($G_0$ is our base distribution, and $G_0(x)$ is the probability of sampling $x$ from $G_0$). Note that these are possibly fractional balls.
  • At each time step, draw a ball from the urn, note its color, and then drop both the original ball plus a new ball of the same color back into the urn.

Note the connection between this process and the CRP: balls correspond to people (i.e., data points), colors correspond to table assignments (i.e., clusters), alpha is again a dispersion parameter (put differently, a prior), colors satisfy a rich-get-richer property (since colors with many balls are more likely to get drawn), and so on. (Again, there’s also a connection between this urn model and the urn model for the (finite) Dirichlet distribution…)

To be precise, the difference between the CRP and the Polya Urn Model is that the CRP specifies only a distribution over partitions (i.e., table assignments), but doesn’t assign parameters to each group, whereas the Polya Urn Model does both.

Code

Again, here’s some code for simulating a Polya Urn Model:

Polya Urn Model polya_urn_model.rb
# Draw `num_balls` colored balls according to a Polya Urn Model
# with a specified base color distribution and dispersion parameter
# `alpha`.
#
# returns an array of ball colors
def polya_urn_model(base_color_distribution, num_balls, alpha)
  return [] if num_balls <= 0

  balls_in_urn = []
  0.upto(num_balls - 1) do |i|
    if rand < alpha.to_f / (alpha + balls_in_urn.size)
      # Draw a new color, put a ball of this color in the urn.
      new_color = base_color_distribution.call
      balls_in_urn << new_color
    else
      # Draw a ball from the urn, add another ball of the same color.
      ball = balls_in_urn[rand(balls_in_urn.size)]
      balls_in_urn << ball
    end
  end

  balls_in_urn
end

And here’s some sample output, using a uniform distribution over the unit interval as the color distribution to sample from:

Polya Urn Model polya_urn_model.rb
> unit_uniform = lambda { (rand * 100).to_i / 100.0 }

> polya_urn_model(unit_uniform, num_balls = 10, alpha = 1)
0.27, 0.89, 0.89, 0.89, 0.73, 0.98, 0.43, 0.98, 0.89, 0.53 # colors in the urn from run 1
0.26, 0.26, 0.46, 0.26, 0.26, 0.26, 0.26, 0.26, 0.26, 0.85 # colors in the urn from run 2
0.96, 0.87, 0.96, 0.87, 0.96, 0.96, 0.87, 0.96, 0.96, 0.96 # colors in the urn from run 3

Code, Take 2

Here’s the same code for a Polya Urn Model, but in R:

Polya Urn Model polya_urn_model.R
# Return a vector of `num_balls` ball colors according to a Polya Urn Model
# with dispersion `alpha`, sampling from a specified base color distribution.
polya_urn_model = function(base_color_distribution, num_balls, alpha) {
  balls = c()

  for (i in 1:num_balls) {
    if (runif(1) < alpha / (alpha + length(balls))) {
      # Add a new ball color.
      new_color = base_color_distribution()
      balls = c(balls, new_color)
    } else {
      # Pick out a ball from the urn, and add back a
      # ball of the same color.
      ball = balls[sample(1:length(balls), 1)]
      balls = c(balls, ball)
    }
  }

  balls
}

Here are some sample density plots of the colors in the urn, when using a unit normal as the base color distribution:

Polya Urn Model, Alpha = 1

Polya Urn Model, Alpha = 5

Polya Urn Model, Alpha = 25

Polya Urn Model, Alpha = 50

Notice that as alpha increases (i.e., we sample more new ball colors from our base; i.e., as we place more weight on our prior), the colors in the urn tend to a unit normal (our base color distribution).

And here are some sample plots of points generated by the urn, for varying values of alpha:

  • Each color in the urn is sampled from a uniform distribution over [0,10]x[0,10] (i.e., a [0, 10] square).
  • Each group is a Gaussian with standard deviation 0.1 and mean equal to its associated color, and these Gaussian groups generate points.

Alpha 0.1

Alpha 0.2

Alpha 0.3

Alpha 0.5

Alpha 1.0

Notice that the points clump together in fewer clusters for low values of alpha, but become more dispersed as alpha increases.

Stick-Breaking Process

Imagine running either the Chinese Restaurant Process or the Polya Urn Model without stopping. For each group $i$, this gives the proportion $w_i$ of points that fall into group $i$.

So instead of running the CRP or Polya Urn model to figure out these proportions, can we simply generate them directly?

This is exactly what the Stick-Breaking Process does:

  • Start with a stick of length one.
  • Generate a random variable $\beta_1 \sim Beta(1, \alpha)$. By the definition of the Beta distribution, this will be a real number between 0 and 1, with expected value $1 / (1 + \alpha)$. Break off the stick at $\beta_1$; $w_1$ is then the length of the stick on the left.
  • Now take the stick to the right, and generate $\beta_2 \sim Beta(1, \alpha)$. Break off the remaining stick a fraction $\beta_2$ of the way along it. Again, $w_2$ is the length of the piece to the left, i.e., $w_2 = (1 - \beta_1) \beta_2$.
  • And so on.

Thus, the Stick-Breaking process is simply the CRP or Polya Urn Model from a different point of view. For example, assigning customers to table 1 according to the Chinese Restaurant Process is equivalent to assigning customers to table 1 with probability $w_1$.

Code

Here’s some R code for simulating a Stick-Breaking process:

Stick-Breaking Process stick_breaking_process.R
# Return a vector of weights drawn from a stick-breaking process
# with dispersion `alpha`.
#
# Recall that the kth weight is
#   w_k = (1 - \beta_1) * (1 - \beta_2) * ... * (1 - \beta_{k-1}) * \beta_k
# where each \beta_i is drawn from a Beta distribution
#   \beta_i ~ Beta(1, \alpha)
stick_breaking_process = function(num_weights, alpha) {
  betas = rbeta(num_weights, 1, alpha)
  remaining_stick_lengths = c(1, cumprod(1 - betas))[1:num_weights]
  weights = remaining_stick_lengths * betas
  weights
}

And here’s some sample output:

Stick-Breaking Process, alpha = 1

Stick-Breaking Process, alpha = 3

Stick-Breaking Process, alpha = 5

Notice that for low values of alpha, the stick weights are concentrated on the first few weights (meaning our data points are concentrated on a few clusters), while the weights become more evenly dispersed as we increase alpha (meaning we posit more clusters in our data points).

Dirichlet Process

Suppose we run a Polya Urn Model several times, where we sample colors from a base distribution $G_0$. Each run produces a distribution of colors in the urn (say, 5% blue balls, 3% red balls, 2% pink balls, etc.), and the distribution will be different each time (for example, 5% blue balls in run 1, but 1% blue balls in run 2).

For example, let’s look again at the plots from above, where I generated samples from a Polya Urn Model with the standard unit normal as the base distribution:

Polya Urn Model, Alpha = 1

Polya Urn Model, Alpha = 5

Polya Urn Model, Alpha = 25

Polya Urn Model, Alpha = 50

Each run of the Polya Urn Model produces a slightly different distribution, though each is “centered” in some fashion around the standard Gaussian I used as base. In other words, the Polya Urn Model gives us a distribution over distributions (we get a distribution of ball colors, and this distribution of colors changes each time) – and so we finally get to the Dirichlet Process.

Formally, given a base distribution $G_0$ and a dispersion parameter $\alpha$, a sample from the Dirichlet Process $DP(G_0, \alpha)$ is a distribution $G \sim DP(G_0, \alpha)$. This sample $G$ can be thought of as a distribution of colors in a single simulation of the Polya Urn Model; sampling from $G$ gives us the balls in the urn.

So here’s the connection between the Chinese Restaurant Process, the Polya Urn Model, the Stick-Breaking Process, and the Dirichlet Process:

  • Dirichlet Process: Suppose we want samples $x_i \sim G$, where $G$ is a distribution sampled from the Dirichlet Process $G \sim DP(G_0, \alpha)$.
  • Polya Urn Model: One way to generate these values $x_i$ would be to take a Polya Urn Model with color distribution $G_0$ and dispersion $\alpha$. ($x_i$ would be the color of the ith ball in the urn.)
  • Chinese Restaurant Process: Another way to generate $x_i$ would be to first assign tables to customers according to a Chinese Restaurant Process with dispersion $\alpha$. Every customer at a given table would then be given the same value (color) sampled from $G_0$. ($x_i$ would be the value given to the ith customer; it can also be thought of as the food at the ith customer’s table, i.e., as that table’s parameters.)
  • Stick-Breaking Process: Finally, we could generate weights $w_k$ according to a Stick-Breaking Process with dispersion $\alpha$. Next, we would give each weight $w_k$ a value (or color) $v_k$ sampled from $G_0$. Finally, we would assign $x_i$ to value (color) $v_k$ with probability $w_k$.
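
As a quick illustration of that last route, here’s a Python sketch that samples from a Dirichlet Process via stick-breaking: generate the weights $w_k$, attach a value $v_k \sim G_0$ to each, and draw each $x_i = v_k$ with probability $w_k$. (Truncating the infinite stick at a fixed number of weights is purely a simulation convenience.)

import random

def stick_breaking_weights(num_weights, alpha):
    # Truncated stick-breaking: break `num_weights` pieces off a unit stick.
    weights, remaining = [], 1.0
    for _ in range(num_weights):
        beta = random.betavariate(1, alpha)
        weights.append(remaining * beta)
        remaining *= 1 - beta
    return weights

def sample_from_dp(base_distribution, alpha, num_samples, truncation=100):
    weights = stick_breaking_weights(truncation, alpha)
    values = [base_distribution() for _ in range(truncation)]  # v_k ~ G_0
    # Each x_i takes value v_k with probability proportional to w_k.
    return random.choices(values, weights=weights, k=num_samples)

# Example: a standard normal as the base distribution G_0.
print(sample_from_dp(lambda: random.gauss(0, 1), alpha=5, num_samples=10))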

Recap

Let’s summarize what we’ve discussed so far.

We have a bunch of data points $p_i$ that we want to cluster, and we’ve described four essentially equivalent generative models that allow us to describe how each cluster and point could have arisen.

In the Chinese Restaurant Process:

  • We generate table assignments $g_1, \ldots, g_n \sim CRP(\alpha)$ according to a Chinese Restaurant Process. ($g_i$ is the table assigned to datapoint $i$.)
  • We generate table parameters $\phi_1, \ldots, \phi_n \sim G_0$ according to the base distribution $G_0$, where $\phi_k$ is the parameter for the kth distinct group.
  • Given table assignments and table parameters, we generate each datapoint $p_i \sim F(\phi_{g_i})$ from a distribution $F$ with the specified table parameters. (For example, $F$ could be a Gaussian, and $\phi_i$ could be a parameter vector specifying the mean and standard deviation).

In the Polya Urn Model:

  • We generate colors $\phi_1, \ldots, \phi_n \sim Polya(G_0, \alpha)$ according to a Polya Urn Model. ($\phi_i$ is the color of the ith ball.)
  • Given ball colors, we generate each datapoint $p_i \sim F(\phi_i)$.

In the Stick-Breaking Process:

  • We generate group probabilities (stick lengths) $w_1, \ldots, w_{\infty} \sim Stick(\alpha)$ according to a Stick-Breaking process.
  • We generate group parameters $\phi_1, \ldots, \phi_{\infty} \sim G_0$ from $G_0$, where $\phi_k$ is the parameter for the kth distinct group.
  • We generate group assignments $g_1, \ldots, g_n \sim Multinomial(w_1, \ldots, w_{\infty})$ for each datapoint.
  • Given group assignments and group parameters, we generate each datapoint $p_i \sim F(\phi_{g_i})$.

In the Dirichlet Process:

  • We generate a distribution $G \sim DP(G_0, \alpha)$ from a Dirichlet Process with base distribution $G_0$ and dispersion parameter $\alpha$.
  • We generate group-level parameters $x_i \sim G$ from $G$, where $x_i$ is the group parameter for the ith datapoint. (Note: this is not the same as $\phi_i$. $x_i$ is the parameter associated to the group that the ith datapoint belongs to, whereas $\phi_k$ is the parameter of the kth distinct group.)
  • Given group-level parameters $x_i$, we generate each datapoint $p_i \sim F(x_i)$.

Also, remember that each model naturally allows the number of clusters to grow as more points come in.

Inference in the Dirichlet Process Mixture

So we’ve described a generative model that allows us to calculate the probability of any particular set of group assignments to data points, but we haven’t described how to actually learn a good set of group assignments.

Let’s briefly do this now. Very roughly, the Gibbs sampling approach works as follows:

  • Take the set of data points, and randomly initialize group assignments.
  • Pick a point. Fix the group assignments of all the other points, and assign the chosen point a new group (which can be either an existing cluster or a new cluster) with a CRP-ish probability (as described in the models above) that depends on the group assignments and values of all the other points.
  • We will eventually converge on a good set of group assignments, so repeat the previous step until happy.
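
Here’s a rough Python sketch of this Gibbs sampler for 1-D data, under some convenient assumptions of mine: Gaussian clusters with a known, shared variance, and a Gaussian base distribution over cluster means (so the means can be integrated out analytically). This is a sketch of the shape of the algorithm, not a tuned implementation.

import math
import random

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def crp_gibbs(data, alpha, mu0=0.0, var0=10.0, var=1.0, num_iters=100):
    # `data` is a list of 1-D points; mu0 and var0 parameterize the
    # Normal(mu0, var0) base distribution over cluster means, and `var`
    # is the assumed within-cluster variance.
    assignments = [0] * len(data)  # start with everyone in one cluster

    for _ in range(num_iters):
        for i, x in enumerate(data):
            assignments[i] = None  # remove point i from its cluster
            clusters = {}
            for j, z in enumerate(assignments):
                if z is not None:
                    clusters.setdefault(z, []).append(data[j])

            labels, probs = [], []
            for z, points in clusters.items():
                # Posterior predictive density of x under an existing cluster,
                # with the cluster mean integrated out.
                n, xbar = len(points), sum(points) / len(points)
                post_var = 1.0 / (1.0 / var0 + n / var)
                post_mean = post_var * (mu0 / var0 + n * xbar / var)
                labels.append(z)
                probs.append(n * normal_pdf(x, post_mean, post_var + var))

            # Or start a brand new cluster, with probability proportional to alpha.
            labels.append(max(clusters, default=-1) + 1)
            probs.append(alpha * normal_pdf(x, mu0, var0 + var))

            assignments[i] = random.choices(labels, weights=probs)[0]

    return assignments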

For more details, this paper provides a good description. Philip Resnik and Eric Hardisty also have a friendlier, more general description of Gibbs sampling (plus an application to naive Bayes) here.

Fast Food Application: Clustering the McDonald’s Menu

Finally, let’s show an application of the Dirichlet Process Mixture. Unfortunately, I didn’t have a data set of people’s food habits offhand, so instead I took this list of McDonald’s foods and nutrition facts.

After normalizing each item to have an equal number of calories, and representing each item as a vector of (total fat, cholesterol, sodium, dietary fiber, sugars, protein, vitamin A, vitamin C, calcium, iron, calories from fat, saturated fat, trans fat, carbohydrates), I ran scikit-learn’s Dirichlet Process Gaussian Mixture Model to cluster the McDonald’s menu based on nutritional value.
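
For reference, here’s a rough sketch of what that call looks like in current versions of scikit-learn, where the Dirichlet Process mixture is exposed through BayesianGaussianMixture; the filename and parameter values below are placeholders, not the settings I actually used:

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# X: one row per menu item, one column per normalized nutrition dimension.
X = np.loadtxt("mcdonalds_nutrition.tsv")  # hypothetical file

# n_components is only a truncation level; the Dirichlet process prior
# decides how many of these components actually get used.
model = BayesianGaussianMixture(
    n_components=30,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,  # the dispersion parameter alpha
)
labels = model.fit(X).predict(X)
print(len(set(labels)), "clusters used")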

First, how does the number of clusters inferred by the Dirichlet Process mixture vary as we feed in more (randomly ordered) points?

Growth of Number of Clusters

As expected, the Dirichlet Process model discovers more and more clusters as more and more food items arrive. (And indeed, the number of clusters appears to grow logarithmically, which can in fact be proved.)

How many clusters does the mixture model infer from the entire dataset? Running the Gibbs sampler several times, we find that the number of clusters tends to be around 11:

Number of clusters

Let’s dive into one of these clusterings.

Cluster 1 (Desserts)

Looking at a sample of foods from the first cluster, we find a lot of desserts and dessert-y drinks:

  • Caramel Mocha
  • Frappe Caramel
  • Iced Hazelnut Latte
  • Iced Coffee
  • Strawberry Triple Thick Shake
  • Snack Size McFlurry
  • Hot Caramel Sundae
  • Baked Hot Apple Pie
  • Cinnamon Melts
  • Kiddie Cone
  • Strawberry Sundae

We can also look at the nutritional profile of some foods from this cluster (after z-scaling each nutrition dimension to have mean 0 and standard deviation 1):

Cluster 1

We see that foods in this cluster tend to be high in trans fat and low in vitamins, protein, fiber, and sodium.

Cluster 2 (Sauces)

Here’s a sample from the second cluster, which contains a lot of sauces:

  • Hot Mustard Sauce
  • Spicy Buffalo Sauce
  • Newman’s Own Low Fat Balsamic Vinaigrette

And looking at the nutritional profile of points in this cluster, we see that it’s heavy in sodium and fat:

Cluster 2

Cluster 3 (Burgers, Crispy Foods, High-Cholesterol)

The third cluster is very burgery:

  • Hamburger
  • Cheeseburger
  • Filet-O-Fish
  • Quarter Pounder with Cheese
  • Premium Grilled Chicken Club Sandwich
  • Ranch Snack Wrap
  • Premium Asian Salad with Crispy Chicken
  • Butter Garlic Croutons
  • Sausage McMuffin
  • Sausage McGriddles

It’s also high in fat and sodium, and low in carbs and sugar:

Cluster 3

Cluster 4 (Creamy Sauces)

Interestingly, even though we already found a cluster of sauces above, we discover another one as well. These sauces appear to be much more cream-based:

  • Creamy Ranch Sauce
  • Newman’s Own Creamy Caesar Dressing
  • Coffee Cream
  • Iced Coffee with Sugar Free Vanilla Syrup

Nutritionally, these sauces are higher in calories from fat, and much lower in sodium:

Cluster 4

Cluster 5 (Salads)

Here’s a salad cluster. A lot of salads also appeared in the third cluster (along with hamburgers and McMuffins), but that’s because those salads also all contained crispy chicken. The salads in this cluster are either crisp-free or have their chicken grilled instead:

  • Premium Southwest Salad with Grilled Chicken
  • Premium Caesar Salad with Grilled Chicken
  • Side Salad
  • Premium Asian Salad without Chicken
  • Premium Bacon Ranch Salad without Chicken

This is reflected in the higher content of iron, vitamin A, and fiber:

Cluster 5

Cluster 6 (More Sauces)

Again, we find another cluster of sauces:

  • Ketchup Packet
  • Barbeque Sauce
  • Chipotle Barbeque Sauce

These are still high in sodium, but much lower in fat compared to the other sauce clusters:

Cluster 6

Cluster 7 (Fruit and Maple Oatmeal)

Amusingly, fruit and maple oatmeal is in a cluster by itself:

  • Fruit & Maple Oatmeal

Cluster 7

Cluster 8 (Sugary Drinks)

We also get a cluster of sugary drinks:

  • Strawberry Banana Smoothie
  • Wild Berry Smoothie
  • Iced Nonfat Vanilla Latte
  • Nonfat Hazelnut
  • Nonfat Vanilla Cappuccino
  • Nonfat Caramel Cappuccino
  • Sweet Tea
  • Frozen Strawberry Lemonade
  • Coca-Cola
  • Minute Maid Orange Juice

In addition to high sugar content, this cluster is also high in carbohydrates and calcium, and low in fat.

Cluster 8

Cluster 9 (Breakfast Foods)

Here’s a cluster of high-cholesterol breakfast foods:

  • Sausage McMuffin with Egg
  • Sausage Burrito
  • Egg McMuffin
  • Bacon, Egg & Cheese Biscuit
  • McSkillet Burrito with Sausage
  • Big Breakfast with Hotcakes

Cluster 9

Cluster 10 (Coffee Drinks)

We find a group of coffee drinks next:

  • Nonfat Cappuccino
  • Nonfat Latte
  • Nonfat Latte with Sugar Free Vanilla Syrup
  • Iced Nonfat Latte

These are much higher in calcium and protein, and lower in sugar, than the other drink cluster above:

Cluster 10

Cluster 11 (Apples)

Here’s a cluster of apples:

  • Apple Dippers with Low Fat Caramel Dip
  • Apple Slices

Vitamin C, check.

Cluster 11

And finally, here’s an overview of all the clusters at once (using a different clustering run):

All Clusters

No More!

I’ll end with a couple notes:

  • Kevin Knight has a hilarious introduction to Bayesian inference that describes some applications of nonparametric Bayesian techniques to computational linguistics (though I don’t think he ever quite says “nonparametric Bayes” directly).
  • In the Chinese Restaurant Process, each customer sits at a single table. The Indian Buffet Process is an extension that allows customers to sample food from multiple tables (i.e., belong to multiple clusters).
  • The Chinese Restaurant Process, the Polya Urn Model, and the Stick-Breaking Process are all sequential models for generating groups: to figure out table parameters in the CRP, for example, you wait for customer 1 to come in, then customer 2, then customer 3, and so on. The equivalent Dirichlet Process, on the other hand, is a parallel model for generating groups: just sample $G \sim DP(G_0, \alpha)$, and then all your group parameters can be independently generated by sampling from $G$ at once. This duality is an instance of a more general phenomenon known as de Finetti’s theorem.

And that’s it.

Instant interactive visualization with d3 + ggplot2

It’s often easier to understand a chart than a table. So why is it still so hard to make a simple data graphic, and why am I still bombarded by mind-numbing reams of raw numbers?

(Yeah, I love ggplot2 to death. But sometimes I want a little more interaction, and sometimes all I want is to drag-and-drop and be done.)

So I’ve been experimenting with a small, ggplot2-inspired d3 app.

Simply drop a file, and bam! Instant scatterplot:

Swiss Roll B&W

But wait – that’s only 2 dimensions. You can add some more through color, size, and groups:

Swiss Roll Edit

(Click here to play with the data yourself.)

And you can easily switch which variables are getting plotted, and see all the information associated with each point.

Swiss Roll Pivot

(Same dataset, different aesthetic assignments.)

I’m thinking of adding more kinds of charts, support for categorical variables, more interactivity (sliders to interact with other dimensions?!), and making the UI even easier (e.g., simplify column naming). In the meantime, the code is here on Github, and tips and suggestions are welcome!

Movie recommendations and more via MapReduce and Scalding

Scalding is an in-house MapReduce framework that Twitter recently open-sourced. Like Pig, it provides an abstraction on top of MapReduce that makes it easy to write big data jobs in a syntax that’s simple and concise. Unlike Pig, Scalding is written in pure Scala – which means all the power of Scala and the JVM is already built-in. No more UDFs, folks!

This is going to be an in-your-face introduction to Scalding, Twitter’s (Scala + Cascading) MapReduce framework.

In 140: instead of forcing you to write raw map and reduce functions, Scalding allows you to write natural code like

// Create a histogram of tweet lengths.
tweets.map('tweet -> 'length) { tweet : String => tweet.size }.groupBy('length) { _.size }

Not much different from the Ruby you’d write to compute tweet distributions over small data? Exactly.

Movie Similarities

Imagine you run an online movie business, and you want to generate movie recommendations. You have a rating system (people can rate movies with 1 to 5 stars), and we’ll assume for simplicity that all of the ratings are stored in a TSV file somewhere.

Let’s start by reading the ratings into a Scalding job.

Input MovieSimilarities.scala
/**
 * The input is a TSV file with three columns: (user, movie, rating).
 */
val INPUT_FILENAME = "data/ratings.tsv"

/**
 * Read in the input and give each field a type and name.
 */
val ratings = Tsv(INPUT_FILENAME, ('user, 'movie, 'rating))

/**
 * Let's also keep track of the total number of people who rated each movie.
 */
val numRaters =
  ratings
    // Put the number of people who rated each movie into a field called "numRaters".    
    .groupBy('movie) { _.size }.rename('size -> 'numRaters)

// Merge `ratings` with `numRaters`, by joining on their movie fields.
val ratingsWithSize =
  ratings.joinWithSmaller('movie -> 'movie, numRaters)

// ratingsWithSize now contains the following fields: (user, movie, rating, numRaters).

You want to calculate how similar pairs of movies are, so that if someone watches The Lion King, you can recommend films like Toy Story. So how should you define the similarity between two movies?

One way is to use their correlation:

  • For every pair of movies A and B, find all the people who rated both A and B.
  • Use these ratings to form a Movie A vector and a Movie B vector.
  • Calculate the correlation between these two vectors.
  • Whenever someone watches a movie, you can then recommend the movies most correlated with it.

Let’s start with the first two steps.

Find rating pairs MovieSimilarities.scala
/**
 * To get all pairs of co-rated movies, we'll join `ratings` against itself.
 * So first make a dummy copy of the ratings that we can join against.
 */
val ratings2 =
  ratingsWithSize
    .rename(('user, 'movie, 'rating, 'numRaters) -> ('user2, 'movie2, 'rating2, 'numRaters2))

/**
 * Now find all pairs of co-rated movies (pairs of movies that a user has rated) by
 * joining the duplicate rating streams on their user fields.
 */
val ratingPairs =
  ratingsWithSize
    .joinWithSmaller('user -> 'user2, ratings2)
    // De-dupe so that we don't calculate similarity of both (A, B) and (B, A).
    .filter('movie, 'movie2) { movies : (String, String) => movies._1 < movies._2 }
    .project('movie, 'rating, 'numRaters, 'movie2, 'rating2, 'numRaters2)

// By grouping on ('movie, 'movie2), we can now get all the people who rated any pair of movies.

Before using these rating pairs to calculate correlation, let’s stop for a bit.

Since we’re explicitly thinking of movies as vectors of ratings, it’s natural to compute some very vector-y things like norms and dot products, as well as the length of each vector and the sum over all elements in each vector. So let’s compute these:

Vector calculations MovieSimilarities.scala
/**
 * Compute dot products, norms, sums, and sizes of the rating vectors.
 */
val vectorCalcs =
  ratingPairs
    // Compute (x*y, x^2, y^2), which we need for dot products and norms.
    .map(('rating, 'rating2) -> ('ratingProd, 'ratingSq, 'rating2Sq)) {
      ratings : (Double, Double) =>
      (ratings._1 * ratings._2, math.pow(ratings._1, 2), math.pow(ratings._2, 2))
    }
    .groupBy('movie, 'movie2) { group =>
        group.size // length of each vector
        .sum('ratingProd -> 'dotProduct)
        .sum('rating -> 'ratingSum)
        .sum('rating2 -> 'rating2Sum)
        .sum('ratingSq -> 'ratingNormSq)
        .sum('rating2Sq -> 'rating2NormSq)
        .max('numRaters) // Just an easy way to make sure the numRaters field stays.
        .max('numRaters2)                
        // All of these operations chain together like in a builder object.
    }

To summarize, each row in vectorCalcs now contains the following fields:

  • movie, movie2
  • numRaters, numRaters2: the total number of people who rated each movie
  • size: the number of people who rated both movie and movie2
  • dotProduct: dot product between the movie vector (a vector of ratings) and the movie2 vector (also a vector of ratings)
  • ratingSum, rating2Sum: sum over all elements in each ratings vector
  • ratingNormSq, rating2NormSq: squared norm of each vector

So let’s go back to calculating the correlation between movie and movie2. We could, of course, calculate correlation in the standard way: find the covariance between the movie and movie2 ratings, and divide by their standard deviations.

But recall that we can also write correlation in the following form:

$Corr(X, Y) = \frac{n \sum xy - \sum x \sum y}{\sqrt{n \sum x^2 - (\sum x)^2} \sqrt{n \sum y^2 - (\sum y)^2}}$

(See the Wikipedia page on correlation.)

Notice that every one of the elements in this formula is a field in vectorCalcs! So instead of using the standard calculation, we can use this form instead:

Correlation MovieSimilarities.scala
val correlations =
  vectorCalcs
    .map(('size, 'dotProduct, 'ratingSum, 'rating2Sum, 'ratingNormSq, 'rating2NormSq) -> 'correlation) {
      fields : (Double, Double, Double, Double, Double, Double) =>
      correlation(fields._1, fields._2, fields._3, fields._4, fields._5, fields._6)
    }

def correlation(size : Double, dotProduct : Double, ratingSum : Double,
  rating2Sum : Double, ratingNormSq : Double, rating2NormSq : Double) = {

  val numerator = size * dotProduct - ratingSum * rating2Sum
  val denominator = math.sqrt(size * ratingNormSq - ratingSum * ratingSum) * math.sqrt(size * rating2NormSq - rating2Sum * rating2Sum)

  numerator / denominator
}

And that’s it! To see the full code, check out the Github repository here.

Book Similarities

Let’s run this code over some real data. Unfortunately, I didn’t have a clean source of movie ratings available, so instead I used this dataset of 1 million book ratings.

I ran a quick command, using the handy scald.rb script that Scalding provides…

# Send the job off to a Hadoop cluster
scald.rb MovieSimilarities.scala --input ratings.tsv --output similarities.tsv

…and here’s a sample of the top output I got:

Top Book-Crossing Pairs

As we’d expect, we see that

  • Harry Potter books are similar to other Harry Potter books
  • Lord of the Rings books are similar to other Lord of the Rings books
  • Tom Clancy is similar to John Grisham
  • Chick lit (Summer Sisters, by Judy Blume) is similar to chick lit (Bridget Jones)

Just for fun, let’s also look at books similar to The Great Gatsby:

Great Gatsby

(Schoolboy memories, exactly.)

More Similarity Measures

Of course, there are lots of other similarity measures we could use besides correlation.

Cosine Similarity

Cosine similarity is another common vector-based similarity measure.

Cosine Similarity MovieSimilarities.scala
def cosineSimilarity(dotProduct : Double, ratingNorm : Double, rating2Norm : Double) = {
  dotProduct / (ratingNorm * rating2Norm)
}

Correlation, Take II

We can also add a regularized correlation, by (say) adding N virtual movie pairs that have zero correlation. This helps avoid noise if some movie pairs have very few raters in common (for example, The Great Gatsby had an unlikely raw correlation of 1 with many other books, due simply to the fact that those book pairs had very few ratings).

Regularized Correlation MovieSimilarities.scala
def regularizedCorrelation(size : Double, dotProduct : Double, ratingSum : Double,
  rating2Sum : Double, ratingNormSq : Double, rating2NormSq : Double,
  virtualCount : Double, priorCorrelation : Double) = {

  val unregularizedCorrelation = correlation(size, dotProduct, ratingSum, rating2Sum, ratingNormSq, rating2NormSq)
  val w = size / (size + virtualCount)

  w * unregularizedCorrelation + (1 - w) * priorCorrelation
}

Jaccard Similarity

Recall that one of the lessons of the Netflix prize was that implicit data can be quite useful – the mere fact that you rate a James Bond movie, even if you rate it quite horribly, suggests that you’d probably be interested in similar action films. So we can also ignore the value itself of each rating and use a set-based similarity measure like Jaccard similarity.

Jaccard Similarity MovieSimilarities.scala
def jaccardSimilarity(usersInCommon : Double, totalUsers1 : Double, totalUsers2 : Double) = {
  val union = totalUsers1 + totalUsers2 - usersInCommon
  usersInCommon / union
}

Incorporation

Finally, let’s add all these similarity measures to our output.

Similarity Measures MovieSimilarities.scala
val PRIOR_COUNT = 10
val PRIOR_CORRELATION = 0

val similarities =
  vectorCalcs
    .map(('size, 'dotProduct, 'ratingSum, 'rating2Sum, 'ratingNormSq, 'rating2NormSq, 'numRaters, 'numRaters2) ->
      ('correlation, 'regularizedCorrelation, 'cosineSimilarity, 'jaccardSimilarity)) {

      fields : (Double, Double, Double, Double, Double, Double, Double, Double) =>

      val (size, dotProduct, ratingSum, rating2Sum, ratingNormSq, rating2NormSq, numRaters, numRaters2) = fields

      val corr = correlation(size, dotProduct, ratingSum, rating2Sum, ratingNormSq, rating2NormSq)
      val regCorr = regularizedCorrelation(size, dotProduct, ratingSum, rating2Sum, ratingNormSq, rating2NormSq, PRIOR_COUNT, PRIOR_CORRELATION)
      val cosSim = cosineSimilarity(dotProduct, math.sqrt(ratingNormSq), math.sqrt(rating2NormSq))
      val jaccard = jaccardSimilarity(size, numRaters, numRaters2)

      (corr, regCorr, cosSim, jaccard)
    }

Book Similarities Revisited

Let’s take another look at the book similarities above, now that we have these new fields.

Here are some of the top Book-Crossing pairs, sorted by their shrunk correlation:

Top Book-Crossing Pairs

Notice how regularization affects things: the Dark Tower pair has a pretty high raw correlation, but relatively few ratings (reducing our confidence in the raw correlation), so it ends up below the others.

And here are books similar to The Great Gatsby, this time ordered by cosine similarity:

Great Gatsby

Input Abstraction

So our code right now is tied to our specific ratings.tsv input. But what if we change the way we store our ratings, or what if we want to generate similarities for something entirely different?

Let’s abstract away our input. We’ll create a VectorSimilarities class that represents input data in the following format:

Input abstraction VectorSimilarities.scala
// This is an abstract method that returns a Pipe (aka, a stream of rating tuples).
// It takes in three symbols that name the user, item, and rating fields.
def input(userField : Symbol, itemField : Symbol, ratingField : Symbol) : Pipe

val ratings = input('user, 'item, 'rating)
// ...
// The rest of the code remains essentially the same.

Whenever we want to define a new input format, we simply subclass VectorSimilarities and provide a concrete implementation of the input method.

Book-Crossings

For example, here’s a class I could have used to generate the book recommendations above:

BookCrossing similarities BookCrossing.scala
class BookCrossing(args : Args) extends VectorSimilarities(args) {
  override def input(userField : Symbol, itemField : Symbol, ratingField : Symbol) : Pipe = {
    val bookCrossingRatings =
      Tsv("book-crossing-ratings.tsv")
        .read
        .mapTo((0, 1, 2) -> (userField, itemField, ratingField)) { fields : (String, String, Double) => fields }

    bookCrossingRatings
  }
}

The input method simply reads from a TSV file and lets the VectorSimilarities superclass do all the work. Instant recommendations, BOOM.

Song Similarities with Twitter + iTunes

But why limit ourselves to books? We do, after all, have Twitter at our fingertips…

Since iTunes lets you send a tweet whenever you rate a song, we can use these to generate music recommendations!

Again, we create a new class that overrides the abstract input method defined in VectorSimilarities…

Song similarities with Twitter + iTunes ITunes.scala
class ITunes(args : Args) extends VectorSimilarities(args) {
  // Example tweet:
  // rated New Kids On the Block: Super Hits by New Kids On the Block 5 stars http://itun.es/iSg3Fc #iTunes
  val ITUNES_REGEX = """rated (.+?) by (.+?) (\d) stars .*? #iTunes""".r

  override def input(userField : Symbol, itemField : Symbol, ratingField : Symbol) : Pipe = {
    val itunesRatings =
      // This is a Twitter-internal tweet source, but you could just as easily scrape 
      // Twitter yourself and provide your own source of tweets: https://dev.twitter.com/docs
      TweetSource()
        .mapTo('userId, 'text) { s => (s.getUserId, s.getText) }
        .filter('text) { text : String => text.contains("#iTunes") }
        .flatMap('text -> ('song, 'artist, 'rating)) {
          text : String =>
          ITUNES_REGEX.findFirstMatchIn(text).map { _.subgroups }.map { l => (l(0), l(1), l(2)) }
        }
        .rename(('userId, 'song, 'rating) -> (userField, itemField, ratingField))
        .project(userField, itemField, ratingField)

    itunesRatings
  }
}

…and snap! Here are some songs you might like if you recently listened to Beyoncé:

Jason Mraz

And some recommended songs if you like Lady Gaga:

Lady Gaga

GG Pandora.

Location Similarities with Foursquare Check-ins

But what if we don’t have explicit ratings? For example, we could be a news site that wants to generate article recommendations, and maybe we only have user visits on each story.

Or what if we want to generate restaurant or tourist recommendations, when all we know is who visits each location?

Let’s finally make Foursquare check-ins useful. (I kid, I kid.)

Instead of using an explicit rating given to us, we can simply generate a dummy rating of 1 for each check-in. Correlation doesn’t make sense any more, but we can still pay attention to a measure like Jaccard similarity.

So we simply create a new class that scrapes tweets for Foursquare check-in information…

Location similarities with Foursquare Foursquare.scala
class Foursquare(args : Args) extends VectorSimilarities(args) {
  // Example tweet: I'm at The Ambassador (673 Geary St, btw Leavenworth & Jones, San Francisco) w/ 2 others http://4sq.com/xok3rI
  // Let's limit to New York for simplicity.
  val FOURSQUARE_REGEX = """I'm at (.+?) \(.*? New York""".r

  override def input(userField : Symbol, itemField : Symbol, ratingField : Symbol) : Pipe = {
    val foursquareCheckins =
      TweetSource()
        .mapTo('userId, 'text) { s => (s.getUserId.toLong, s.getText) }
        .flatMap('text -> ('location, 'rating)) {
          text : String =>
          FOURSQUARE_REGEX.findFirstMatchIn(text).map { _.subgroups }.map { l => (l(0), 1) }
        }
        .rename(('userId, 'location, 'rating) -> (userField, itemField, ratingField))
        .unique(userField, itemField, ratingField)

    foursquareCheckins
  }
}

…and bam! Here are locations similar to the Empire State Building:

Empire State Building

Here are places you might want to check out if you check in at Bergdorf Goodman:

Bergdorf Goodman

And here’s where to go after the Statue of Liberty:

Statue of Liberty

Power of Twitter, yo.

RottenTomatoes Similarities

UPDATE: I found some movie data after all…

So let’s use RottenTomatoes tweets to recommend movies! Here’s the code for a class that searches for RottenTomatoes tweets:

Movie similarities with RottenTomatoes RottenTomatoes.scala
class RottenTomatoes(args : Args) extends VectorSimilarities(args) {
  /**
   * Example tweets:
   * My review for 'Hop' on Rotten Tomatoes: 1 star > http://bit.ly/AB7Tl4
   * My review for 'The Bothersome Man (Den Brysomme mannen)' on Rotten Tomatoes: 3 stars-A muddled Playtime in Paris,... http://tmto.es/AvPoO2
   */
  val ROTTENTOMATOES_REGEX = """My review for '(.+?)' on Rotten Tomatoes: (\d) star""".r

  override val MIN_NUM_RATERS = 2
  override val MAX_NUM_RATERS = 1000
  override val MIN_INTERSECTION = 2

  override def input(userField : Symbol, itemField : Symbol, ratingField : Symbol) : Pipe = {
    val rottenTomatoesRatings =
      TweetSource()
        .mapTo('userId, 'text) { s => (s.getUserId.toLong, s.getText) }
        .flatMap('text -> ('movie, 'rating)) {
          text : String =>
          ROTTENTOMATOES_REGEX.findFirstMatchIn(text).map { _.subgroups }.map { x => (x(0), x(1).toInt) }
        }
        .rename(('userId, 'movie, 'rating) -> (userField, itemField, ratingField))
        .unique(userField, itemField, ratingField)

    rottenTomatoesRatings
  }
}

And here are the most similar movies discovered:

Top RottenTomatoes Movies

We see that

  • Lord of the Rings, Harry Potter, and Star Wars movies are similar to other Lord of the Rings, Harry Potter, and Star Wars movies
  • Big science fiction blockbusters (Avatar) are similar to big science fiction blockbusters (Inception)
  • People who like one Justin Timberlake movie (Bad Teacher) also like other Justin Timberlake movies (In Time). Similarly with Michael Fassbender (A Dangerous Method, Shame).
  • Art house movies (The Tree of Life) stick together (Tinker Tailor Soldier Spy)

Let’s also look at the movies with the most negative correlation:

Negative RottenTomatoes Movies

(The more you like loud and dirty popcorn movies (Thor) and vamp romance (Twilight), the less you like arthouse? SGTM.)

Next Steps

Hopefully I gave you a taste of the awesomeness of Scalding. To learn even more, watch out for more documentation soon, and you should most definitely follow @Scalding on Twitter for updates or to ask any questions.

Mad Props

And finally, a huge shoutout to Argyris Zymnis, Avi Bryant, and Oscar Boykin, the mastermind hackers who have spent (and continue spending!) unimaginable hours making Scalding a joy to use.

@argyris, @avibryant, @posco: Thanks for it all. #awesomejobguys #loveit

Quick Introduction to ggplot2

This is a bare-bones introduction to ggplot2, a visualization package in R. It assumes no knowledge of R.

For a better-looking version of this post, see this Github repository, which also contains some of the example datasets I use and a literate programming version of this tutorial.

Preview

Let’s start with a preview of what ggplot2 can do.

Given Fisher’s iris data set and one simple command…

qplot(Sepal.Length, Petal.Length, data = iris, color = Species)

…we can produce this plot of sepal length vs. petal length, colored by species.

Sepal vs. Petal, Colored by Species

Installation

You can download R here. After installation, you can launch R in interactive mode by either typing R on the command line or opening the standard GUI (which should have been included in the download).

R Basics

Vectors

Vectors are a core data structure in R, and are created with c(). Elements in a vector must be of the same type.

numbers = c(23, 13, 5, 7, 31)
names = c("edwin", "alice", "bob")

Elements are indexed starting at 1, and are accessed with [] notation.

numbers[1] # 23
names[1] # edwin

Data frames

Data frames are like matrices, but with named columns of different types (similar to database tables).

books = data.frame(
  title = c("harry potter", "war and peace", "lord of the rings"), # column named "title"
  author = c("rowling", "tolstoy", "tolkien"),
  num_pages = c(350, 875, 500)
)

You can access columns of a data frame with $.

books$title # c("harry potter", "war and peace", "lord of the rings")
books$author[1] # "rowling"

You can also create new columns with $.

books$num_bought_today = c(10, 5, 8)
books$num_bought_yesterday = c(18, 13, 20)
  
books$total_num_bought = books$num_bought_today + books$num_bought_yesterday

read.table

Suppose you want to import a TSV file into R as a data frame.

tsv file without header

For example, consider the data/students.tsv file (with columns describing each student’s age, test score, and name).

13   100 alice
14   95  bob
13   82  eve

We can import this file into R using read.table().

students = read.table("data/students.tsv",
  header = F, # file does not contain a header (`F` is short for `FALSE`), so we must manually specify column names                    
  sep = "\t", # file is tab-delimited        
  col.names = c("age", "score", "name") # column names
)

We can now access the different columns in the data frame with students$age, students$score, and students$name.

csv file with header

For an example of a file in a different format, look at the data/studentsWithHeader.tsv file.

age,score,name
13,100,alice
14,95,bob
13,82,eve

Here we have the same data, but now the file is comma-delimited and contains a header. We can import this file with

students = read.table("data/studentsWithHeader.tsv",
  sep = ",",
  header = T  # first line contains column names, so we can immediately call `students$age`        
)

(Note: there is also a read.csv function that uses sep = "," by default.)

help

There are many more options that read.table can take. For a list of these, just type help(read.table) (or ?read.table) at the prompt to access documentation.

# These work for other functions as well.
help(read.table)
?read.table

ggplot2

With these R basics in place, let’s dive into the ggplot2 package.

Installation

One of R’s greatest strengths is its excellent set of packages. To install a package, you can use the install.packages() function.

install.packages("ggplot2")

To load a package into your current R session, use library().

library(ggplot2)

Scatterplots with qplot()

Let’s look at how to create a scatterplot in ggplot2. We’ll use the iris data frame that’s automatically loaded into R.

What does the data frame contain? We can use the head function to look at the first few rows.

head(iris) # by default, head displays the first 6 rows. see `?head`
head(iris, n = 10) # we can also explicitly set the number of rows to display

Sepal.Length Sepal.Width Petal.Length Petal.Width Species
         5.1         3.5          1.4         0.2  setosa
         4.9         3.0          1.4         0.2  setosa
         4.7         3.2          1.3         0.2  setosa
         4.6         3.1          1.5         0.2  setosa
         5.0         3.6          1.4         0.2  setosa
         5.4         3.9          1.7         0.4  setosa

(The data frame actually contains three types of species: setosa, versicolor, and virginica.)

Let’s plot Sepal.Length against Petal.Length using ggplot2’s qplot() function.

qplot(Sepal.Length, Petal.Length, data = iris)
# Plot Sepal.Length vs. Petal.Length, using data from the `iris` data frame.
# * First argument `Sepal.Length` goes on the x-axis.
# * Second argument `Petal.Length` goes on the y-axis.
# * `data = iris` means to look for this data in the `iris` data frame.    

Sepal Length vs. Petal Length

To see where each species is located in this graph, we can color each point by adding a color = Species argument.

qplot(Sepal.Length, Petal.Length, data = iris, color = Species) # dude!

Sepal vs. Petal, Colored by Species

Similarly, we can let the size of each point denote petal width, by adding a size = Petal.Width argument.

qplot(Sepal.Length, Petal.Length, data = iris, color = Species, size = Petal.Width)
# We see that Iris setosa flowers have the narrowest petals.

Sepal vs. Petal, Sized by Petal Width

qplot(Sepal.Length, Petal.Length, data = iris, color = Species, size = Petal.Width, alpha = I(0.7))
# By setting the alpha of each point to 0.7, we reduce the effects of overplotting.

Sepal vs. Petal, with Transparency

Finally, let’s fix the axis labels and add a title to the plot.

qplot(Sepal.Length, Petal.Length, data = iris, color = Species,
  xlab = "Sepal Length", ylab = "Petal Length",
  main = "Sepal vs. Petal Length in Fisher's Iris data")

Sepal vs. Petal, Titled

Other common geoms

In the scatterplot examples above, we implicitly used a point geom, the default when you supply two arguments to qplot().

# These two invocations are equivalent.
qplot(Sepal.Length, Petal.Length, data = iris, geom = "point")
qplot(Sepal.Length, Petal.Length, data = iris)

But we can also easily use other types of geoms to create more kinds of plots.

Barcharts: geom = “bar”

movies = data.frame(
  director = c("spielberg", "spielberg", "spielberg", "jackson", "jackson"),
  movie = c("jaws", "avatar", "schindler's list", "lotr", "king kong"),
  minutes = c(124, 163, 195, 600, 187)
)

# Plot the number of movies each director has.
qplot(director, data = movies, geom = "bar", ylab = "# movies")
# By default, the height of each bar is simply a count.

# Movies

# But we can also supply a different weight.
# Here the height of each bar is the total running time of the director's movies.
qplot(director, weight = minutes, data = movies, geom = "bar", ylab = "total length (min.)")

Total Running Time

Line charts: geom = “line”

qplot(Sepal.Length, Petal.Length, data = iris, geom = "line", color = Species)
# Using a line geom doesn't really make sense here, but hey.

Sepal vs. Petal, Lined

# `Orange` is another built-in data frame that describes the growth of orange trees.
qplot(age, circumference, data = Orange, geom = "line",
  colour = Tree,
  main = "How does orange tree circumference vary with age?")

Orange Tree Growth

# We can also plot both points and lines.
qplot(age, circumference, data = Orange, geom = c("point", "line"), colour = Tree)

Orange Tree with Points

And that’s it for what I’ll cover.

Next Steps

I skipped over a lot of aspects of R and ggplot2 in this intro.

For example,

  • There are many geoms (and other functionalities) in ggplot2 that I didn’t cover, e.g., boxplots and histograms.
  • I didn’t talk about ggplot2’s layering system, or the grammar of graphics it’s based on.

So I’ll end with some additional resources on R and ggplot2.

Introduction to Conditional Random Fields

Imagine you have a sequence of snapshots from a day in Justin Bieber’s life, and you want to label each image with the activity it represents (eating, sleeping, driving, etc.). How can you do this?

One way is to ignore the sequential nature of the snapshots, and build a per-image classifier. For example, given a month’s worth of labeled snapshots, you might learn that dark images taken at 6am tend to be about sleeping, images with lots of bright colors tend to be about dancing, images of cars are about driving, and so on.

By ignoring this sequential aspect, however, you lose a lot of information. For example, what happens if you see a close-up picture of a mouth – is it about singing or eating? If you know that the previous image is a picture of Justin Bieber eating or cooking, then it’s more likely this picture is about eating; if, however, the previous image contains Justin Bieber singing or dancing, then this one probably shows him singing as well.

Thus, to increase the accuracy of our labeler, we should incorporate the labels of nearby photos, and this is precisely what a conditional random field does.

Part-of-Speech Tagging

Let’s go into some more detail, using the more common example of part-of-speech tagging.

In POS tagging, the goal is to label a sentence (a sequence of words or tokens) with tags like ADJECTIVE, NOUN, PREPOSITION, VERB, ADVERB, ARTICLE.

For example, given the sentence “Bob drank coffee at Starbucks”, the labeling might be “Bob (NOUN) drank (VERB) coffee (NOUN) at (PREPOSITION) Starbucks (NOUN)”.

So let’s build a conditional random field to label sentences with their parts of speech. Just like any classifier, we’ll first need to decide on a set of feature functions $f_i$.

Feature Functions in a CRF

In a CRF, each feature function is a function that takes in as input:

  • a sentence s
  • the position i of a word in the sentence
  • the label $l_i$ of the current word
  • the label $l_{i-1}$ of the previous word

and outputs a real-valued number (though the numbers are often just either 0 or 1).

(Note: by restricting our features to depend on only the current and previous labels, rather than arbitrary labels throughout the sentence, I’m actually building the special case of a linear-chain CRF. For simplicity, I’m going to ignore general CRFs in this post.)

For example, one possible feature function could measure how much we suspect that the current word should be labeled as an adjective given that the previous word is “very”.

Features to Probabilities

Next, assign each feature function $f_j$ a weight $\lambda_j$ (I’ll talk below about how to learn these weights from the data). Given a sentence s, we can now score a labeling l of s by adding up the weighted features over all words in the sentence:

$score(l | s) = \sum_{j = 1}^m \sum_{i = 1}^n \lambda_j f_j(s, i, l_i, l_{i-1})$

(The first sum runs over each feature function $j$, and the inner sum runs over each position $i$ of the sentence.)

Finally, we can transform these scores into probabilities $p(l | s)$ between 0 and 1 by exponentiating and normalizing:

$p(l | s) = \frac{exp[score(l|s)]}{\sum_{l'} exp[score(l'|s)]} = \frac{exp[\sum_{j = 1}^m \sum_{i = 1}^n \lambda_j f_j(s, i, l_i, l_{i-1})]}{\sum_{l'} exp[\sum_{j = 1}^m \sum_{i = 1}^n \lambda_j f_j(s, i, l'_i, l'_{i-1})]}$
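
To make the last two equations concrete, here’s a minimal Python sketch (the tag set, feature functions, and weights are toy values invented for illustration) that scores a labeling and then normalizes by brute force over every possible labeling:

import itertools
import math

# Toy tag set and feature functions, invented for illustration.
TAGS = ["NOUN", "VERB", "ADVERB"]

def f1(sentence, i, label, prev_label):
    # 1 if the current word ends in "-ly" and is labeled ADVERB.
    return 1 if sentence[i].endswith("ly") and label == "ADVERB" else 0

def f2(sentence, i, label, prev_label):
    # 1 if a NOUN is followed by a VERB.
    return 1 if prev_label == "NOUN" and label == "VERB" else 0

FEATURES = [f1, f2]
WEIGHTS = [1.5, 0.8]  # the lambda_j's

def score(labeling, sentence):
    # sum over features j and positions i of lambda_j * f_j(s, i, l_i, l_{i-1})
    return sum(w * f(sentence, i, labeling[i], labeling[i - 1] if i > 0 else "START")
               for w, f in zip(WEIGHTS, FEATURES)
               for i in range(len(sentence)))

def probability(labeling, sentence):
    # Exponentiate and normalize over all k^m labelings (exponential; toy use only).
    z = sum(math.exp(score(l, sentence))
            for l in itertools.product(TAGS, repeat=len(sentence)))
    return math.exp(score(labeling, sentence)) / z

print(probability(("NOUN", "VERB", "ADVERB"), ["Bob", "ran", "quickly"]))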

Example Feature Functions

So what do these feature functions look like? Examples of POS tagging features could include:

  • $f_1(s, i, l_i, l_{i-1}) = 1$ if $l_i =$ ADVERB and the ith word ends in “-ly”; 0 otherwise.
    • If the weight $\lambda_1$ associated with this feature is large and positive, then this feature is essentially saying that we prefer labelings where words ending in -ly get labeled as ADVERB.
  • $f_2(s, i, l_i, l_{i-1}) = 1$ if $i = 1$, $l_i =$ VERB, and the sentence ends in a question mark; 0 otherwise.
    • Again, if the weight $\lambda_2$ associated with this feature is large and positive, then labelings that assign VERB to the first word in a question (e.g., “Is this a sentence beginning with a verb?”) are preferred.
  • $f_3(s, i, l_i, l_{i-1}) = 1$ if $l_{i-1} =$ ADJECTIVE and $l_i =$ NOUN; 0 otherwise.
    • Again, a positive weight for this feature means that adjectives tend to be followed by nouns.
  • $f_4(s, i, l_i, l_{i-1}) = 1$ if $l_{i-1} =$ PREPOSITION and $l_i =$ PREPOSITION; 0 otherwise.
    • A negative weight $\lambda_4$ for this function would mean that prepositions don’t tend to follow prepositions, so we should avoid labelings where this happens.

And that’s it! To sum up: to build a conditional random field, you just define a bunch of feature functions (which can depend on the entire sentence, a current position, and nearby labels), assign them weights, and add them all together, transforming at the end to a probability if necessary.

Now let’s step back and compare CRFs to some other common machine learning techniques.

Smells like Logistic Regression…

The form of the CRF probabilities $p(l | s) = \frac{exp[\sum_{j = 1}^m \sum_{i = 1}^n \lambda_j f_j(s, i, l_i, l_{i-1})]}{\sum_{l'} exp[\sum_{j = 1}^m \sum_{i = 1}^n \lambda_j f_j(s, i, l'_i, l'_{i-1})]}$ might look familiar.

That’s because CRFs are indeed basically the sequential version of logistic regression: whereas logistic regression is a log-linear model for classification, CRFs are a log-linear model for sequential labels.

Looks like HMMs…

Recall that Hidden Markov Models are another model for part-of-speech tagging (and sequential labeling in general). Whereas CRFs throw any bunch of functions together to get a label score, HMMs take a generative approach to labeling, defining

$p(l,s) = p(l_1) \prod_i p(l_i | l_{i-1}) p(w_i | l_i)$

where

  • $p(l_i | l_{i-1})$ are transition probabilities (e.g., the probability that a preposition is followed by a noun);
  • $p(w_i | l_i)$ are emission probabilities (e.g., the probability that a noun emits the word “dad”).

So how do HMMs compare to CRFs? CRFs are more powerful – they can model everything HMMs can and more. One way of seeing this is as follows.

Note that the log of the HMM probability is $\log p(l,s) = \log p(l_1) + \sum_i \log p(l_i | l_{i-1}) + \sum_i \log p(w_i | l_i)$. This has exactly the log-linear form of a CRF if we consider these log-probabilities to be the weights associated to binary transition and emission indicator features.

That is, we can build a CRF equivalent to any HMM by…

  • For each HMM transition probability $ p(l_i = y | l_{i-1} = x) $, define a set of CRF transition features of the form $f_{x,y}(s, i, l_i, l_{i-1}) = 1$ if $l_i = y$ and $l_{i-1} = x$. Give each feature a weight of $w_{x,y} = \log p(l_i = y | l_{i-1} = x)$.
  • Similarly, for each HMM emission probability $p(w_i = z | l_i = x)$, define a set of CRF emission features of the form $g_{x,z}(s, i, l_i, l_{i-1}) = 1$ if $w_i = z$ and $l_i = x$. Give each feature a weight of $w_{x,z} = \log p(w_i = z | l_i = x)$.

Thus, the score $p(l|s)$ computed by a CRF using these feature functions is precisely proportional to the score computed by the associated HMM, and so every HMM is equivalent to some CRF.
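
As a sanity check, here’s what that construction looks like as a short Python sketch (the transition and emission probabilities are made-up toy numbers):

import math

# Toy HMM parameters (invented numbers).
transition = {("NOUN", "VERB"): 0.4, ("VERB", "NOUN"): 0.3}  # p(l_i = y | l_{i-1} = x)
emission = {("NOUN", "dad"): 0.01, ("VERB", "ran"): 0.02}    # p(w_i = z | l_i = x)

features, weights = [], []

# One indicator feature per transition, weighted by its log-probability.
for (x, y), p in transition.items():
    features.append(lambda s, i, l, prev, x=x, y=y: 1 if prev == x and l == y else 0)
    weights.append(math.log(p))

# One indicator feature per emission, likewise.
for (x, z), p in emission.items():
    features.append(lambda s, i, l, prev, x=x, z=z: 1 if s[i] == z and l == x else 0)
    weights.append(math.log(p))

# A CRF with these features and weights scores every labeling in proportion
# to the log-probability the original HMM would assign it.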

However, CRFs can model a much richer set of label distributions as well, for two main reasons:

  • CRFs can define a much larger set of features. Whereas HMMs are necessarily local in nature (because they’re constrained to binary transition and emission feature functions, which force each word to depend only on the current label and each label to depend only on the previous label), CRFs can use more global features. For example, one of the features in our POS tagger above increased the probability of labelings that tagged the first word of a sentence as a VERB if the end of the sentence contained a question mark.
  • CRFs can have arbitrary weights. Whereas the probabilities of an HMM must satisfy certain constraints (e.g., $0 \leq p(w_i | l_i) \leq 1$, $\sum_w p(w_i = w | l_i) = 1$), the weights of a CRF are unrestricted (e.g., $\log p(w_i | l_i)$ can be anything it wants).

Learning Weights

Let’s go back to the question of how to learn the feature weights in a CRF. One way is (surprise) to use gradient ascent.

Assume we have a bunch of training examples (sentences and associated part-of-speech labels). Randomly initialize the weights of our CRF model. To shift these randomly initialized weights to the correct ones, for each training example…

  • Go through each feature function $f_i$, and calculate the gradient of the log probability of the training example with respect to $\lambda_i$: $\frac{\partial}{\partial \lambda_i} \log p(l | s) = \sum_{j = 1}^n f_i(s, j, l_j, l_{j-1}) - \sum_{l'} p(l' | s) \sum_{j = 1}^n f_i(s, j, l'_j, l'_{j-1})$
  • Note that the first term in the gradient is the contribution of feature $f_i$ under the true label, and the second term in the gradient is the expected contribution of feature $f_i$ under the current model. This is exactly the form you’d expect gradient ascent to take.
  • Move $\lambda_i$ in the direction of the gradient: $\lambda_i = \lambda_i + \alpha [\sum_{j = 1}^n f_i(s, j, l_j, l_{j-1}) - \sum_{l'} p(l' | s) \sum_{j = 1}^n f_i(s, j, l'_j, l'_{j-1})]$ where $\alpha$ is some learning rate.
  • Repeat the previous steps until some stopping condition is reached (e.g., the updates fall below some threshold).

In other words, every step takes the difference between what we want the model to learn and the model’s current state, and moves $\lambda_i$ in the direction of this difference.
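
Here’s a brute-force Python sketch of one such update, for a single training example and a single toy feature (real implementations compute the expectation with dynamic programming rather than by enumerating all labelings):

import itertools
import math

TAGS = ["NOUN", "VERB"]

def f0(s, j, label, prev):
    # Toy feature: a NOUN followed by a VERB.
    return 1 if prev == "NOUN" and label == "VERB" else 0

features, weights, alpha = [f0], [0.0], 0.1  # zero-initialized weight, learning rate

def feature_total(f, labeling, sentence):
    # sum over positions j of f(s, j, l_j, l_{j-1})
    return sum(f(sentence, j, labeling[j], labeling[j - 1] if j > 0 else "START")
               for j in range(len(sentence)))

def score(labeling, sentence):
    return sum(w * feature_total(f, labeling, sentence) for w, f in zip(weights, features))

def gradient_step(sentence, true_labeling):
    labelings = list(itertools.product(TAGS, repeat=len(sentence)))
    z = sum(math.exp(score(l, sentence)) for l in labelings)
    for i, f in enumerate(features):
        observed = feature_total(f, true_labeling, sentence)           # true contribution
        expected = sum(math.exp(score(l, sentence)) / z * feature_total(f, l, sentence)
                       for l in labelings)                             # model's expectation
        weights[i] += alpha * (observed - expected)

gradient_step(["Bob", "ran"], ("NOUN", "VERB"))
print(weights)  # the weight moves toward favoring NOUN -> VERB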

Finding the Optimal Labeling

Suppose we’ve trained our CRF model, and now a new sentence comes in. How do we label it?

The naive way is to calculate $p(l | s)$ for every possible labeling l, and then choose the labeling that maximizes this probability. However, since there are $k^m$ possible labelings for a tag set of size k and a sentence of length m, this approach would have to check an exponential number of labelings.

A better way is to realize that (linear-chain) CRFs satisfy an optimal substructure property that allows us to use a (polynomial-time) dynamic programming algorithm to find the optimal labeling, similar to the Viterbi algorithm for HMMs.
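
Here’s a sketch of that dynamic program in Python. It assumes a local scorer score_fn(sentence, i, label, prev_label) that returns $\sum_j \lambda_j f_j(s, i, l_i, l_{i-1})$ at position i (the names here are mine, not from any library):

def viterbi(sentence, tags, score_fn):
    n = len(sentence)
    # best[i][t] = (score of the best labeling of words 0..i ending in tag t, backpointer)
    best = [{t: (score_fn(sentence, 0, t, "START"), None) for t in tags}]
    for i in range(1, n):
        layer = {}
        for t in tags:
            prev, s = max(((p, best[i - 1][p][0] + score_fn(sentence, i, t, p)) for p in tags),
                          key=lambda pair: pair[1])
            layer[t] = (s, prev)
        best.append(layer)
    # Walk the backpointers from the best final tag.
    tag = max(tags, key=lambda t: best[-1][t][0])
    path = [tag]
    for i in range(n - 1, 0, -1):
        tag = best[i][tag][1]
        path.append(tag)
    return list(reversed(path))

Each position only needs the best score for each possible current tag, so decoding takes $O(m k^2)$ time instead of the $O(k^m)$ brute force.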

A More Interesting Application

Okay, so part-of-speech tagging is kind of boring, and there are plenty of existing POS taggers out there. When might you use a CRF in real life?

Suppose you want to mine Twitter for the types of presents people received for Christmas:

(Yes, I just embedded a tweet. BOOM.)

How can you figure out which words refer to gifts?

To gather data for the graphs above, I simply looked for phrases of the form “I want XXX for Christmas” and “I got XXX for Christmas”. However, a more sophisticated CRF variant could use a GIFT part-of-speech-like tag (even adding other tags like GIFT-GIVER and GIFT-RECEIVER, to get even more information on who got what from whom) and treat this like a POS tagging problem. Features could be based around things like “this word is a GIFT if the previous word was a GIFT-RECEIVER and the word before that was ‘gave’” or “this word is a GIFT if the next two words are ‘for Christmas’”.

Fin

I’ll end with some more random thoughts:

  • I explicitly skipped over the graphical models framework that conditional random fields sit in, because I don’t think they add much to an initial understanding of CRFs. But if you’re interested in learning more, Daphne Koller is teaching a free, online course on graphical models starting in January.
  • Or, if you’re more interested in the many NLP applications of CRFs (like part-of-speech tagging or named entity extraction), Manning and Jurafsky are teaching an NLP class in the same spirit.
  • I also glossed a bit over the analogy between CRFs:HMMs and Logistic Regression:Naive Bayes. This image (from Sutton and McCallum’s introduction to conditional random fields) sums it up, and shows the graphical model nature of CRFs as well:

CRF Diagram

Winning the Netflix Prize: A Summary

How was the Netflix Prize won? I went through a lot of the Netflix Prize papers a couple years ago, so I’ll try to give an overview of the techniques that went into the winning solution here.

Normalization of Global Effects

Suppose Alice rates Inception 4 stars. We can think of this rating as composed of several parts:

  • A baseline rating (e.g., maybe the mean over all user-movie ratings is 3.1 stars).
  • An Alice-specific effect (e.g., maybe Alice tends to rate movies lower than the average user, so her ratings are -0.5 stars lower than we normally expect).
  • An Inception-specific effect (e.g., Inception is a pretty awesome movie, so its ratings are 0.7 stars higher than we normally expect).
  • A less predictable effect based on the specific interaction between Alice and Inception that accounts for the remainder of the stars (e.g., Alice really liked Inception because of its particular combination of Leonardo DiCaprio and neuroscience, so this rating gets an additional 0.7 stars).

In other words, we’ve decomposed the 4-star rating into: 4 = [3.1 (the baseline rating) - 0.5 (the Alice effect) + 0.7 (the Inception effect)] + 0.7 (the specific interaction)

So instead of having our models predict the 4-star rating itself, we could first try to remove the effect of the baseline predictors (the first three components) and have them predict the specific 0.7 stars. (I guess you can also think of this as a simple kind of boosting.)
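
In code, the idea is as simple as it sounds. Here’s a sketch using the toy numbers from the example above (the bias values are the made-up ones from the text, not learned):

# Baseline predictors with the toy numbers from the example above.
global_mean = 3.1                 # mean over all user-movie ratings
user_bias = {"alice": -0.5}       # Alice rates 0.5 stars below average
movie_bias = {"inception": 0.7}   # Inception is rated 0.7 stars above average

def baseline(user, movie):
    return global_mean + user_bias.get(user, 0.0) + movie_bias.get(movie, 0.0)

# Downstream models are trained on the residual, not the raw rating.
rating = 4.0
residual = rating - baseline("alice", "inception")  # 0.7, the interaction term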

More generally, additional baseline predictors include:

  • A factor that allows Alice’s rating to (linearly) depend on the (square root of the) number of days since her first rating. (For example, have you ever noticed that you become a harsher critic over time?)
  • A factor that allows Alice’s rating to depend on the number of days since the movie’s first rating by anyone. (If you’re one of the first people to watch it, maybe it’s because you’re a huge fan and really excited to see it on DVD, so you’ll tend to rate it higher.)
  • A factor that allows Alice’s rating to depend on the number of people who have rated Inception. (Maybe Alice is a hipster who hates being part of the crowd.)
  • A factor that allows Alice’s rating to depend on the movie’s overall rating.
  • (Plus a bunch of others.)

And, in fact, modeling these biases turned out to be fairly important: in their paper describing their final solution to the Netflix Prize, Bell and Koren write that

Of the numerous new algorithmic contributions, I would like to highlight one – those humble baseline predictors (or biases), which capture main effects in the data. While the literature mostly concentrates on the more sophisticated algorithmic aspects, we have learned that an accurate treatment of main effects is probably at least as significant as coming up with modeling breakthroughs.

(For a perhaps more concrete example of why removing these biases is useful, suppose you know that Bob likes the same kinds of movies that Alice does. To predict Bob’s rating of Inception, instead of simply predicting the same 4 stars that Alice rated, if we know that Bob tends to rate movies 0.3 stars higher than average, then we could first remove Alice’s bias and then add in Bob’s: 4 + 0.5 + 0.3 = 4.8.)

Neighborhood Models

Let’s now look at some slightly more sophisticated models. As alluded to in the section above, one of the standard approaches to collaborative filtering is to use neighborhood models.

Briefly, a neighborhood model works as follows. To predict Alice’s rating of Titanic, you could do two things:

  • Item-item approach: find a set of items similar to Titanic that Alice has also rated, and take the (weighted) mean of Alice’s ratings on them.
  • User-user approach: find a set of users similar to Alice who rated Titanic, and again take the mean of their ratings of Titanic.

(See also my post on item-to-item collaborative filtering on Amazon.)

The main questions, then, are (let’s stick to the item-item approach for simplicity):

  • How do we find the set of similar items?
  • How do we weight these items when taking their mean?

The standard approach is to take some similarity metric (e.g., correlation or a Jaccard index) to define similarities between pairs of movies, take the K most similar movies under this metric (where K is perhaps chosen via cross-validation), and then use the same similarity metric when computing the weighted mean.
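
Here’s a sketch of that standard approach in Python (the data structures and names are mine; a real system would precompute the pairwise similarities over the full ratings matrix, and assumes nonnegative similarities):

# Predict a user's rating of `target` from the K most similar items they rated.
def predict(user_ratings, similarities, target, k=20):
    # user_ratings: {item: rating} for the items this user has rated
    # similarities: {(item_a, item_b): similarity}, e.g. correlations
    neighbors = sorted(
        ((similarities.get((target, item), 0.0), rating)
         for item, rating in user_ratings.items() if item != target),
        reverse=True)[:k]
    total_sim = sum(sim for sim, _ in neighbors)
    if total_sim == 0:
        return None  # no informative neighbors; fall back to the baseline
    return sum(sim * rating for sim, rating in neighbors) / total_sim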

This has a couple problems:

  • Neighbors aren’t independent, so using a standard similarity metric to define a weighted mean overcounts information. For example, suppose you ask five friends where you should eat tonight. Three of them went to Mexico last week and are sick of burritos, so they strongly recommend against a taqueria. Thus, your friends’ recommendations have a stronger bias than what you’d get if you asked five friends who didn’t know each other at all. (Compare with the situation where all three Lord of the Rings Movies are neighbors of Harry Potter.)
  • Different movies should perhaps be using different numbers of neighbors. Some movies may be predicted well by only one neighbor (e.g., Harry Potter 2 could be predicted well by Harry Potter 1 alone), some movies may require more, and some movies may have no good neighbors (so you should ignore your neighborhood algorithms entirely and let your other ratings models stand on their own).

So another approach is the following:

  • You can still use a similarity metric like correlation or cosine similarity to choose the set of similar items.
  • But instead of using the similarity metric to define the interpolation weights in the mean calculations, you essentially perform a (sparse) linear regression to find the weights that minimize the squared error between an item’s rating and a linear combination of the ratings of its neighbors. Note that these weights are no longer constrained, so that if all neighbors are weak, then their weights will be close to zero and the neighborhood model will have a low effect.
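
Here’s a sketch of that least-squares weighting on toy data, using numpy’s solver (names and numbers are mine; in practice you’d add the shrinkage discussed below):

import numpy as np

# Solve for w minimizing ||R w - r||^2, where column j of R holds historical
# ratings of neighbor j, and r holds the same users' ratings of the target item.
R = np.array([[5.0, 4.0], [3.0, 2.0], [4.0, 5.0]])  # toy: 3 users x 2 neighbors
r = np.array([4.5, 2.5, 4.5])
w, *_ = np.linalg.lstsq(R, r, rcond=None)

# Weak neighbors get weights near zero, so the neighborhood term fades out.
prediction = np.array([4.0, 3.0]) @ w  # a new user's ratings of the two neighbors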

(A slightly more complicated user-user approach, similar to this item-item neighborhood approach, is also useful.)

Implicit Data

Adding on to the neighborhood approach, we can also let implicit data influence our predictions. The mere fact that a user rated lots of science fiction movies but no westerns suggests that the user likes science fiction better than cowboys. So using a similar framework as in the neighborhood ratings model, we can learn for Inception a set of offset weights associated to Inception’s movie neighbors.

Whenever we want to predict how Bob rates Inception, we look at whether Bob rated each of Inception’s neighbors. If he did, we add in the corresponding offset; if not, then we add nothing (and, thus, Bob’s rating is implicitly penalized by the missing weight).

Matrix Factorization

Complementing the neighborhood approach to collaborative filtering is the matrix factorization approach. Whereas the neighborhood approach takes a very local approach to ratings (if you liked Harry Potter 1, then you’ll like Harry Potter 2!), the factorization approach takes a more global view (we know that you like fantasy movies and that Harry Potter has a strong fantasy element, so we think that you’ll like Harry Potter) that decomposes users and movies into a set of latent factors (which we can think of as categories like “fantasy” or “violence”).

In fact, matrix factorization methods were probably the most important class of techniques for winning the Netflix Prize. In their 2008 Progress Prize paper, Bell and Koren write

It seems that models based on matrix-factorization were found to be most accurate (and thus popular), as evident by recent publications and discussions on the Netflix Prize forum. We definitely agree to that, and would like to add that those matrix-factorization models also offer the important flexibility needed for modeling temporal effects and the binary view. Nonetheless, neighborhood models, which have been dominating most of the collaborative filtering literature, are still expected to be popular due to their practical characteristics - being able to handle new users/ratings without re-training and offering direct explanations to the recommendations.

The typical way to perform matrix factorization is to run a singular value decomposition on the (sparse) ratings matrix (using stochastic gradient descent and regularizing the weights of the factors, possibly constraining the weights to be positive to get a type of non-negative matrix factorization). (Note that this “SVD” is a little different from the standard SVD learned in linear algebra, since not every user has rated every movie, so the ratings matrix contains many missing elements that we don’t want to simply treat as 0.)
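
Here’s a minimal sketch of that SGD factorization on toy data (the ratings, hyperparameters, and sizes are made up; a real system would also fold in the baseline biases from earlier):

import numpy as np

# (user, movie, stars) triples; only observed ratings are touched.
ratings = [(0, 0, 4.0), (0, 1, 1.0), (1, 0, 5.0), (2, 1, 2.0)]
n_users, n_movies, n_factors = 3, 2, 2
lr, reg = 0.01, 0.05  # learning rate and L2 regularization strength

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, n_factors))   # user factor vectors
Q = rng.normal(scale=0.1, size=(n_movies, n_factors))  # movie factor vectors

for epoch in range(200):
    for u, m, r in ratings:
        pu = P[u].copy()
        err = r - pu @ Q[m]                   # prediction error on this rating
        P[u] += lr * (err * Q[m] - reg * pu)
        Q[m] += lr * (err * pu - reg * Q[m])

print(P[0] @ Q[0])  # predicted rating: dot product of user and movie factors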

Some SVD-inspired methods used in the Netflix Prize include:

  • Standard SVD: Once you’ve represented users and movies as factor vectors, you can dot product Alice’s vector with Inception’s vector to get Alice’s predicted rating of Inception.
  • Asymmetric SVD: Instead of users having their own notion of factor vectors, we can represent users as a bag of items they have rated (or provided implicit feedback for). So Alice is now represented as a (possibly weighted) sum of the factor vectors of the items she has rated, and to get her predicted rating of Titanic, we can dot product this representation with the factor vector of Titanic. From a practical perspective, this model has an added benefit in that no user parameterizations are needed, so we can use this approach to generate recommendations as soon as a user provides some feedback (which could just be views or clicks on an item, and not necessarily ratings), without needing to retrain the model to factorize the user.
  • SVD++: Incorporate both the standard SVD and the asymmetric SVD model by representing users both by their own factor representation and as a bag of item vectors.

Regression

Some regression models were also used in the predictions. The models are fairly standard, I think, so I won’t spend too long here. Basically, just as with the neighborhood models, we can take a user-centric approach and a movie-centric approach to regression:

  • User-centric approach: We learn a regression model for each user, using all the movies that the user rated as the dataset. The response is the movie’s rating, and the predictor variables are attributes associated to that movie (which can be derived from, say, PCA, MDS, or an SVD).
  • Movie-centric approach: Similarly, we can learn a regression model for each movie, using all the users that rated the movie as the dataset.

Restricted Boltzmann Machines

Restricted Boltzmann Machines provide another kind of latent factor approach that can be used. See this paper for a description of how to apply them to the Netflix Prize. (In case the paper’s a little difficult to read, I wrote an introduction to RBMs a little while ago.)

Temporal Effects

Many of the models incorporate temporal effects. For example, when describing the baseline predictors above, we used a few temporal predictors that allowed a user’s rating to (linearly) depend on the time since the first rating he ever made and on the time since a movie’s first rating. We can also get more fine-grained temporal effects by, say, binning items into a couple months’ worth of ratings at a time, and allowing movie biases to change within each bin. (For example, maybe in May 2006, Time Magazine nominated Titanic as the best movie ever made, which caused a spurt in glowing ratings around that time.)

In the matrix factorization approach, user factors were also allowed to be time-dependent (e.g., maybe Bob comes to like comedy movies more and more over time). We can also give more weight to recent user actions.

Regularization

Regularization was also applied throughout pretty much all the models learned, to prevent overfitting on the dataset. Ridge regression was heavily used in the factorization models to penalize large weights, and lasso regression (though less effective) was useful as well. Many other parameters (e.g., the baseline predictors, similarity weights and interpolation weights in the neighborhood models) were also estimated using fairly standard shrinkage techniques.

Ensemble Methods

Finally, let’s talk about how all of these different algorithms were combined to provide a single rating that exploits the strengths of each model. (Note that, as mentioned above, many of these models were not trained on the raw ratings data directly, but rather on the residuals of other models.)

In the paper detailing their final solution, the winners describe using gradient boosted decision trees to combine over 500 models; previous solutions used instead a linear regression to combine the predictors.

Briefly, gradient boosted decision trees work by sequentially fitting a series of decision trees to the data; each tree is asked to predict the error made by the previous trees, and is often trained on slightly perturbed versions of the data. (For a longer description of a similar technique, see my introduction to random forests.)
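
A sketch of this kind of blending with scikit-learn’s gradient boosted trees (all data here is fake; each column of X is one base model’s prediction for a user-movie pair, and the predictors listed below can be appended as extra columns):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000  # fake (user, movie) pairs

# Each column holds one base model's predicted rating, plus a meta-feature.
X = np.column_stack([
    rng.uniform(1, 5, n),     # e.g., a matrix factorization model's predictions
    rng.uniform(1, 5, n),     # e.g., a neighborhood model's predictions
    rng.integers(1, 500, n),  # e.g., how many movies the user has rated
])
y = rng.uniform(1, 5, n)      # true ratings (fake here)

blender = GradientBoostingRegressor(n_estimators=200, max_depth=3)
blender.fit(X, y)
blended = blender.predict(X[:5])  # final blended predictions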

Since GBDTs have a built-in ability to apply different methods to different slices of the data, we can add in some predictors that help the trees make useful clusterings:

  • Number of movies each user rated
  • Number of users that rated each movie
  • Factor vectors of users and movies
  • Hidden units of a restricted Boltzmann Machine

(For example, one thing that Bell and Koren found (when using an earlier ensemble method) was that RBMs are more useful when the movie or the user has a low number of ratings, and that matrix factorization methods are more useful when the movie or user has a high number of ratings.)

Here’s a graph of the effect of ensemble size from early on in the competition (in 2007), and the authors’ take on it:

Ensemble Size vs. RMSE

However, we would like to stress that it is not necessary to have such a large number of models to do well. The plot below shows RMSE as a function of the number of methods used. One can achieve our winning score (RMSE=0.8712) with less than 50 methods, using the best 3 methods can yield RMSE < 0.8800, which would land in the top 10. Even just using our single best method puts us on the leaderboard with an RMSE of 0.8890. The lesson here is that having lots of models is useful for the incremental results needed to win competitions, but practically, excellent systems can be built with just a few well-selected models.