How to implement the activity stream in a social network

I'm developing my own social network, and I haven't found examples on the web of how to implement the stream of users' actions... For example, how do I filter actions for each user? How do I store the action events? Which data model and object model can I use for the action stream and for the actions themselves?


Summary: For about 1 million active users and 150 million stored activities, I keep it simple:

  • Use a relational database to store the unique activities (1 record per activity / "thing that happened"). Make the records as compact as you can. Structure the table so that you can quickly grab a batch of activities by activity ID, or by a set of friend IDs with time constraints.
  • Publish the activity IDs to Redis whenever an activity record is created, adding the ID to an "activity stream" list for every user who is a friend/subscriber that should see the activity.

Query Redis to get the activity stream for any user, and then grab the related data from the db as needed. Fall back to querying the db by time if the user needs to browse far back in time (if you even offer this).
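
A minimal sketch of what that read path could look like, assuming redis-py and a DB-API connection to MySQL (the stream:{user_id} key and the activity table/column names here are just placeholders):

    import redis

    r = redis.Redis()

    def activity_stream(conn, user_id, count=20):
        """Newest `count` activities for a user: IDs from Redis, rows from MySQL.

        `conn` is a DB-API connection, e.g. from pymysql.connect(...).
        """
        # Activity IDs in this user's stream list, newest first (pushed with LPUSH).
        ids = [int(i) for i in r.lrange(f"stream:{user_id}", 0, count - 1)]
        if not ids:
            return []  # could fall back to a time-based MySQL query here
        placeholders = ", ".join(["%s"] * len(ids))
        sql = f"SELECT * FROM activity WHERE id IN ({placeholders}) ORDER BY time DESC"
        with conn.cursor() as cur:
            cur.execute(sql, ids)
            return cur.fetchall()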


I use a plain old MySQL table for dealing with about 15 million activities.

It looks something like this:

id             
user_id       (int)
activity_type (tinyint)
source_id     (int)  
parent_id     (int)
parent_type   (tinyint)
time          (datetime but a smaller type like int would be better) 

activity_type tells me the type of activity, and source_id tells me the record that the activity is related to. So if the activity type means "added favorite", then I know that source_id refers to the ID of a favorite record.

The parent_id/parent_type are useful for my app - they tell me what the activity is related to. If a book was favorited, then parent_id/parent_type would tell me that the activity relates to a book (type) with a given primary key (id).

I index on (user_id, time) and query for activities where user_id IN (...friends...) AND time > some cutoff point. Ditching the id column and choosing a different clustered index might be a good idea - I haven't experimented with that.
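
A sketch of what that could look like in code (PyMySQL-style placeholders; the column types and index name are my guesses, not a definitive schema):

    # Sketch of the table above; exact types and the index name are guesses.
    CREATE_ACTIVITY = """
    CREATE TABLE IF NOT EXISTS activity (
        id            INT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id       INT UNSIGNED NOT NULL,
        activity_type TINYINT UNSIGNED NOT NULL,
        source_id     INT UNSIGNED NOT NULL,
        parent_id     INT UNSIGNED NOT NULL,
        parent_type   TINYINT UNSIGNED NOT NULL,
        time          INT UNSIGNED NOT NULL,  -- unix timestamp keeps the row small
        PRIMARY KEY (id),
        KEY user_time (user_id, time)
    )
    """

    def friends_activities(conn, friend_ids, cutoff, limit=50):
        """Recent activities by any of the given friends, newest first.

        `conn` is a DB-API connection (e.g. PyMySQL); `cutoff` is a unix timestamp.
        """
        placeholders = ", ".join(["%s"] * len(friend_ids))
        sql = (
            "SELECT id, user_id, activity_type, source_id, parent_id, parent_type, time"
            f" FROM activity WHERE user_id IN ({placeholders}) AND time > %s"
            " ORDER BY time DESC LIMIT %s"
        )
        with conn.cursor() as cur:
            cur.execute(sql, (*friend_ids, cutoff, limit))
            return cur.fetchall()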

Pretty basic stuff, but it works, it's simple, and it is easy to work with as your needs change. Also, if you aren't using MySQL you might be able to do better index-wise.


For faster access to the most recent activities, I've been experimenting with Redis. Redis stores all of its data in-memory, so you can't put all of your activities in there, but you could store enough for most of the commonly-hit screens on your site. The most recent 100 for each user or something like that. With Redis in the mix, it might work like this:

  • Create your MySQL activity record
  • For each friend of the user who created the activity, push the ID onto their activity list in Redis.
  • Trim each list to the last X items

Redis is fast and offers a way to pipeline commands across one connection - so pushing an activity out to 1000 friends takes milliseconds.
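
A rough sketch of that fan-out using redis-py (the key naming and list length are just examples):

    import redis

    r = redis.Redis()
    MAX_STREAM_LEN = 1000  # keep only the most recent N activity IDs per user

    def push_activity(activity_id, friend_ids):
        """Fan one activity ID out to every friend's stream list in a single pipelined round trip."""
        pipe = r.pipeline()
        for friend_id in friend_ids:
            key = f"stream:{friend_id}"
            pipe.lpush(key, activity_id)            # newest IDs at the head of the list
            pipe.ltrim(key, 0, MAX_STREAM_LEN - 1)  # cap the list length
        pipe.execute()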

For a more detailed explanation of what I am talking about, see Redis' Twitter example: http://redis.io/topics/twitter-clone

Update February 2011: I've got 50 million activities at the moment and I haven't changed anything. One nice thing about doing something similar to this is that it uses compact, small rows. I am planning on making some changes that would involve many more activities and more queries of those activities, and I will definitely be using Redis to keep things speedy. I'm using Redis in other areas and it really works well for certain kinds of problems.

Update July 2014: We're up to about 700K monthly active users. For the last couple of years, I've been using Redis (as described in the bulleted list) to store the last 1000 activity IDs for each user. There are usually about 100 million activity records in the system; they are still stored in MySQL and still have the same layout. These records let us get away with less Redis memory, they serve as the record of activity data, and we use them if users need to page further back in time to find something.

This wasn't a clever or especially interesting solution but it has served me well.


This is my implementation of an activity stream, using MySQL. There are three classes: Activity, ActivityFeed, and Subscriber.

Activity represents an activity entry, and its table looks like this:

id
subject_id
object_id
type
verb
data
time

subject_id is the id of the object performing the action, and object_id is the id of the object that receives the action. type and verb describe the action itself (for example, if a user adds a comment to an article, they would be "comment" and "created" respectively). data contains additional data in order to avoid joins (for example, it can contain the subject's name and surname, the article title and URL, the comment body, etc.).
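
For illustration, recording the "user added a comment" example as a row could look roughly like this (the table name, column types, and helper names here are my assumptions about this layout, not its actual code):

    import json
    import time

    def record_comment_activity(conn, user_id, comment_id, extra):
        """Insert one Activity row for "user created a comment" (my reading of the layout above).

        `conn` is a DB-API connection; `extra` is a dict of denormalized fields
        (user name, article title/URL, comment body, ...) stored in data.
        """
        row = {
            "subject_id": user_id,     # who performed the action
            "object_id":  comment_id,  # what receives the action (assumed: the new comment)
            "type":       "comment",
            "verb":       "created",
            "data":       json.dumps(extra),
            "time":       int(time.time()),  # assuming a unix-timestamp column
        }
        cols = ", ".join(row)
        placeholders = ", ".join(["%s"] * len(row))
        with conn.cursor() as cur:
            cur.execute(f"INSERT INTO activity ({cols}) VALUES ({placeholders})",
                        list(row.values()))
        conn.commit()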

Each Activity belongs to one or more ActivityFeeds, and they are related by a table that looks like this:

feed_name
activity_id

In my application I have one feed for each User and one feed for each Item (usually blog articles), but they can be whatever you want.

A Subscriber is usually a user of your site, but it can also be any object in your object model (for example, an article could be subscribed to the feed_action of its creator).

Every Subscriber belongs to one or more ActivityFeeds, and, like above, they are related by a link table of this kind:

feed_name
subscriber_id
reason

The reason field here explains why the subscriber has subscribed to the feed. For example, if a user bookmarks a blog post, the reason is 'bookmark'. This helps me later when filtering actions for notifications to users.

To retrieve the activity for a subscriber, I do a simple join of the three tables. The join is fast because I select only a few activities, thanks to a WHERE condition that keeps only recent ones (something like time > now() - a few hours). I avoid other joins thanks to the data field in the Activity table.
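
The join might look something like this (a sketch only; I'm calling the link tables activity_feed and feed_subscriber here):

    def subscriber_activities(conn, subscriber_id, hours=24):
        """Recent activities from all feeds this subscriber follows (join of the three tables)."""
        sql = """
            SELECT DISTINCT a.*
            FROM feed_subscriber fs
            JOIN activity_feed af ON af.feed_name = fs.feed_name
            JOIN activity a       ON a.id = af.activity_id
            WHERE fs.subscriber_id = %s
              AND a.time > NOW() - INTERVAL %s HOUR   -- only recent activities
            ORDER BY a.time DESC
        """
        with conn.cursor() as cur:
            cur.execute(sql, (subscriber_id, hours))
            return cur.fetchall()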

Further explanation of the reason field. Suppose, for example, that I want to filter actions for email notifications to the user. If the user bookmarked a blog post (and so subscribed to the post's feed with reason 'bookmark'), I don't want them to receive email notifications about actions on that item; but if they commented on the post (and so subscribed to the post's feed with reason 'comment'), I do want them to be notified when other users add comments to the same post. The reason field helps me make this distinction (I implemented it through an ActivityFilter class), together with the user's notification preferences.
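
A sketch of the reason-based part of that filtering, for a new comment on a post (names are mine, and it ignores the per-user notification preferences):

    def subscribers_to_notify(conn, post_feed_name, actor_id):
        """Subscribers of a post's feed who should get an email about a new comment."""
        sql = """
            SELECT fs.subscriber_id
            FROM feed_subscriber fs
            WHERE fs.feed_name = %s
              AND fs.reason = 'comment'      -- people who only bookmarked the post are skipped
              AND fs.subscriber_id <> %s     -- don't notify whoever performed the action
        """
        with conn.cursor() as cur:
            cur.execute(sql, (post_feed_name, actor_id))
            return [row[0] for row in cur.fetchall()]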


There is a format for activity streams currently being developed by a group of well-known people.

http://activitystrea.ms/.

Basically, every activity has an actor (who performs the activity), a verb (the action of the activity), an object (which the actor performs the action on), and a target.

For example: Max has posted a link to Adam's wall.

Their JSON spec has reached version 1.0 at the time of writing, and it shows the pattern for activities that you can apply.
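
For the example above, an activity in roughly that shape could look like this (written as a Python dict; the values and object types are made up, so check the spec for the exact schema):

    # Roughly the actor / verb / object / target shape from the Activity Streams 1.0
    # JSON spec, for "Max has posted a link to Adam's wall" (all values made up).
    activity = {
        "published": "2011-02-10T15:04:55Z",
        "actor":  {"objectType": "person", "id": "urn:example:max", "displayName": "Max"},
        "verb":   "post",
        "object": {"objectType": "bookmark", "url": "http://example.org/a-link"},
        # The target's objectType would be application-specific (a "wall" here).
        "target": {"id": "urn:example:adam-wall", "displayName": "Adam's wall"},
    }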

Their format has already been adopted by the BBC, Gnip, Google Buzz, Gowalla, IBM, MySpace, Opera, Socialcast, Superfeedr, TypePad, Windows Live, YIID, and many others.


I think an explanation of how notification systems work on large websites can be found in the Stack Overflow question "How does social networking websites compute friends updates?", in Jeremy Wall's answer. He suggests the use of a message queue and points to two open source projects that implement one:

  1. RabbitMQ
  2. Apache QPid

See also the question What’s the best manner of implementing a social activity stream?


You absolutely need a performant and distributed message queue. But it does not end there; you'll also have to make decisions about what to store as persistent data and what to keep transient, etc.

Anyway, it is really a difficult task, my friend, if you are after a high-performance, scalable system. But of course, some generous engineers have shared their experience on this. LinkedIn recently open-sourced its message queue system, Kafka. Before that, Facebook had already provided Scribe to the open source community. Kafka is written in Scala; it takes some time to get it running at first, but I tested it with a couple of virtual servers and it is really fast.

http://blog.linkedin.com/2011/01/11/open-source-linkedin-kafka/

http://incubator.apache.org/kafka/index.html