Question

So assume I'm building Netflix and I want to log each view by the userID and the movieID,

so it's like viewID, userID, movieID, timestamp.

However, in order to scale this, assume we're getting 1,000 views a second (just to be crazy). Would it make sense to queue these views to SQS and then have our queue readers dequeue them one by one and write them to the MySQL database? That way the database isn't overloaded with write requests.
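Roughly what I'm picturing on the write path, just as a sketch (boto3; the queue URL is made up):

```python
import json
import time
import uuid

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/view-events"  # placeholder

def log_view(user_id, movie_id):
    """Enqueue the view instead of writing to MySQL inline with the request."""
    event = {
        "viewID": str(uuid.uuid4()),
        "userID": user_id,
        "movieID": movie_id,
        "timestamp": int(time.time()),
    }
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
```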

Does this look like it would work?


Solution

Faisal,

This is a reasonable architecture; however, you should know that writing to SQS (a remote HTTP call) is going to be many times slower per message than writing to something like RabbitMQ (or any local message queue).

By default, SQS FIFO queues support up to 3,000 messages per second with batching, or up to 300 messages per second (300 send, receive, or delete operations per second) without batching. To go higher, you need to file a support request for a limit increase. Note that your 1,000 views per second already exceeds the unbatched FIFO limit, so you'd want batching from the start.
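To illustrate the batching math: SendMessageBatch accepts up to 10 messages per call, and each call counts as one operation against the 300/second limit, which is where the 3,000 figure comes from. A sketch (the queue URL is a placeholder, and the group/dedup IDs assume a FIFO queue; a standard queue can drop them):

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/view-events.fifo"  # placeholder

def send_views_batched(events):
    """Pack up to 10 view events into each SendMessageBatch call."""
    for i in range(0, len(events), 10):
        entries = [
            {
                "Id": str(n),
                "MessageBody": json.dumps(e),
                "MessageGroupId": str(e["userID"]),          # required on FIFO queues
                "MessageDeduplicationId": str(e["viewID"]),  # or enable content-based dedup
            }
            for n, e in enumerate(events[i : i + 10])
        ]
        sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)
```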

That being said, starting with SQS wouldn't be a bad idea since it is easy to use and debug.
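On the consumption side, the same batching trick applies: long-poll up to 10 messages at a time and write them to MySQL in a single multi-row INSERT before deleting the batch. A minimal sketch, assuming a "views" table with matching columns (queue URL and connection details are placeholders):

```python
import json

import boto3
import mysql.connector

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/view-events.fifo"  # placeholder

db = mysql.connector.connect(
    host="localhost", user="app", password="...", database="views_db"  # placeholders
)

def drain_once():
    """Long-poll up to 10 messages, insert them in one statement, then ack."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    messages = resp.get("Messages", [])
    if not messages:
        return
    events = [json.loads(m["Body"]) for m in messages]
    rows = [(e["viewID"], e["userID"], e["movieID"], e["timestamp"]) for e in events]
    cur = db.cursor()
    cur.executemany(
        "INSERT INTO views (view_id, user_id, movie_id, ts) VALUES (%s, %s, %s, %s)",
        rows,
    )
    db.commit()
    cur.close()
    # Ack only after the commit succeeds; a crash before this just means redelivery.
    sqs.delete_message_batch(
        QueueUrl=QUEUE_URL,
        Entries=[
            {"Id": str(n), "ReceiptHandle": m["ReceiptHandle"]}
            for n, m in enumerate(messages)
        ],
    )
```

Deleting only after the commit gives you at-least-once delivery, so the worst case is a duplicate row rather than a lost view.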

Additionally, you may want to investigate MongoDB for logging. Check out the following references:

- MongoDB is Fantastic for Logging: http://blog.mongodb.org/post/172254834/mongodb-is-fantastic-for-logging
- Capped Collections: http://blog.mongodb.org/post/116405435/capped-collections
- Using MongoDB for Real-time Analytics: http://blog.mongodb.org/post/171353301/using-mongodb-for-real-time-analytics
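For a taste of the capped-collection approach, here is a minimal pymongo sketch (the collection name and size cap are arbitrary):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client.views_db

# A capped collection is a fixed-size circular buffer: inserts are cheap and
# insertion-ordered, and once the cap is hit the oldest entries are overwritten.
if "views" not in db.list_collection_names():
    db.create_collection("views", capped=True, size=100 * 1024 * 1024)  # 100 MB cap

db.views.insert_one({"userID": 42, "movieID": 7, "timestamp": 1700000000})
```

Because the collection overwrites its oldest documents once full, it works as a lightweight rolling log without any cleanup jobs.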
