Saturday, July 25, 2020

Architecture Patterns for User-Specific Login Content

Been watching a lot of system design videos and a concept isn't clicking.

Consider a system like Twitter where a user's feed displays the 20 most recent tweets from their friends. These tweets are pre-computed and ready before the user comes online, so the feed page loads quickly.
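To make the read path concrete, here's a minimal sketch of what "pre-computed and ready" means: the feed page is a single key-value lookup, not a query over all the friends' tweets. A plain dict stands in for the KV store, and the key scheme and data are illustrative, not Twitter's actual design:

```python
# Illustrative stand-in for a KV store of materialized feeds,
# keyed by user ID, newest entries first.
feeds = {
    "alice": [f"tweet-{i}" for i in range(100)],
}

def load_feed(user_id: str, n: int = 20) -> list[str]:
    """Login read path: return the n most recent pre-computed entries."""
    return feeds.get(user_id, [])[:n]

print(load_feed("alice"))  # the 20 most recent pre-computed tweet IDs
```

In a real deployment this lookup might be something like Redis `LRANGE timeline:<user_id> 0 19`; the point is that all the expensive work happened before the user showed up.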

My question: how can each of Twitter's 330 million users have a personalized queue ready? At first I thought maybe a literal Amazon SQS queue existed for every Twitter user, but it seems that many SQS queues is not feasible (although maybe through virtual queues...?). Then I figured maybe the page is actually reading from some optimized DB... unlikely.

So what are some patterns for queueing user-specific messages in a distributed system? The same applies to Gmail, WhatsApp, the Facebook feed, etc.: anything where, when a user logs in, they have predetermined content waiting. Where specifically do these messages live? Are they in a KV store, for example? Are they in a literal queue somewhere?

I do understand this is part of the fanout pattern, but am more so interested on where the content lives after it's been fanned out.

Here's a really great tutorial on Twitter architecture where a KV store is used, and I'm wondering if this is industry-standard. If not, what else?
