Scale to Millions of Users
How to design a system that supports millions of users? What are the challenges to scaling it up to serve millions of users?
Here's the single-server setup: the application code, assets such as media files, and the database all run on one instance.
If this single server breaks, the entire system is down.
Now let's see how we can break it down into scalable components and scale it horizontally.
- The web tier must be stateless. Move session data and user-uploaded files to shared storage, e.g., S3, so any server can handle any request.
- Add a load balancer. It evenly distributes incoming traffic across the stateless web servers, so we can add more servers to the pool as load grows.
- Database replication (master-slave model). The master database handles only write operations; slave databases serve only reads. This improves both performance and site reliability.
- Use a NoSQL database for non-relational workloads such as caching and message queues (explained below).
- Cache frequently accessed data. This reduces calls to the database and improves response time, so cache as much data as we can.
- Host static content on a CDN. Load times improve when users fetch static files such as media from the nearest CDN edge server.
- Use a message queue for asynchronous communication. Decouple heavy work such as photo processing into background jobs.
- Send logs to an external service like Elasticsearch and monitor them from one central dashboard. The same applies to server metrics (CPU, memory, disk I/O, health checks) and business metrics (active users, revenue analytics).
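The load-balancing step above can be sketched as a simple round-robin distributor. This is a minimal illustration, not a real load balancer: the server names and the `RoundRobinBalancer` class are hypothetical, and because the web tier is stateless, new servers can join the pool at any time.

```python
import itertools

class RoundRobinBalancer:
    """Cycles through a pool of stateless web servers (illustrative only)."""

    def __init__(self, servers):
        self._pool = list(servers)
        self._cycle = itertools.cycle(self._pool)

    def next_server(self):
        # Each incoming request goes to the next server in the rotation.
        return next(self._cycle)

    def add_server(self, server):
        # Stateless servers can be added to the pool at any time;
        # restarting the rotation is fine because no server holds session state.
        self._pool.append(server)
        self._cycle = itertools.cycle(self._pool)

balancer = RoundRobinBalancer(["web1", "web2"])
print([balancer.next_server() for _ in range(4)])  # ['web1', 'web2', 'web1', 'web2']
```

Real load balancers also weigh servers by capacity and skip unhealthy ones via health checks; round robin is just the simplest policy.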
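The master-slave replication idea reduces to a routing rule: writes go to the master, reads are spread across replicas. The sketch below uses in-memory dicts as hypothetical stand-ins for real database connections, and replicates synchronously for simplicity; production systems typically replicate asynchronously.

```python
import random

class ReplicatedDB:
    """Routes writes to the master and reads to a random replica (toy model)."""

    def __init__(self, num_replicas=2):
        self.master = {}                                    # accepts writes only
        self.replicas = [{} for _ in range(num_replicas)]   # serve reads only

    def write(self, key, value):
        self.master[key] = value
        # Real systems replicate asynchronously; here we copy immediately.
        for replica in self.replicas:
            replica[key] = value

    def read(self, key):
        # Spread read load across replicas so the master handles only writes.
        return random.choice(self.replicas).get(key)

db = ReplicatedDB()
db.write("user:1", "alice")
print(db.read("user:1"))  # alice
```

With asynchronous replication, a read can briefly return stale data (replication lag), which is the usual trade-off for the performance and reliability gains.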
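The caching step is commonly implemented as the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache for subsequent reads. A minimal sketch, using a plain dict as a stand-in for Redis or Memcached (the `CacheAside` class and `db_calls` counter are illustrative):

```python
class CacheAside:
    """Cache-aside reads: cache first, database on a miss, then populate."""

    def __init__(self, database):
        self.cache = {}       # stands in for Redis/Memcached
        self.db = database    # stands in for the real database
        self.db_calls = 0     # counts how many reads actually hit the database

    def get(self, key):
        if key in self.cache:
            return self.cache[key]        # cache hit: no database round trip
        self.db_calls += 1                # cache miss: go to the database
        value = self.db.get(key)
        if value is not None:
            self.cache[key] = value       # populate for subsequent reads
        return value

store = CacheAside({"user:1": "alice"})
store.get("user:1")    # miss: hits the database
store.get("user:1")    # hit: served from cache
print(store.db_calls)  # 1
```

A real cache also needs an expiration (TTL) and invalidation-on-write policy so cached entries don't serve stale data indefinitely.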
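The message-queue step can be sketched with Python's standard `queue` and `threading` modules: the web tier enqueues heavy work (here, a hypothetical photo-thumbnail job) and returns immediately, while a background worker drains the queue. Real deployments would use a broker such as RabbitMQ or Kafka instead of an in-process queue.

```python
import queue
import threading

jobs = queue.Queue()
processed = []

def worker():
    # Background worker drains the queue so web requests can return immediately.
    while True:
        photo = jobs.get()
        if photo is None:                # sentinel: shut the worker down
            break
        processed.append(f"thumbnail:{photo}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The web tier enqueues heavy work instead of doing it inline.
for photo in ["a.jpg", "b.jpg"]:
    jobs.put(photo)

jobs.join()       # wait until all queued work is done
jobs.put(None)    # stop the worker
t.join()
print(processed)  # ['thumbnail:a.jpg', 'thumbnail:b.jpg']
```

The decoupling also absorbs traffic spikes: producers and consumers scale independently, and the queue buffers work when consumers fall behind.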
Still not enough? An API gateway comes to the rescue.
Read more: