Agenda

Why is Scalability important?
Introduction to the Variables and Factors
Building our own Scalable Architecture (in incremental steps):
- Vertical Scaling
- Vertical Partitioning
- Horizontal Scaling
- Horizontal Partitioning
- etc.

RoR / Grails and the rest of the dynamic language landscape are gaining popularity. In the end you want to build a Web 2.0 app that can serve millions of users with ZERO downtime.
The Variables

- Scalability - number of users / sessions / transactions / operations the entire system can perform
- Performance - optimal utilization of resources
- Responsiveness - time taken per operation
- Availability - probability of the application, or a portion of the application, being available at any given point in time
- Cost
- Maintenance Effort

We want High: scalability, availability, performance & responsiveness
We want Low: downtime impact, cost & maintenance effort
Creative Commons Sharealike Attributions Noncommercial

The Factors

- Platform selection
- Hardware
- Application Design
- Database/Datastore Structure and Architecture
- Deployment Architecture
- Storage Architecture
- Abuse prevention
- Monitoring mechanisms
- and more
Let's Start

We will now build an example architecture for an example app using the following iterative, incremental steps:
- Inspect current Architecture
- Identify Scalability Bottlenecks
- Identify SPOFs and Availability Issues
- Identify Downtime Impact Risk Zones
- Apply one of: Vertical Scaling / Vertical Partitioning / Horizontal Scaling / Horizontal Partitioning
- Repeat the process
(Diagram: Vertical Scaling - adding CPUs to a single server)

Advantages
- Simple to implement

Disadvantages
- Finite limit
- Hardware does not scale linearly (diminishing returns for each incremental unit)
- Requires downtime
- Increases Downtime Impact
- Incremental costs increase exponentially
Introduction

Deploying each service on a separate node (e.g. the App Server and DB Server on separate machines)

Positives
- Increases per-application Availability
- Task-based specialization, optimization and tuning become possible
- Reduces context switching
- Simple to implement for out-of-band processes
- No changes to the App required
- Increases flexibility

Negatives
- Sub-optimal resource utilization
- May not increase overall availability
- Finite Scalability
Vertical Partitioning can be performed at various layers (App / Server / Data / Hardware etc)
Introduction

Increasing the number of nodes of the App Server through Load Balancing. Referred to as "Scaling Out" the App Server.

(Diagram: a Load Balancer distributing requests across three App Servers, backed by a single DB Server)
Horizontal Scaling can be performed for any particular type of node (AppServer / DBServer etc)
Sticky Sessions

Requests for a given user are always sent to a fixed App Server.

Observations
- Asymmetrical load distribution (especially during downtimes)
- Downtime Impact: loss of session data

(Diagram: Load Balancer routing User 1 and User 2 each to their assigned App Server)
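A minimal sketch of how sticky routing can be implemented, here by hashing the session id onto a server pool (the server names and pool are hypothetical, not from the slides):

```python
import hashlib

APP_SERVERS = ["app1", "app2", "app3"]  # hypothetical server pool

def pick_server(session_id, servers=APP_SERVERS):
    """Pin every request for a session to the same App Server.

    Deterministic: the same session id always maps to the same server.
    If a server goes down and is removed from the pool, most sessions
    get remapped -- one reason sticky sessions amplify downtime impact.
    """
    digest = hashlib.md5(session_id.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Real load balancers usually pin via a cookie or source IP rather than a hash, but the observable behavior (fixed server per user, remapping on failure) is the same.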
(Diagram: App Servers reading and writing session data to a shared Session Store)
Recommendation

- Use Clustered Session Management if you have a smaller number of App Servers and fewer session writes
- Use a Central Session Store everywhere else
- Use sticky sessions only if you have to
Active-Active LB

(Diagrams: redundant Load Balancer pairs in front of the App Servers, so the Load Balancer itself is no longer a single point of failure)
(Diagram: a single DB Server)

Negatives
- Finite Scalability
Introduction

Partitioning out the Storage function using a SAN (Storage Area Network)

Positives
- Allows Scaling Up the DB Server
- Boosts Performance of the DB Server

Negatives
- Increases Cost

(Diagram: DB Server with its storage moved onto a SAN)
Introduction

Increasing the number of DB nodes. Referred to as "Scaling Out" the DB Server.

Options
- Shared Nothing Cluster
- Real Application Cluster (or Shared Storage Cluster)
(Diagram: Shared Nothing Cluster - three DB Servers, each with its own independent Database)
Replication Considerations

Master-Slave
- Writes are sent to a single master, which replicates the data to multiple slave nodes
- Replication may be cascaded
- Simple setup
- No conflict management required

Multi-Master
- Writes can be sent to any of the multiple masters, which replicate them to other masters and slaves
- Conflict Management required
- Deadlocks possible if the same data is simultaneously modified at multiple places
Replication Considerations

Asynchronous
- Guaranteed, but out-of-band replication from Master to Slave
- Master updates its own db and returns a response to the client; replication from Master to Slave takes place asynchronously
- Faster response to the client
- Slave data is marginally behind the Master
- Requires modification to the App to send critical reads and writes to the master, and load balance all other reads

Synchronous
- Guaranteed, in-band replication from Master to Slave
- Master updates its own db, and confirms all slaves have updated their db before returning a response to the client
- Slower response to the client
- Slaves have the same data as the Master at all times
- Requires modification to the App to send writes to the master and load balance all reads
Replication Considerations

(Diagram: Shared Storage Cluster - multiple DB Servers sharing a single Database on a SAN)
Recommendation

- Try and choose a DB which natively supports Master-Slave replication
- Use Master-Slave Async replication
- Write your DAO layer to ensure that:
  - writes are sent to a single DB
  - reads are load balanced
  - critical reads are sent to the master
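A sketch of such a DAO routing layer (the connection handles here are plain strings standing in for real DB connections; class and method names are illustrative):

```python
import itertools

class ReplicatedDAO:
    """Route writes and critical reads to the master; load-balance
    all other reads across the slave replicas (round robin)."""

    WRITE_PREFIXES = ("insert", "update", "delete")

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)  # round-robin over slaves

    def connection_for(self, sql, critical_read=False):
        """Pick the connection a given statement should run on."""
        verb = sql.lstrip().lower()
        if verb.startswith(self.WRITE_PREFIXES) or critical_read:
            return self.master
        return next(self._slaves)
```

With async Master-Slave replication, the `critical_read=True` flag lets the few reads that must see the latest write bypass the marginally stale slaves.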
(Diagram: a DB Cluster of multiple DB nodes on a SAN)

Negatives
- Finite limit
Introduction

Increasing the number of DB Clusters by dividing the data

Options
- Vertical Partitioning - dividing by tables / columns
- Horizontal Partitioning - dividing by rows (value)

(Diagram: multiple DB Clusters, each on a SAN)
- Each DB Cluster has different tables
- Application code, DAO / Driver code, or a proxy knows where a given table is and directs queries to the appropriate DB
- Can also be done at a column level by moving a set of columns into a separate table

DB Cluster 1: Table 1, Table 2
DB Cluster 2: Table 3, Table 4
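The "knows where a given table is" part can be as simple as a static map maintained in the DAO layer (the table and cluster names below are hypothetical):

```python
# Hypothetical table-to-cluster map, maintained alongside the schema.
TABLE_TO_CLUSTER = {
    "table1": "db_cluster_1",
    "table2": "db_cluster_1",
    "table3": "db_cluster_2",
    "table4": "db_cluster_2",
}

def cluster_for_table(table_name):
    """Direct a query to the DB Cluster that owns the given table."""
    return TABLE_TO_CLUSTER[table_name]
```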
Negatives
- One cannot perform SQL joins or maintain referential integrity across partitions (referential integrity is, as such, overrated)
- Finite Limit

(Diagram: App Cluster directing queries to DB Cluster 1 (Table 1, Table 2) and DB Cluster 2 (Table 3, Table 4))
- Each DB Cluster has identical tables
- Application code, DAO / Driver code, or a proxy knows where a given row is and directs queries to the appropriate DB

Negatives
- SQL unions for search-type queries must be performed within code

DB Cluster 1: Table 1, Table 2, Table 3, Table 4 - 1 million users
DB Cluster 2: Table 1, Table 2, Table 3, Table 4 - 1 million users
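Since one SQL query cannot span both clusters, a search has to be run against each partition and the results merged in application code. A minimal sketch, with in-memory lists standing in for per-cluster query results:

```python
def search_all_partitions(clusters, predicate):
    """Run the same search on every DB Cluster and union the
    results in code -- the cross-partition query SQL cannot do."""
    matches = []
    for rows in clusters:  # one result set per DB Cluster
        matches.extend(row for row in rows if predicate(row))
    return matches
```

In practice each `rows` would come from an identical `SELECT` issued to that cluster, and sorting/paging across the merged result also has to happen in code.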
Techniques

- FCFS - the 1st million users are stored on cluster 1 and the next million on cluster 2
- Round Robin
- Least Used (Balanced) - each time a new user is added, the DB cluster with the least users is chosen
- Hash based - a hashing function is used to determine the DB Cluster in which the user data should be inserted
- Value based - user ids 1 to 1 million stored on cluster 1, OR all users with names starting A-M on cluster 1

Except for Hash and Value based, all other techniques also require an independent lookup map, mapping each user to a Database Cluster. This map itself will be stored on a separate DB (which may further need to be replicated).
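The two techniques that avoid a lookup map can be sketched directly (cluster numbering and bucket size below are illustrative):

```python
def cluster_hash_based(user_id, num_clusters=2):
    """Hash based: the cluster is derived from the user id itself.
    Note: changing num_clusters remaps most existing users."""
    return user_id % num_clusters + 1  # clusters numbered from 1

def cluster_value_based(user_id, users_per_cluster=1_000_000):
    """Value based: ids 1..1M on cluster 1, the next million on 2."""
    return (user_id - 1) // users_per_cluster + 1
```

FCFS, Round Robin and Least Used cannot be recomputed from the id alone, which is exactly why they need the separate lookup map.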
(Diagram: two DB Clusters on a SAN, each holding a partition of the data)

Note: This is not the same as the table partitioning provided by the db (e.g. MSSQL). We may actually want to further segregate these into Sets, each serving a collection of users (refer next slide).
(Diagram: a Lookup Map directing each user to one of two Sets, each Set with its own DB Clusters and SAN)
Creating Sets

- The goal behind creating Sets is easier manageability
- Each Set is independent and handles transactions for a set of users
- Each Set is architecturally identical to the others
- Each Set contains the entire application with all its data structures
- Sets can even be deployed in separate datacenters
- Users may even be added to a Set that is closer to them in terms of network latency
Global Redirector

Negatives
- Aggregation of data across Sets is complex
- Users may need to be moved across Sets if sizing is improper
- Global App settings and preferences need to be replicated across Sets

(Diagram: a Global Redirector routing users to SET 1 or SET 2, each Set containing its own DB Clusters and SAN)
Step 9 Caching

Software
- Memcached
- Terracotta (Java only)
- Coherence (commercial, expensive data grid by Oracle)
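The usual pattern with any of these is cache-aside: check the cache, fall back to the datastore, then populate the cache. A self-contained sketch, with an in-process class standing in for a memcached-style client (the `TTLCache` class and the dict-backed "DB" are illustrative, not a real client API):

```python
import time

class TTLCache:
    """In-process stand-in for a memcached-style cache (get/set + TTL)."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        value, expires_at = self._store.get(key, (None, 0.0))
        return value if time.time() < expires_at else None

    def set(self, key, value, ttl_seconds=60):
        self._store[key] = (value, time.time() + ttl_seconds)

def get_user(cache, db, user_id):
    """Cache-aside read: hit the cache first, fall back to the DB."""
    key = "user:%d" % user_id
    user = cache.get(key)
    if user is None:
        user = db[user_id]          # stand-in for a real DB query
        cache.set(key, user)
    return user
```

Note the classic trade-off this exposes: until the TTL expires, readers may see stale data, which is why writes often delete or update the cached key as well.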
Solutions
- Nginx (HTTP / IMAP)
- Perlbal
- Hardware accelerators plus Load Balancers
Grid Computing

- Java - GridGain
- Erlang - natively built in
RDBMS
- MySQL, MSSQL and Oracle support native replication
- Postgres supports replication through 3rd-party software (Slony)
- Oracle supports Real Application Clustering
- MySQL uses locking and arbitration, while Postgres/Oracle use MVCC (MSSQL just recently introduced MVCC)

Cache
- Terracotta vs memcached vs Coherence
Tips

- All the techniques we learnt today can be applied in any order
- Try and incorporate Horizontal DB partitioning by value from the beginning into your design
- Loosely couple all modules
- Implement a RESTful framework for easier caching
- Perform application sizing on an ongoing basis to ensure optimal utilization of hardware