Table of Contents
Chapter 1: Enterprise Challenges
Chapter 2:
Chapter 3: Test Results
Chapter 4: Conclusion
Chapter 1: Enterprise Challenges
High information system productivity demands low latency, and I/O latency requirements are most stringent when running mission-critical business applications. For any IT deployment, the greatest challenge is balancing low latency, high efficiency, and a high system-utilization rate.
The degree of I/O latency in a storage system is determined by two factors: the I/O workload pattern and the capabilities of the storage media. The majority of business applications (e.g., OLTP databases or email services) generate random IOPS, accessing data stored non-contiguously on the system's disks. Because the required bits of data are not located in physical proximity to one another, the system performs numerous seek operations, thereby increasing I/O latency.
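The cost of non-contiguous access can be illustrated with a small model. The sketch below (an illustration, not Synology code) counts how many head repositionings a sequence of block reads would force, under the simplifying assumption that every jump to a non-adjacent block costs one seek:

```python
import random

def count_seeks(block_addresses):
    """Count head repositionings under a simplified model:
    every jump to a non-adjacent block costs one seek."""
    seeks = 0
    for prev, curr in zip(block_addresses, block_addresses[1:]):
        if curr != prev + 1:  # next block is not physically contiguous
            seeks += 1
    return seeks

sequential = list(range(1000))                # contiguous scan of 1000 blocks
random_io = random.sample(range(1000), 1000)  # same blocks, random order

print(count_seeks(sequential))  # 0
print(count_seeks(random_io))   # close to 999: nearly every access seeks
```

Under this model a sequential scan never seeks, while the same blocks read in random order seek on almost every access, which is why random IOPS workloads are so latency-bound on rotating media.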
Traditionally, to overcome the high I/O latency caused by random IOPS workloads, a larger-than-necessary number of disks may be deployed to increase the number of disk heads and reduce the chance of two consecutive reads landing on the same disk, thereby boosting access performance. However, over-deployment has several drawbacks, including lower efficiency and reduced overall system utilization. More specifically, increasing the number of disks necessarily increases the number of enclosures, the amount of space required, and the power consumed for operation and cooling, and ultimately leads to higher maintenance costs. Moreover, the system-utilization rate may diminish as unnecessary capacity is added just to reach the requisite number of heads.
1 Please visit the Synology website (www.synology.com) for applicable models supporting SSD cache.
Chapter 2:
In most applications, there are observable patterns in data retrieval and workload due to the particular I/O characteristics of the application's behavior. For instance, in an OLTP database workload, some tables in the database are read more frequently than others. The most frequently accessed data is termed hot data.
Of this hot data subset, the most recent data has an even higher probability of being frequently accessed. In the
majority of critical, business workloads, the most recently accessed data is also the most relevant and therefore
in need of timely retrieval.
As hot data is merely the portion of the whole data set receiving the most intensive I/O requests, a small number of SSDs can be used to cache all of it, leveraging their superior I/O capabilities to significantly improve system performance.
1 To ensure the best performance, the SSDs should be of the same size, brand, and model, and be listed on Synology's official Hardware Compatibility List.
If further requests for the same data are generated, read operations will be conducted on the SSD in what is termed a cache hit. As the data is retrieved from the cache, read performance is enhanced.
Conversely, when a read request is sent for data that is not located in the SSD cache, the situation is termed a cache miss. A disk read operation is triggered after a cache miss, and a copy of the requested data is written to the SSD cache to accelerate the read speed of any future requests.
The SSD cache manipulates all data at the block level. For instance, when reading a 400 KB piece of data from a 4 GB file, only the relevant 400 KB of data will be accessed. Furthermore, if these 400 KB of data are absent from the cache, the system will read them from the disks and copy them into the SSD cache.
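This block-granular behavior can be sketched as follows. The model below is a simplification for illustration (the 4 KB block size and the class design are assumptions, not the actual DSM implementation):

```python
BLOCK = 4096  # assumed 4 KB cache block size, for illustration only

class BlockCache:
    """Minimal block-level read cache model: only the blocks
    covering the requested byte range are touched, never the
    whole file."""
    def __init__(self):
        self.cached = set()  # keys are (file_id, block_index)
        self.hits = 0
        self.misses = 0

    def read(self, file_id, offset, length):
        first = offset // BLOCK
        last = (offset + length - 1) // BLOCK
        for idx in range(first, last + 1):
            key = (file_id, idx)
            if key in self.cached:
                self.hits += 1        # served from the SSD cache
            else:
                self.misses += 1      # read from disk on a miss...
                self.cached.add(key)  # ...and copy the block into cache

cache = BlockCache()
cache.read("bigfile", offset=0, length=400 * 1024)  # 400 KB of a 4 GB file
print(len(cache.cached))  # 100 blocks cached, not the whole file
cache.read("bigfile", offset=0, length=400 * 1024)  # re-read the same range
print(cache.hits)         # 100: the second read is all cache hits
```

Note that only the 100 blocks covering the requested 400 KB enter the cache; the remaining gigabytes of the file consume no cache space.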
As the memory map in the SSD cache is empty at the start, almost every read operation causes a cache miss. Meanwhile, copies of data are continually added to the SSD cache. This period, composed mainly of copy operations, is called warm-up. A warm-up period can also occur if the working data set changes drastically, so that the currently cached data is no longer requested. A high cache hit rate indicates that the SSD cache is being fully utilized, and this indicator grows throughout the warm-up period.
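The warm-up behavior can be reproduced with a toy simulation. All the numbers below are assumptions for illustration (a 200-block cache, a 150-block hot set receiving 90% of requests, FIFO eviction); they do not reflect the actual cache policy:

```python
import random

random.seed(42)

CACHE_BLOCKS = 200
HOT = list(range(150))          # hot set: fits in the cache
COLD = list(range(150, 10000))  # cold data: mostly misses

cache, order = set(), []        # FIFO eviction order
hits = total = 0
hit_rate_log = []               # cumulative hit rate per 1000 requests

for i in range(5000):
    # 90% of requests target the hot set (assumed skew)
    block = random.choice(HOT if random.random() < 0.9 else COLD)
    total += 1
    if block in cache:
        hits += 1
    else:
        cache.add(block)
        order.append(block)
        if len(cache) > CACHE_BLOCKS:
            cache.remove(order.pop(0))  # evict the oldest block
    if (i + 1) % 1000 == 0:
        hit_rate_log.append(round(hits / total, 3))

print(hit_rate_log)  # the hit rate climbs as the cache warms up
```

Early requests are dominated by misses and copy operations; once the hot set is resident, the cumulative hit rate rises toward the fraction of requests that target hot data.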
The system hit rate depends on two factors: the size of the SSD cache and the size of the hot data. A higher hit rate requires that more of the hot data be stored in the SSD cache. For instance, in a 2 TB file server with 100 GB of frequently accessed data, the recommended cache size would be slightly above 100 GB. However, as the hot data is only 100 GB, the additional benefit of a 500 GB SSD cache configuration would be limited.
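A back-of-envelope model makes this sizing example concrete. The formula below is an assumption for illustration (it treats cold requests as always missing and hot requests as hitting in proportion to how much of the hot set fits in the cache); it is not a Synology sizing rule:

```python
def steady_state_hit_rate(cache_gb, hot_gb, hot_fraction=0.9):
    """Rough model: hot_fraction of requests target hot data, and
    they hit the cache only for the share of the hot set that fits."""
    hot_covered = min(cache_gb / hot_gb, 1.0)
    return hot_fraction * hot_covered

# File server with 100 GB of frequently accessed (hot) data:
print(steady_state_hit_rate(50, 100))   # 0.45: cache is half the hot set
print(steady_state_hit_rate(110, 100))  # 0.9: slightly above the hot set
print(steady_state_hit_rate(500, 100))  # 0.9: extra capacity adds nothing
```

The model shows why the recommendation is "slightly above the hot-data size": once the whole hot set fits, further cache capacity no longer raises the hit rate.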
You can view read hit rate information in the management user interface of Storage Manager.
Figure 8: Cache read hit rate information displayed in the management user interface
Additionally, a mapping table is created in RAM to track the SSD cache data. Therefore, RAM size must be proportional to the size of the SSD cache: every 1 GB of SSD cache requires approximately 400 KB of system memory. On Intel-based Synology NAS models, only 1/4 of the pre-installed system memory can be used for the SSD cache. On Annapurna-based Synology NAS models, the total size of the SSD cache is limited to 1 TB.
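These figures translate into simple arithmetic. The helper below is just that arithmetic (the 400 KB per GB and 1/4-of-RAM figures come from the text; the function names are my own):

```python
def mapping_table_mb(cache_gb, kb_per_gb=400):
    """RAM consumed by the cache's mapping table, in MB
    (~400 KB of system memory per 1 GB of SSD cache)."""
    return cache_gb * kb_per_gb / 1024

def max_cache_gb_intel(installed_ram_gb, usable_fraction=0.25,
                       kb_per_gb=400):
    """Largest SSD cache whose mapping table fits in the quarter
    of system memory usable on Intel-based models."""
    usable_kb = installed_ram_gb * 1024 * 1024 * usable_fraction
    return usable_kb / kb_per_gb

print(mapping_table_mb(256))  # 100.0 -> a 256 GB cache needs ~100 MB RAM
print(max_cache_gb_intel(4))  # 2621.44 -> far above typical cache sizes
```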
You can also measure IOPS in Windows and VMware environments using the built-in Performance Monitor tools. For more information, please refer to the blog posts:
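Outside of Windows, a very rough random-read IOPS probe can be written in a few lines of Python. This is only a sketch (POSIX-only because of os.pread, and the OS page cache will inflate the numbers unless the data set is much larger than RAM), not a replacement for IOMeter or Performance Monitor:

```python
import os
import random
import tempfile
import time

def random_read_iops(path, runtime_s=0.2, block=4096):
    """Issue random 4 KB reads against `path` for runtime_s
    seconds and return the achieved operations per second."""
    blocks = os.path.getsize(path) // block
    fd = os.open(path, os.O_RDONLY)
    ops, deadline = 0, time.monotonic() + runtime_s
    try:
        while time.monotonic() < deadline:
            os.pread(fd, block, random.randrange(blocks) * block)
            ops += 1
    finally:
        os.close(fd)
    return ops / runtime_s

# Demo against a small temporary file (a real test needs a much
# larger working set to defeat the page cache):
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MB test file
iops = random_read_iops(f.name)
print(f"{iops:.0f} IOPS (page-cache assisted)")
os.unlink(f.name)
```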
Chapter 3: Test Results
Test Case
In this performance test, IOMeter was used to generate a workload simulating applications or virtual machines that require intensive I/O and low latency, with many re-read operations on a small portion of a larger data set. As a result, enabling SSD cache is expected to improve performance.
Testing Configurations
Tests were conducted with the following configurations.
Storage Server Configurations #1:
Model: RS3614xs+
Hard drive: WD4000FYYZ x 10
SSD cache: Intel 520 Series SSDSC2CW24 240GB
RAID type: RAID 5
IOMeter Settings:
The number of outstanding I/Os: 32 per target
Worker: 1 (per share)
Running time: 3 minutes (3 times)
Ramp up time: 30 seconds
Data size: 20 GB
SSD cache size: 10 GB and 20 GB
Workload: Random 4KB IOPS
Results (IOPS):

| Workload   | No Cache | 10 GB SSD Cache (Data Size = 2 x Cache Size) | 20 GB SSD Cache (Data Size = Cache Size) |
| 100% Write | 562.023  | 1373.807 | 11077.01  |
|            | 586.03   | 1336.607 | 9785.493  |
| 100% Read  | 1015.73  | 1434.947 | 14425.953 |
[Figure: IOPS comparison between No Cache and 10 GB Cache for the 100% Write and 100% Read workloads, Test Configuration #1]
Test Configuration #2
Results (IOPS):

| Workload   | No Cache | 10 GB SSD Cache (Data Size = 2 x Cache Size) | 20 GB SSD Cache (Data Size = Cache Size) |
| 100% Write | 446.547  | 1209.883 | 4321.457 |
|            | 587.557  | 1452.51  | 4811.27  |
| 100% Read  | 1164.66  | 1634.89  | 6468.373 |
[Figure: IOPS comparison between No Cache and 10 GB Cache for the 100% Write and 100% Read workloads, Test Configuration #2]
Chapter 4: Conclusion
The test results demonstrate that performance can be significantly improved by leveraging SSD cache technology, without adding more HDDs to boost IOPS capability. An SSD cache can improve IOPS by up to 2.7x, while leaving more system capacity available to services.
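The 2.7x figure can be checked directly against the Test Configuration #2 numbers (100% Write, no cache vs. a 10 GB SSD cache with the data set at twice the cache size):

```python
no_cache_iops = 446.547    # 100% Write, no cache (Test Configuration #2)
ssd_cache_iops = 1209.883  # 100% Write, 10 GB SSD cache
speedup = ssd_cache_iops / no_cache_iops
print(round(speedup, 2))   # 2.71, i.e. roughly 2.7x
```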
Please note that these results were obtained under the controlled conditions and specific configurations of our testing lab, and results may vary in different environments.