Abstract
NetApp Flash Cache and Flash Cache 2 cards, and caching software embedded in the
Data ONTAP operating system, enable better performance with NetApp storage systems.
This guide describes how Flash Cache and Flash Cache 2 work, provides essential
information for successful implementation, and explains how to measure the effectiveness of
Flash Cache in a deployed system.
TABLE OF CONTENTS
1 Overview
1.1 NetApp Virtual Storage Tier
2.2 Data ONTAP Clearing Space in System Memory for More Data
LIST OF FIGURES
Figure 1) The NetApp VST product family.
Figure 2) A read on a Data ONTAP storage system that is not configured with Flash Cache.
Figure 3) Clearing memory on a Data ONTAP storage system that is not configured with Flash Cache.
Figure 4) Inserting data into a Flash Cache cache.
Figure 5) Reads from Flash Cache are typically 10 times faster than from disk.
Figure 6) Metadata caching.
Figure 7) Normal data caching; both metadata and user data are cached.
Figure 8) Low-priority data caching; metadata, user data, and low-priority data are cached.
Figure 9) Flash Cache operating in metadata mode with the cache=keep priority enabled for Vol1.
Figure 10) Flash Cache operating in normal mode with the cache=reuse priority enabled for Vol2.
Figure 11) Direct data access with clustered Data ONTAP.
Figure 12) Indirect data access with clustered Data ONTAP.
Figure 13) Hit rates with constant workload and increasing cache size for two working set sizes.
Figure 14) Additional hit rates achieved by adding more cache.
2.2 Data ONTAP Clearing Space in System Memory for More Data
As read requests continue to insert more data into memory, eventually all of the memory buffers
become full. Data ONTAP then determines which data is the least valuable to retain in memory and
evicts it from the memory buffers to make room for new data being read from disk.
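Data ONTAP's actual eviction heuristics are more sophisticated than this, but the evict-the-least-valuable-buffer behavior described above can be sketched as a simple least-recently-used (LRU) buffer cache. Everything in this example (class and names) is illustrative, not NetApp code:

```python
from collections import OrderedDict

class BufferCache:
    """Toy buffer cache with least-recently-used eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()  # block_id -> data, oldest entry first

    def read(self, block_id, read_from_disk):
        if block_id in self.buffers:            # cache hit: refresh recency
            self.buffers.move_to_end(block_id)
            return self.buffers[block_id]
        data = read_from_disk(block_id)         # cache miss: go to disk
        if len(self.buffers) >= self.capacity:  # memory full: evict the
            self.buffers.popitem(last=False)    # least recently used block
        self.buffers[block_id] = data
        return data
```

In a real system the "value" of a buffer also reflects priority settings and access patterns, not recency alone; the LRU policy here is only the simplest stand-in.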
Figure 3) Clearing memory on a Data ONTAP storage system that is not configured with Flash Cache.
Figure 5) Reads from Flash Cache are typically 10 times faster than from disk.
Figure 6) Metadata caching; with normal_data_blocks=off, only metadata is cached.
Figure 7) Normal data caching; with normal_data_blocks=on, both metadata and normal user data are cached.
Figure 8) Low-priority data caching; metadata, user data, and low-priority data are cached.
[Diagram: lopri_blocks=on and normal_data_blocks=on; metadata, normal user data, and low-priority user data are all cached.]
Figure 9) Flash Cache operating in metadata mode with the cache=keep priority enabled for Vol1.
[Diagram: lopri_blocks=off and normal_data_blocks=off; only metadata is cached globally, but normal user data from Vol1 is retained because of its cache=keep priority.]
The second option is used when Flash Cache is operating in normal or low-priority mode, and the
objective is to retain only metadata for a specific volume. In this case, the priority command to use is:
> priority set volume Vol2 cache=reuse
Figure 10 shows this caching behavior for a volume named Vol2.
Figure 10) Flash Cache operating in normal mode with the cache=reuse priority enabled for Vol2.
[Diagram: lopri_blocks=off and normal_data_blocks=on; metadata and normal user data are cached, but normal user data from Vol2 is not retained because of its cache=reuse priority.]
The System Performance and Resources section in the System Administration Guide contains more
information about administering FlexShare. Additionally, NetApp TR-3459, FlexShare Design and
Implementation Guide, discusses FlexShare in detail: http://www.netapp.com/us/library/technical-reports/tr-3459.html.
4.1 Management
Clustered Data ONTAP is managed from the cluster shell by default. Flash Cache is managed per node
in the node shell. The node shell is reached by using the following command from the cluster shell:
Cluster::> node run -node <node_name>
At the node shell, the same commands covered in the previous section are used to modify caching
settings. For example, normal data caching mode would be enabled on node nodeA with the following
commands:
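Assuming the FlexScale option names shown in the diagrams above, the commands would take the following form (a hypothetical reconstruction; verify the exact syntax for your Data ONTAP release):
Cluster::> node run -node nodeA
nodeA> options flexscale.normal_data_blocks on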
4.2 Operation
With Flash Cache, a volume's data is cached on the node (controller) that is managing the aggregate on
which the volume is provisioned. The following examples illustrate Flash Cache caching when data is
accessed directly through the node where a volume is provisioned, and when it is accessed indirectly
through a different node. Both examples depict a four-node (two HA pair) cluster.
5.1 Deduplication
Deduplicated blocks are cacheable in the same way as nondeduplicated blocks. In addition, blocks cached
in Flash Cache are deduplicated just as they are on disk; that is, only one physical block is needed to
cache multiple logical references to the same data. In this way, deduplication enables Flash Cache to
store more unique data, thereby increasing the effectiveness of the cache.
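As a rough illustration of how one physical block can serve multiple logical references, the following sketch (hypothetical, not NetApp's implementation) keys cached blocks by a content hash so that duplicate payloads are stored only once:

```python
import hashlib

class DedupCache:
    """Toy cache that stores each unique block once, however many
    logical references point at it (deduplication-aware caching)."""

    def __init__(self):
        self.by_hash = {}  # content hash -> physical block, stored once
        self.refs = {}     # logical address -> content hash

    def insert(self, logical_addr, data):
        digest = hashlib.sha256(data).hexdigest()
        self.by_hash.setdefault(digest, data)  # keep payload only once
        self.refs[logical_addr] = digest

    def physical_blocks(self):
        return len(self.by_hash)

cache = DedupCache()
cache.insert("vol1/block7", b"same payload")
cache.insert("vol2/block9", b"same payload")   # duplicate content
cache.insert("vol1/block8", b"other payload")
```

Here three logical references occupy only two physical blocks, which is the mechanism by which deduplication lets the cache hold more unique data.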
5.2 FlexClone
Flash Cache caches blocks from file, LUN, and volume clones, and only one physical block is needed to
cache multiple references to a cloned block. As with deduplication, Flash Cache is able to cache more
unique data when FlexClone is used, which increases the effectiveness of the cache.
5.3 Compression
Flash Cache can cache data from volumes on which compression has been enabled. However, only the
uncompressed blocks from these volumes are cacheable; compressed blocks are not inserted into Flash
Cache.
In the second scenario, the cache is at 100% effectiveness only at the point at which it is full. The usage
does not indicate how busy the cache is, but rather how full it is. In many cases, the cache never fills to
100% but hovers at slightly less than 100% usage because of the algorithm that Data ONTAP uses to
manage the cache.
To determine how long it might take to completely fill the cache, use the following simple calculation:
time to fill the cache = (size of cache) / (read throughput per second)
For example:
512GB/100MB per second = 5,120 seconds = 85 minutes
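The arithmetic above can be wrapped in a small helper; this sketch assumes decimal units (1GB = 1,000MB), matching the numbers in the example:

```python
def cache_fill_seconds(cache_size_gb, read_mb_per_sec):
    """Rough time to fill the cache: size / read throughput.
    Assumes decimal units (1 GB = 1,000 MB), as in the example above."""
    return cache_size_gb * 1000 / read_mb_per_sec

seconds = cache_fill_seconds(512, 100)  # 512GB cache, 100MB/s of reads
minutes = seconds / 60                  # about 85 minutes
```

This is only an upper-bound estimate; in practice the cache fills at the rate of *insertable* reads, not raw read throughput.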
At first glance this scenario might be daunting, but it overlooks how a cache and its cached data behave
over time. As a cache fills up and is used by the system, the data most likely to be reused is
the data most recently placed in the cache. As the cache fills, the relative effect of more data in the
system lessens. Because of this, a caching technology has the most effect in the first 10% of usage and
the least effect in the last 10% as the cache is filled. Caches need to be large to have the largest possible
effect and to maximize the benefit for large datasets, but the benefit of the cache begins with the first
reusable entries placed into it.
Takeover Events
When designing a solution that includes Flash Cache, keep these points in mind:
- NetApp recommends a symmetric configuration of the cache on each node of an HA pair. If one of the systems has 2TB of cache, for example, the other system should have 2TB as well, to enable more consistent performance in the event of a takeover.
- In the event of a takeover, the partner node becomes responsible for caching data. Upon giveback, the cache might need to be reinitialized, depending on the reason for the takeover. If cache rewarming is possible, warming time might be reduced.
Nondisruptive Failures
Flash Cache is designed to fail in a nonfatal way. In the event of a hardware problem, the module is taken
offline and the system continues to operate without it. Data remains available until downtime can be
scheduled to replace the failed module.
7 Cache Sizing
Flash Cache can benefit a variety of workloads, but the overall improvement depends on the workload
characteristics and working set size. There are many variables that affect the ability to cache data in
Flash Cache. However, the more data of the working set that can be cached, the higher the probability
that data will be served out of Flash Cache instead of going to disk. For most workloads, the hit rate
continues to increase asymptotically as the cache size grows, approaching the maximum rate
possible given the workload characteristics. Figures 13 and 14 are examples of a workload executed
against two different working set sizes (300GB and 500GB) at different caching points. The trend shows
diminishing returns as more cache is added; however, overall benefit continues to increase. The first
graph shows the overall hit percentage. The second graph shows the incremental benefit of more cache,
or the additional hit percentage achieved by adding the extra cache.
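The relationship between the two graphs can be stated precisely: the second graph plots the difference between successive points of the first. A small sketch with illustrative numbers (not the measured data from the figures):

```python
def incremental_hits(hit_pct_by_cache_size):
    """Additional hit % gained at each step up in cache size
    (the quantity plotted in the second graph)."""
    pts = sorted(hit_pct_by_cache_size.items())
    return {size: pct - prev_pct
            for (_, prev_pct), (size, pct) in zip(pts, pts[1:])}

# Illustrative numbers only; the measured curves are in Figures 13 and 14.
overall = {200: 30.0, 400: 48.0, 600: 58.0, 800: 63.0}  # cache GB -> hit %
extra = incremental_hits(overall)  # diminishing returns per step
```

The shrinking increments in `extra` are the diminishing returns the text describes, even as the overall hit percentage keeps rising.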
Figure 13) Hit rates with constant workload and increasing cache size for two working set sizes (overall hit % versus cache size in GB, 0-1200GB, for 300GB and 500GB working sets).
Figure 14) Additional hit rates achieved by adding more cache (incremental hit % versus cache size in GB, 0-1200GB, for 300GB and 500GB working sets).
8 Conclusion
Flash Cache improves performance for a wide variety of application workloads by caching repetitive
random reads. Flash Cache is supported with clustered Data ONTAP and with Data ONTAP operating in
7-Mode, and it is simple to configure and use. Flash Cache works with block (SAN) and file (NAS) data
and interoperates with most of the other data management features of Data ONTAP.
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact
product and feature versions described in this document are supported for your specific environment. The
NetApp IMT defines the product components and versions that can be used to construct configurations
that are supported by NetApp. Specific results depend on each customer's installation in accordance with
published specifications.
Flash Cache Best Practices (TR-3832-1113)
© 2013 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior
written consent of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp
logo, Go further, faster, Data ONTAP, Flash Accel, Flash Cache, Flash Pool, FlexClone, FlexScale,
FlexShare, SnapMirror, and SnapVault are trademarks or registered trademarks of NetApp, Inc. in the
United States and/or other countries. All other brands or products are trademarks or registered
trademarks of their respective holders and should be treated as such.