
There are plenty of NetApp documents out there explaining how dedupe works, and Dr. Dedupe has already covered this aspect in detail in his blog. I have instead tried to assemble some of the questions that would naturally cross your mind if you are planning to test or implement dedupe in your environment.

What exactly does dedupe do on a NetApp storage array?


The NetApp deduplication technology allows duplicate 4KB blocks anywhere in the flexible volume
to be deleted. The maximum sharing for a block is 255. This means, for example, that if there are
500 duplicate blocks, deduplication would reduce them to only 2 blocks. As per the latest TR-3958, on
8.1 and 8.2 the maximum sharing for a block is 32,767. This means, for example, that if there are
64,000 duplicate blocks, deduplication would reduce them to only 2 blocks.

What is the core enabling technology of deduplication?


The core enabling technology is 'fingerprints' [unique digital signatures for every 4KB data block in the flexible volume].

1. There is a fingerprint record for every 4KB data block, and the fingerprints for all the data blocks in
the volume are stored in the fingerprint database file.

2. Fingerprints are not deleted from the fingerprint file automatically when data blocks are freed.
When the number of new fingerprints is 20% greater than the number of data blocks used in the
volume, the stale fingerprints are deleted. This can also be done manually from the command line.

Is the fingerprint created for each 4KB logical or physical block?


Starting in Data ONTAP 8.2, a fingerprint is created only for each 4KB data block physically present in the
volume, as opposed to each 4KB block logically present in the volume, which was the case in versions prior to 8.2.

What is the dedupe metadata [overhead space] requirement for running the dedupe
process?
Every process has some overhead space requirement, and that is true for dedupe as well. Just as
every file has metadata, dedupe also has its own metadata, called the 'fingerprint'.

Where does this metadata reside in different ONTAP versions?


When I first started reading about this, I was surprised that it has changed quite a bit across
ONTAP versions 7.2.x, 7.3.x, 8.0.1, and 8.1.1. In Data ONTAP 7.2.x, all the deduplication metadata
[fingerprint database, change log files, temp files] resides in the flexible volume.

In Data ONTAP 7.3.x through 8.0.1, the fingerprint database and the change log files
that are used in the deduplication process are located outside the volume, in the
aggregate, and are therefore not captured in Snapshot copies. However, temporary
metadata files created during the deduplication operation are still placed inside the volume.
These temporary metadata files are deleted once the deduplication operation is complete.

Starting with Data ONTAP 8.1, another change was made: two copies of the
deduplication metadata are now maintained per volume. One copy of the deduplication metadata
resides in the volume and another copy is in the aggregate. The deduplication metadata in
the aggregate is used as the working copy for all the deduplication operations. The change
log entries are appended to the deduplication metadata copy residing in the volume.

How much space is required for dedupe metadata on a NetApp storage volume and aggregate?
Deduplication metadata can occupy up to 7 percent of the total physical data contained within the
volume, as follows:

In a volume, deduplication metadata can occupy up to 4 percent of the total amount of data
contained within the volume.
In an aggregate, deduplication metadata can occupy up to 3 percent of the total physical
data contained within the volume.

For example, suppose a 2TB aggregate contains four volumes, each 400 GB in size, and three of
those volumes need to be deduplicated, with varying savings percentages on each volume.

The space required in the different volumes for deduplication metadata is as follows:

2 GB [4% of (50% of 100 GB)] for 100 GB of logical data with 50 percent savings
6 GB [4% of (75% of 200 GB)] for 200 GB of logical data with 25 percent savings
3 GB [4% of (25% of 300 GB)] for 300 GB of logical data with 75 percent savings

The aggregate needs a total of 8.25 GB [(3% of (50% of 100 GB)) + (3% of (75% of 200 GB)) + (3% of
(25% of 300 GB)) = 1.5 + 4.5 + 2.25 = 8.25 GB] of space available in the aggregate for deduplication
metadata.

Are the fingerprint and change log files upgraded automatically during an ONTAP upgrade, or do I need to
re-run the dedupe scan from scratch?
During an upgrade of a major Data ONTAP release, such as 8.0 to 8.1 or 8.1 to 8.2, the fingerprint and
change log files are automatically upgraded to the new fingerprint and change log structure the first
time sis operations start after the upgrade completes. In other words, savings are retained.
However, an upgrade from Data ONTAP 7.x deletes all of the deduplication metadata, so it must be
recreated from scratch using the sis start -s command to provide optimal savings.
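For reference, the full scan is kicked off per volume as shown below (the volume name is hypothetical; the same command is used in the demo later in this post):

filer> sis start -s /vol/vol_name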

Are fingerprints also freed (deleted) when the corresponding physical blocks are freed?
Fingerprints are not deleted from the fingerprint file automatically when data blocks are freed.
When the number of new fingerprints is 20% greater than the number of data blocks used in the
volume, the stale fingerprints are deleted. This can also be done manually using the advanced-mode
command sis check.
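A minimal sketch of that manual cleanup path, assuming a 7-Mode system and a hypothetical volume name (sis check is an advanced-mode command; confirm its behavior on your release before running it):

filer> priv set advanced
filer*> sis check /vol/vol_name
filer*> priv set admin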

Is dedupe allowed on both BCS (block checksum scheme) and ZCS (zone checksum scheme)?
Before 8.1.1, dedupe was supported only on BCS; starting with 8.1.1, it is also supported on ZCS.

Is dedupe supported on traditional volumes?


No. Dedupe supports FlexVol volumes only, not traditional volumes.

Is dedupe supported on both 32-bit and 64-bit volumes?


Yes.

Is there a maximum volume size for dedupe?


Starting with Data ONTAP 8.1, deduplication does not impose a limit on the maximum volume size
supported; therefore, the maximum volume limit is determined by the type of storage system
regardless of whether deduplication is enabled.

What are the maximum logical data size processing limits for dedupe?
In Data ONTAP 8.1, the maximum logical data size that will be processed by postprocess
compression and deduplication is equal to the maximum volume size supported on the storage
system regardless of the size of the volume or data constituent created.

As an example, in Data ONTAP 8.1, if you had a FAS6240 that has a 100TB maximum volume
size and you created a 100TB volume, the first 100TB of logical data will deduplicate as
normal. However, any additional new data written to the volume after the first 100TB will
not be deduplicated until the logical data becomes less than 100TB.

A second example: in Data ONTAP 8.1, if you had a FAS3250 that has a 70TB volume size limit
and you created a 25TB volume, the first 70TB of logical data will deduplicate as
normal; however, any additional new data written to the volume after the first 70TB will not
be deduplicated until the amount of logical data is less than 70TB.

Starting in Data ONTAP 8.2, the maximum logical data size that will be processed by deduplication is
640TB regardless of the size of the volume created.

As an example in Data ONTAP 8.2, if you had a FAS6270 that has a 100TB maximum volume
size and you created a 100TB volume or data constituent, the first 640TB of logical data will
deduplicate as normal. However, any additional new data written to the volume or data
constituent after the first 640TB will not be postprocess compressed or deduplicated until
the logical data becomes less than 640TB.

When Should I Enable Deduplication?


Choosing when to enable deduplication involves balancing the benefits of space savings against the
potential overhead.
Examples of when not to use deduplication on a volume include:

Savings less than the amount of deduplication metadata


Data is being overwritten at a rapid rate

What are the potential dedupe savings across different data set types?
As per NetApp research [lab testing] and feedback from customer deployments:

Data set type                Dedupe savings
File services                30%
Virtual (boot vol)           70%
Database (Oracle OLTP)       0%
Database (Oracle DW)         15%
Email (Exchange 2003/2007)   3%
Email (Exchange 2010)        15%
Engineering data             30%
Geoseismic                   3%
Archival data                25%
Backup data                  95%

What data types are not a good candidate for dedupe?


Some nonrepeating archival data, such as image files and encrypted data, is not considered a good
candidate for deduplication. Additional examples of when not to enable deduplication include
applications that overwrite data at a rapid rate and applications that perform small writes and add
unique headers; these are not good candidates for deduplication. An example of this would be an
Oracle database that is configured with an 8KB block size.

Data that is already compressed by a hardware appliance or an application, including a backup or an
archive application, and encrypted data are generally not considered good candidates for
compression.

How are dedupe space savings affected if I have an already existing large volume with lots of
snapshots?
When you first run deduplication on an existing flexible volume with snapshots, the storage savings
will probably be rather small or even non-existent. Although deduplication has processed the data
within the volume, including data within Snapshot copies, the Snapshot copies will continue to
maintain "locks" on the original duplicate data. As previous Snapshot copies expire, deduplication
savings will be realized.

For example, consider a volume that contains duplicate data and there are 10 Snapshot copies of the
data in existence before deduplication is run. If deduplication is run on this existing data there will
be no savings when deduplication completes, because the 10 Snapshot copies will maintain their
locks on the freed duplicate blocks. Now consider deleting a single Snapshot copy. Because the other
9 Snapshot copies are still maintaining their lock on the data, there will still be no deduplication
savings. However, when all 10 Snapshot copies have been removed, all the deduplication savings will
be realized at once, which could result in significant savings.

So, the bottom line is: Snapshot copies affect dedupe savings?


Yes, Snapshot copies lock data, and thus the savings are not realized until the lock is freed by either
the Snapshot copy expiring or being deleted.

Therefore, the next question is: when should deduplication be run?


In order to achieve maximum capacity savings, deduplication should be run, and allowed to
complete, before the creation of each and every Snapshot copy; this provides the greatest storage
savings benefit. However, depending on the flexible volume size and the possible performance
impact on the system, this may not always be advisable.

How long does a single deduplication process take to complete?


For example, on a FAS3140 system with a 100MB/sec processing speed, a 1TB volume will take about
2.5 to 3 hours to complete.
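As a quick sanity check on that figure: 1TB is roughly 1,000,000MB, and 1,000,000MB at 100MB/sec is about 10,000 seconds, or roughly 2.8 hours, which lines up with the quoted 2.5 to 3 hours.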

What is the impact on CPU during deduplication?


It varies from system to system; the following observation was made on a FAS3140:
When one deduplication process is running, there is 0% to 15% performance degradation on other
applications. With eight deduplication processes running, there may be a 15% to more than 50%
performance penalty on other applications running on the system.

What is the IO performance impact on a deduped volume?


For deduplicated volumes, if the load on a system is low (for instance, systems in which the CPU
utilization is around 50% or lower), there is a small to negligible difference in performance when
writing data to a deduplicated volume; there is no noticeable impact on other applications running
on the system. On heavily used systems in which the system is CPU-bound, the impact on write
performance may be noticeable. For example, in an extreme case with 100% random overwrites
with over 95% savings, a FAS3140 showed a performance impact of 15%. On high-end systems such
as the FAS6080 system, the same scenario showed a performance impact of 15-30% for random
writes.

What is the read performance impact on a deduped volume?


There is minimal impact on random reads. However, there is a performance impact on sequential
read workloads.

How can sequential read workload performance be improved?


Data ONTAP 8.1 has specific optimizations, referred to as intelligent cache, that reduce the
performance impact that deduplication has on sequential read workloads. Because deduplication
alters the data layout on the disk, using deduplication without intelligent cache could affect the
performance of sequential read applications such as dump source, qtree SnapMirror or SnapVault
source, SnapVault restore, and other sequential read-heavy applications.

In addition, the Flash Cache cards as well as Flash Pool also utilize intelligent caching to optimize
performance, and their use should be strongly considered when deploying deduplication.

How to Interpret Space Usage and Savings?


The df -S command shows savings as well as the actual physical space used per volume. To determine
the amount of logical space used, add the values used + total-saved.
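A hedged illustration of reading that output, with made-up numbers and a hypothetical volume name (the exact column layout can vary by release):

dr_filer> df -S
Filesystem       used        total-saved  %total-saved  deduplicated  %deduplicated  compressed  %compressed
/vol/vol_live/   629145600   419430400    40%           419430400     40%            0           0%

Here the logical space used is used + total-saved = 629145600 KB + 419430400 KB, or roughly 1TB, of which about 600GB is physically consumed.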

I have heard that dedupe enables efficient use of Flash Cache cards; how does this work?
It does this by retaining in Flash Cache the deduplication savings that exist on disk. That way, if you
have 32K duplicate blocks on disk, after you run deduplication only one block will be used on disk
and, if it is randomly accessed, only one block will be used in the Flash Cache cards as well. This can
significantly increase the amount of data that can be stored on Flash Cache cards.

What if I see slow write performance after enabling dedupe or when the dedupe process is
triggered?
If write performance appears to be degraded, check the NetApp system resources (CPU, memory,
and I/O) to determine whether they are saturated. This can be done easily with NetApp OnCommand
Unified Manager or via the CLI. If resources are saturated, you can consider stopping deduplication
operations to see if performance recovers.
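A quick way to sample those resources from the CLI is the sysstat command, shown here as a minimal sketch (the interval is in seconds; press Ctrl-C to stop):

filer> sysstat -x 1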

Will stopping dedupe operations force me to re-run from scratch?


No. Stopping deduplication operations generates a checkpoint, and the operation can be resumed at a time
when the system is less busy. If system resources are still saturated, you can consider disabling
deduplication and seeing whether resource usage drops sufficiently.

How do dedupe savings work in NAS vs. SAN environments?

In a file-based NAS environment, it is straightforward and automatic: as duplicate blocks are
freed by deduplication, they are marked as available, and as blocks of free space become
available, the NetApp system recognizes these free blocks and makes them available to the
volume.

In a block-based SAN (LUN) environment, it is slightly more complicated. This is because of the space
guarantees and fractional reservations used by LUNs. For instance, consider a volume that
contains a 500GB LUN with the LUN reserve enabled. The LUN reserve causes the
space for the LUN to be reserved when the LUN is created. Now consider that 500GB of data
is written to the LUN. The consumed space is exactly 500GB of physical disk space. If the
data in the LUN is reduced through deduplication, the LUN still reserves the same physical
space capacity of 500GB, and the space savings are not apparent to the user.
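If you do want those savings reflected as usable free space in the volume, one approach is to thin provision the LUN by disabling its space reservation. This is only a hedged sketch with a hypothetical LUN path; review the thin-provisioning and monitoring implications for your environment before doing this on production:

dr_filer> lun set reservation /vol/vol_live/lun0 disable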

******* DEMO on testing / estimating Dedupe is given below*******

Objective: Let's assume a production volume is nearly full and in need of a space extension. However,
the storage efficiency feature called dedupe has not yet been turned on for this volume. The objective is to
estimate how much space we can save through dedupe without affecting the production filer.

Problem: We cannot test dedupe on the production volume on the source filer, as it is already running
high on system resources [memory, CPU, and disk IO utilization], and your IT management wouldn't let
you do this.

Solution: FlexClone the snapmirrored volume on the DR filer and then estimate the dedupe
savings.

Task:
1. Create a FlexClone from the parent volume, i.e. the snapmirrored live volume on the destination DR
filer [this simply creates another set of data pointers to the same data that the parent owns - just like a
snapshot, but a writable one].
2. On the DR filer, run dedupe on the cloned volume named -> cl_vol_live
3. On the DR filer, check the dedupe savings using the command -> df -s
4. If the savings are really encouraging, then implement dedupe on the production live volume.

Steps to perform:

Note: It is advisable to create the clone from the most recent snapshot copy. This is to ensure that
future snapmirror updates do not fail.
Why is that? Because, depending upon the snapshot schedule and retention policy [snap sched volume]
on the volume, you might find that the snapshot you used for flexcloning has been
deleted by ONTAP at the source; when the snapmirror update kicks in, it fails to complete because
that same snapshot is locked [busy] by the FlexClone on the destination and cannot be deleted. The
snapmirror principle is simple: whatever the source has, the destination should have as well; in other
words, if a given snapshot no longer exists in the source volume, snapmirror's task is to get rid of it on
the destination volume too.

Therefore, it is always advisable to use a recent snapmirror snapshot that exists on the source and
is not likely to be deleted during the course of this testing, or simply let FlexClone choose the
snapshot, which by default is the first one in the list when you create the FlexClone via the System
Manager GUI.

IMPORTANT: Deduplication metadata can occupy up to 7 percent of the total physical data
contained within the volume, as follows:

In a volume, deduplication metadata can occupy up to 4 percent of the total amount of data
contained within the volume.
In an aggregate, deduplication metadata can occupy up to 3 percent of the total physical
data contained within the volume.

TIP: If you are running tight on space and are unsure of the dedupe savings you will get out of this
process, you can safely turn on the volume 'auto-grow' function on the FlexCloned volume; this will
automatically expand the volume in case it runs out of space due to dedupe metadata processing.
This, however, assumes that there is enough free space in the aggregate.
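A minimal sketch of enabling auto-grow on the clone (the maximum and increment sizes are hypothetical; adjust them to your volume and confirm the syntax on your release):

dr_filer> vol autosize cl_vol_live -m 800g -i 50g on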

FlexCloning can be done via the GUI, and it is much simpler that way; however, you can also do it via
the command line, as shown in the example below. It's your choice!

1. On the source filer, run the following command to check and pick the most recent snapshot.
src_Filer> snap list vol_live
Volume vol_live
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Feb 07 14:05  dr_filer(1574092601)_vol_live.21030 (snapmirror)
  0% ( 0%)    0% ( 0%)  Feb 07 14:00  hourly.0
  0% ( 0%)    0% ( 0%)  Feb 07 13:00  hourly.1
  0% ( 0%)    0% ( 0%)  Feb 07 12:00  hourly.2
  0% ( 0%)    0% ( 0%)  Feb 07 11:00  hourly.3
  0% ( 0%)    0% ( 0%)  Feb 07 10:00  hourly.4

2. On the DR filer, run the same command to list the snapshots and check that they match the
source snapshots [of course they will].

dr_filer> snap list vol_live
Volume vol_live
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Feb 07 14:05  dr_filer(1574092601)_vol_live.21030
  0% ( 0%)    0% ( 0%)  Feb 07 14:00  hourly.0
  0% ( 0%)    0% ( 0%)  Feb 07 13:00  hourly.1
  0% ( 0%)    0% ( 0%)  Feb 07 12:00  hourly.2
  0% ( 0%)    0% ( 0%)  Feb 07 11:00  hourly.3

As you can see, the snapshots are identical. All we need to do is choose a snapshot that is fairly
recent, or FlexClone will automatically do this for you.

Please note: If you choose the snapmirror snapshot, for example dr_filer(1574092601)_vol_live.21030,
then it will stay locked for as long as the FlexCloned volume exists on the destination filer.

In this case, we go with the most recent hourly snapshot, as we are sure it will not be deleted soon:
snapshot -> 'hourly.0'.

3. Now that we have a snapshot, let's create a volume clone of vol_live.

Example command:
filer> vol clone create clone_name -b parent_name [parent_snap]

Actual command on the dr_filer:

dr_filer> vol clone create cl_vol_live -b vol_live hourly.0

This process should take a second or two.

4. Ensure that the new cloned volume 'cl_vol_live' has been created successfully.
dr_filer> vol status

5. Run the following preliminary commands on the dr_filer.

dr_filer> sis on /vol/cl_vol_live          [turns dedupe on]

dr_filer> sis config -s - /vol/cl_vol_live [disables the dedupe schedule]

Before kicking off the process, check the saved space, just to ensure that nothing has been deduplicated
yet and we are starting clean.

dr_filer> df -s
Filesystem          used        saved  %saved
/vol/cl_vol_live/   6300372436      0       0

Disable the snap schedule; we don't need it, as this is a test volume.

dr_filer> snap sched cl_vol_live 0 0 0

6. Manually run the deduplication process during a time of low system usage.

dr_filer> sis start -s /vol/cl_vol_live

The file system will be scanned to process existing data in /vol/cl_vol_live
This operation may initialize related existing metafiles.
Are you sure you want to proceed (y/n)? y
The SIS operation for "/vol/cl_vol_live" is started.
dr_filer> Wed xx HR:MM:SS EDT [wafl.scan.start:info]: Starting SIS volume scan on volume
cl_vol_live.
7. Use sis status to monitor the progress of deduplication.
dr_filer> sis status /vol/cl_vol_live
Path               State    Status  Progress
/vol/cl_vol_live   Enabled  Active  xx GB Scanned
Note: If you experience high system resource usage [CPU/memory/disk IO utilization] and you think
it might affect users or applications, you can safely stop the dedupe process by running this
command:
dr_filer> sis stop /vol/cl_vol_live
The operation on "/vol/cl_vol_live" is being stopped.

dr_filer> Sun Feb 9 23:56:15 GMT [dst-filer:sis.op.stopped: error]: SIS operation for /vol/cl_vol_live
has stopped

Note: Even after stopping, if system usage is still high, just turn dedupe off completely by running the
following command.
dr_filer> sis off /vol/cl_vol_live
SIS for "/vol/cl_vol_live" is disabled.

8. When sis status indicates that the flexible volume is once again in the Idle state, deduplication
has finished running, and you can check the additional space savings it provided in the flexible
volume.
dr_filer> df -s
Filesystem          used     saved  %saved
/vol/cl_vol_live/   xxxxxxx  xxxxxx    xx%

9. After reviewing the space savings in step 8, if the savings percentage is satisfactory [anything
above 30% savings, depending upon the size of the volume, is good enough], go ahead and implement
dedupe on the production live volume. For example, 30% savings on a 6TB volume would mean 1.8TB of space saved.

10. As a final step, the cloned volume cl_vol_live can be safely deleted.


dr_filer> vol offline /vol/cl_vol_live
Volume 'cl_vol_live' is now offline.
dr_filer> Mon Feb 10 00:03:41 GMT [dst-filer:wafl.vvol.offline:info]: Volume 'cl_vol_live' has been
set temporarily offline
dr_filer> vol destroy cl_vol_live
Are you sure you want to destroy volume 'cl_vol_live'? y
Mon Feb 10 00:03:50 GMT [dst-filer:wafl.vvol.destroyed:info]: Volume cl_vol_live destroyed.
Volume 'cl_vol_live' destroyed.

For more information, please read the latest [Feb 2014] TR on dedupe from NetApp:

NetApp Data Compression and Deduplication Deployment and Implementation Guide


Data ONTAP 8.1 and 8.2 Operating in 7-Mode
http://www.netapp.com/us/system/pdf-reader.aspx?m=tr-3958.pdf

Courtesy: NetApp

ashwinwriter@gmail.com
