
Position: Data Should Not Be Forever. Data Must Decay!

Paper #150

Abstract

Digital storage is nearly synonymous with persistence and reliability, and our entire storage stack is designed from the ground up with the assumption that applications require these properties to operate. We observe that not only do some applications not require these properties, they sometimes go to great lengths to ensure that these properties do not hold. The source of the problem is that the data lifecycle is merely an afterthought - we only think about deleting data after it has been created, rather than expressing how long the data is needed when it is created. In this paper, we advance a controversial position - data should decay by default, and the storage stack should be reengineered with the mechanisms necessary to support decaying data. By embracing decay, we will greatly simplify data management and build a more efficient and manageable storage stack.

1 Introduction

Storage stacks today are built from the ground up to reliably persist data and to protect against data loss. Historically, this design stems from storage devices being fundamentally unreliable. Mechanical devices such as tape and disk are prone to failure, wear and tear as well as bit rot, and even solid state technologies such as NAND flash are vulnerable to a variety of idiosyncratic behaviors such as read and program disturb which introduce errors [1, 2]. One of the primary functions of today's storage stack is to protect against these errors and provide applications with reliable persistence, which is what users have come to expect [3].

Every layer in the storage stack actively manages errors and abstracts problems away with the assumption that a storage error is fatal to the layer above it, as shown in Table 1. For example, device controllers such as the flash translation layer (FTL) in an SSD manage errors by using error correcting codes and replicating data across flash chips. Storage interconnects such as NVMe and PCIe protect data through checksums, and systems such as RAID duplicate data and generate parity transparently. Many modern filesystems protect data with checksums, while some even actively scrub data. Applications themselves may checksum and replicate their own data. Only unrecoverable errors are propagated up the storage stack. When such errors occur, it is typically assumed that the application cannot handle the error and that the layers below are failing and the device should be replaced to prevent further data loss. At the same time, each layer is free to duplicate and generate metadata to prevent these failures, hidden to the layers above.

Layer          Sources of Uncorrectable Errors
Application    Redundancy, Checksum
Filesystem     Metadata, Checksum
Interconnect   Link, Checksum
RAID           Parity, Redundancy
Device         Media, Metadata, ECC

Table 1: To protect data integrity, each layer may replicate or add parity hidden to the layer above it. Uncorrectable errors occur when these mechanisms fail. For example, if parity in a RAID deployment isn't enough to correct bit errors, then the objects are usually considered lost.

Despite the effort put into the storage stack to ensure that data is reliably persisted, applications, and often users, sometimes require the exact opposite: assurance that data has been reliably deleted. Data deletion is no longer just a security and privacy issue, but also a legal requirement: legislation such as the General Data Protection Regulation (GDPR) [4] in Europe and the Health Insurance Portability and Accountability Act (HIPAA) [5] and California Consumer Privacy Act (CCPA) [6] in the United States require that data controllers and processors adhere to strict rules on how long data can be kept, and to respect the rights of users who ask that their data be deleted. These requirements are at odds with a storage stack built to reliably persist data forever. Existing work has already shown the difficulty of reliably erasing data from storage devices [7, 8]. In order to ensure the deletion of data, users must fight with multiple layers of storage abstraction which transparently duplicate and generate parity which can be used to reconstruct data.

Moreover, the implicit assumption that a storage stack should persist data forever error-free limits the efficiency of storage itself. For example, GDPR requires that companies keep data no longer than it is needed. Data which is needed for only 30 days, for example, may not require replication, or may be able to use weaker error correcting codes compared to
data which is needed to be retained for 10 years. Data center operators also spend considerable effort trying to predict physical device failures so that they may be removed before errors propagate to higher layers in the stack [9, 10, 11, 12, 13, 14], effort that could be avoided if the persistence requirement is relaxed. Even the design of physical devices may be different: for instance, short-term data retention may potentially use lower programming voltages or different, denser gate designs. Errors over time may be acceptable for some types of data and could even be a mechanism for deletion, as data may be considered deleted once the error rate exceeds a certain threshold.

We make an argument which some may find controversial: instead of persisting forever, storage should decay by default. Our storage systems are littered with data which we have "forgotten" to delete. By defaulting to a limited lifespan for data, we eliminate much of the overhead associated with deleting and managing data. Defaulting to decay will force users and developers to think about the data lifecycle as soon as data is created, rather than deferring the decision to when capacity is limited or when costs get out of hand.

We argue that supporting this requirement is not a problem for any one layer in the storage stack. Rather, it is necessary to rethink and redesign the entire storage stack, from the way devices are built to how the operating system and filesystems handle errors and how applications leverage them. The fundamental problem is that each layer of the stack is designed with the assumption that the layer above it requires that data is reliably persisted forever. Each layer must be redesigned to do away with that assumption, and provide flexible persistence and reliability guarantees. Instead of pushing unrecoverable errors up the stack, we believe that exceptions should be thrown instead, carrying the potentially corrupt data in case the higher layers can recover it.

This paper takes the following positions:

1. Data should decay by default instead of reliably persisting forever.

2. We must be able to express flexible reliability and persistence requirements to each layer of the storage stack.

3. Each layer of the storage stack should throw exceptions instead of treating all errors as unrecoverable.

2 Background

Securely erasing data so that no traces remain has been a concern of the security community for decades. These concerns are encoded in media sanitization guidelines, such as the NIST 800-88 guidelines for media sanitization [15]. Early "secure erase" protocols such as the US DoD 5220.22-M [16] or the Gutmann 35-Pass Method [17] focused on preventing the recovery of analog data by writing obfuscation patterns designed to add noise that renders recovery even by sophisticated means (such as a magnetic force microscope) impossible. Newer devices may implement a "secure erase" command, which instructs the drive to erase all data, either by deleting a cryptographic key (known as cryptographic erase) or physically erasing the data medium, avoiding the need for overwriting. These methods, however, can be ineffective [8], due to the presence of over-provisioned area, where data can hide inaccessible to the user, or devices which fail to implement erase commands correctly. NIST 800-88 goes as far as to suggest that destroying the device may be the only option if the device is not reliable.

Secure erase is primarily focused on erasing the entire device, which is typically only necessary when equipment changes hands or is near the end of its useful life. More recently, regulations such as the GDPR, HIPAA and CCPA have begun to require data controllers and processors to keep data only as long as it is necessary. This requirement introduces the need to track the lifecycle of small amounts of data (i.e. single records) and erase it on a much more granular basis than can be met with whole-device secure erase methods.

The storage stack already has some support for deleting at smaller granularities: ATA supports TRIM, and NVMe supports deallocate, both of which inform the device that a sector has been freed. Most modern filesystems will send those commands to the device when a file is deleted, and applications can inform the filesystem that blocks are no longer necessary through fallocate(2)'s FALLOC_FL_PUNCH_HOLE. Unfortunately, very few applications send these commands to the filesystem, likely for performance reasons. Recent work has shown that retrofitting existing applications such as Redis and Postgres to be regulation-compliant results in performance slowdowns of 2-4× and even space overheads of 3-5× [18, 19]. Issuing these commands may not even result in actual deletion of all copies of the data: for example, the NVMe specification explicitly states that these operations only guarantee that data will no longer be user-accessible, but copies may still remain in other areas on the disk (over-provisioned area, bad blocks).
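To make the application-side mechanism concrete, the following minimal Linux sketch punches a hole over a byte range the application no longer needs (the file name and offsets are hypothetical), which may let the filesystem discard the underlying blocks and eventually pass TRIM/deallocate down to the device.

    /* Minimal sketch (Linux-specific): release a byte range the
       application no longer needs so the filesystem can discard the
       underlying blocks. File name and offsets are hypothetical. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>

    int main(void) {
        int fd = open("records.log", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        off_t dead_off = 4096;   /* start of the expired record  */
        off_t dead_len = 8192;   /* length of the expired record */

        /* Punch a hole: deallocate the range but keep the file size. */
        if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                      dead_off, dead_len) != 0) {
            perror("fallocate");  /* e.g., the filesystem lacks support */
            return 1;
        }
        return 0;
    }

Even then, as noted above, the device is only told that the range is unneeded; whether and when the data actually becomes unrecoverable is left to the lower layers.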
Other works, such as Vanish [20] and the Ephemerizer [21, 22], try to use cryptography to solve the deletion problem. The basic approach is to encrypt data and throw away the encryption key when the data should be deleted, also known as crypto-shredding. Encryption also ensures a limited blast radius for data breaches, as data cannot be read without key access. While encryption can provide some guarantees, it is not always practical due to a reliance on unbreakable schemes [23, 24, 25], the high computational cost of encryption, and a reliance on trusted key managers. For example, researchers showed that they were able to break Vanish by attacking the key managers [26], and a centralized key manager is an acknowledged shortcoming of the Ephemerizer.
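For reference, the sketch below shows the basic crypto-shredding pattern using OpenSSL's EVP interface (the library choice and per-object layout are ours, for illustration only; the cited systems differ in how and where keys are managed): each object is encrypted under its own key, and "deleting" the object means destroying that key.

    /* Crypto-shredding sketch using OpenSSL's EVP API (library choice
       and object layout are ours, for illustration only). */
    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <openssl/rand.h>

    struct shreddable {
        unsigned char key[32];   /* per-object AES-256 key */
        unsigned char iv[12];    /* AES-GCM nonce          */
        unsigned char tag[16];   /* authentication tag     */
    };

    /* Encrypt one object under a fresh key; returns ciphertext length
       or -1 on failure. */
    int encrypt_object(struct shreddable *s, const unsigned char *pt,
                       int pt_len, unsigned char *ct) {
        if (RAND_bytes(s->key, sizeof s->key) != 1 ||
            RAND_bytes(s->iv, sizeof s->iv) != 1)
            return -1;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len, ct_len;
        EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, s->key, s->iv);
        EVP_EncryptUpdate(ctx, ct, &len, pt, pt_len);
        ct_len = len;
        EVP_EncryptFinal_ex(ctx, ct + len, &len);
        ct_len += len;
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof s->tag, s->tag);
        EVP_CIPHER_CTX_free(ctx);
        return ct_len;
    }

    /* Crypto-shred: once the key is destroyed, the ciphertext is
       unreadable no matter how many copies linger in lower layers. */
    void shred_object(struct shreddable *s) {
        OPENSSL_cleanse(s->key, sizeof s->key);
    }

The weak points listed above remain, however: the key itself must live somewhere, and shredding it reliably runs into the same storage-stack duplication problem that motivates this paper.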
2.1 The Problem

The key problem with existing approaches is that deletion is an afterthought - it is considered at the end of the data lifecycle rather than the beginning. The contract the application and user has with the storage stack is that anything that is written to it must be able to be read back with perfect fidelity forever, unless deletion is requested.

We know from regulatory requirements that for many companies, this is already not the case. Data is only to be kept as long as it is needed, and for most data items, the answer is certainly not forever. For the end user, while the answer may be forever for some key documents (pictures, an important letter), it is probably not the case for others (the menu for the restaurant you ate at last month, or your copy of Linux 2.14).

If we were able to express at the beginning of the data life cycle how long we needed data for and with what fidelity, the storage stack may be able to automatically manage this data lifecycle for us. Moreover, by expressing the lifespan of data when it is created, the need to explicitly delete data, which can have performance implications, is reduced because the data will decay on its own through the storage stack. As a side effect, the storage stack will become more efficient: for instance, the stack doesn't need to replicate data which will only be read several times over the course of an hour, and can instead focus those resources on data which will be kept for decades.
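As an illustration of what "expressing lifespan at creation" could look like, here is a hypothetical interface sketch; the retention_hint structure, the open_with_retention() call, and the numbers in the example are all ours and do not exist in POSIX or Linux today.

    /* Hypothetical interface sketch: nothing below exists today; it only
       illustrates declaring a lifespan and fidelity at creation time. */
    #include <fcntl.h>
    #include <stdint.h>

    struct retention_hint {
        uint64_t lifetime_seconds;  /* how long the data must stay readable */
        double   max_error_rate;    /* tolerable uncorrected bit error rate */
    };

    /* Stub: today this can only fall back to a plain open(); a decay-aware
       stack would forward the hint to the filesystem and device so they
       can provision (or skip) redundancy accordingly. */
    static int open_with_retention(const char *path, int flags, mode_t mode,
                                   const struct retention_hint *hint) {
        (void)hint;  /* not expressible with current kernels */
        return open(path, flags, mode);
    }

    int main(void) {
        /* Example: a cache file that only needs to survive 30 days and can
           tolerate one uncorrected error per 10^9 bits. */
        struct retention_hint hint = {
            .lifetime_seconds = 30ull * 24 * 60 * 60,
            .max_error_rate   = 1e-9,
        };
        int fd = open_with_retention("cache.dat", O_CREAT | O_RDWR, 0644, &hint);
        return fd < 0;
    }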
To address this problem, we believe that data should decay by default, instead of being reliably persisted forever. This will force applications and users to take an active role in expressing what data they want to keep, rather than waiting to delete files once a device has reached capacity. To effectively control decay, we will need a method for each layer of the stack to express reliability and persistence requirements, so that a user or application can dictate the lifespan of data when it is created. Finally, instead of treating all errors as unrecoverable, devices should throw exceptions, exposing potentially corrupt data, as it may still be usable by the application or end user. An application can decide whether such exceptions are recoverable or not. This will allow an application to leverage decaying data and pave the way for self-decaying data, which reduces the burden of managing the data life cycle and allows us to build much more efficient devices.

Threat model. We believe that it is also important to consider a threat model when it comes to enforcing persistence and reliability. For the purposes of this paper, we do not consider secure deletion, which is the process of deleting data so that it is not recoverable by a sophisticated adversary which may be able to recover data from areas which are not user-accessible. We assume that each layer of the stack is not malicious (and will respect the persistence and reliability requirements given).

[Figure 1 (plot omitted): Raw Bit Error Rate versus P/E cycles, with curves for 1-day, 3-week, and 3-month retention errors, program-interference errors, and read errors.]

Figure 1: Probability of different types of errors, depending on P/E cycle (age) of the SSD. Program-interference errors occur when a cell's voltage is disturbed by the programming of nearby pages. Similarly, read errors occur when a cell's voltage is disturbed by repeated reads to nearby pages. Data reproduced from [1].

3 A Stack for Decaying Data

The storage stack has been designed to persist data forever. In this section, we first explore how to achieve self-decaying data at the physical media layer. Then we present policies that applications would express in the face of such auto-decaying data. Finally, we propose mechanisms to propagate exceptions generated by decaying data, thereby enabling enforcement of the policies.

3.1 Self-Decaying Data

Status Quo. At a high level, SSDs are a number of raw flash chips encapsulated with a firmware, or Flash Translation Layer (FTL). In addition to wear leveling, the most important functionality of the FTL is to abstract read errors at the hardware level from the user. The FTL employs many mechanisms to moderate the Raw Bit Error Rate (RBER) of raw NAND flash, which can be as low as 10^-6 - 10^-7 for new chips and as high as 10^-2, depending on the type of flash (SLC vs. MLC vs. TLC vs. QLC, 2D vs. 3D) and how old it is – the older a device, the higher the RBER. These mechanisms try to reduce the RBER such that the error rate exposed by the device is at most 10^-17, and enterprise devices often have error rates as low as 10^-20. To give a sense of how low these traditional error rates are, an error rate of 10^-17 - 10^-20 means the user can expect to hit a bit error once in every 10PB - 1EB read from the device. For a drive size of 1TB, for example, this means on average the user expects only one error in every 10,000 to 10,000,000 full drive reads.
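As a back-of-the-envelope check of these figures (our arithmetic, treating the exposed error rate as a per-bit probability):

\[ 8 \times 10^{12} \text{ bits per 1TB drive} \times 10^{-17} \text{ errors/bit} = 8 \times 10^{-5} \text{ errors per full-drive read}, \]

i.e. roughly one error every 12,500 full-drive reads at an exposed error rate of 10^-17, and roughly one error every 12.5 million full-drive reads at 10^-20.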
Decay by Default. The main observation behind self-decaying data is that we can leverage the natural data deletion mechanisms in raw flash, in the form of the wide variety of bit errors that can affect NAND. Figure 1 shows the rate of a variety of possible errors, based on the age of an SSD in P/E cycles. The measurements in Figure 1 are for an MLC device that has an advertised lifetime of 3,000 P/E cycles [1],
and each of the measured errors can easily generate an RBER around 10^-6 within the lifetime of the device, which is about one error in every 1MB read. This suggests raw flash errors can generate enough corruption for data to be considered deleted, which means that data will eventually self-corrupt (destruct) without explicit intervention from higher layers.
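A rough calculation illustrates why this level of corruption amounts to deletion (our arithmetic, treating the RBER as an independent per-bit probability): a 1MB object, i.e. 8 x 10^6 bits, read at an RBER of 10^-6 comes back completely intact with probability

\[ (1 - 10^{-6})^{8 \times 10^{6}} \approx e^{-8} \approx 0.03\%, \]

so near the end of the device's lifetime essentially every object of non-trivial size is corrupted somewhere unless a higher layer adds redundancy of its own.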
Incentives. Data that decays by default provides several nice security guarantees. First, it can defend against buggy software delete implementations – even if a higher-level delete is incomplete, the data will eventually rot on the physical media. Second, self-decaying data can defend against adversarial data controllers. In particular, it increases the data controller's cost of retaining data, forcing controllers to build additional replication or parity mechanisms at the software level, which can impact performance and complexity. Third, self-decaying data also defends against inadvertent data copies. For example, by the time a third party indexes personal data, the data might have decayed sufficiently that the integrity of the original data is lost, thereby ensuring privacy for the user.

Finally, self-decaying data can help trustworthy datacenter operators that are seeking to reduce the total cost of ownership of their storage, in terms of both hardware and the code complexity of data management.

Mechanisms. To ensure that an SSD is self-decaying, we first propose removing any part of the FTL that deals with error correction, turning the device interface into a thin layer over the underlying raw NAND flash. Note that this is different from proposals such as OpenChannel SSD [27] and software-defined flash [28], which are platforms that enable users to implement co-design of tasks such as error handling and block placement.¹

¹ OpenChannel SSD is a good development platform that we will use to prototype these FTL modifications.

Two additional hardware mechanisms would further enable self-decaying data. First, depending on the lifetime of a data object, which can be predetermined at creation, the device can undervolt cells in order to achieve faster degradation. The error rates shown in Figure 1 are for cells that are charged at a default voltage threshold to enable the highest probability of correct read-back. Charge leakage, which reduces the voltage of a cell and occurs naturally over time as electrons leak due to imperfect capacitance, is the primary reason for raw bit errors [1, 2, 29], as shown in Figure 2. By undervolting cells, the FTL can explicitly mirror the effects of charge leakage by programming a cell to a voltage that is already less than the default voltage threshold. The extent to which the FTL undervolts can be determined by the data object's lifetime: the shorter the lifetime, the more drastic the undervolt. As a corollary, the device should by default disable charge refresh, which the device occasionally executes to combat charge leakage [30].

[Figure 2 (plot omitted): voltage distributions for a 2-bit cell (values 11, 10, 00, 01) along the voltage axis V.]

Figure 2: Raw bit errors are caused by voltage mismatches. The red lines indicate voltage cutoffs that are used to assign a voltage to bits (in this example, we consider 2-bit cells, which results in 4 possible values). The dotted lines indicate default voltage distributions. The solid lines indicate actual voltage distributions, derived from either charge leakage or undervolting. Any cells with voltages in the shaded area are misclassified and appear as raw bit errors.
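A possible FTL-side undervolting policy is sketched below; the voltage offsets and lifetime buckets are invented for illustration and would in practice have to come from characterization of the specific NAND part.

    /* Hypothetical FTL policy sketch: offsets and buckets are invented. */
    #include <stdint.h>

    /* Offset (in millivolts) subtracted from the default program-verify
       target. A larger offset mimics more charge leakage up front, so
       the data decays sooner. */
    static int32_t undervolt_offset_mv(uint64_t lifetime_seconds) {
        const uint64_t DAY = 24ull * 60 * 60;
        if (lifetime_seconds <= 1 * DAY)   return 300; /* decay within days   */
        if (lifetime_seconds <= 30 * DAY)  return 150; /* decay within weeks  */
        if (lifetime_seconds <= 365 * DAY) return 50;  /* decay within months */
        return 0;                                      /* program normally    */
    }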
Second, the device should by default completely eliminate read retries, which an FTL issues if it cannot correct all the raw bit errors in a flash page. Typically, the read retry will use a different read voltage that yields lower raw bit errors [31]. Retries obviously compound the latency of a read by however many retries are issued, which is why optimizing read retries receives considerable scrutiny in academia [29, 32, 33]. By eliminating retries, the FTL can simultaneously enable self-decaying data while improving device performance.

Finally, to reclaim space and aid in automatic garbage collection, we propose that the FTL mark a time-to-live (TTL) for every page in the spare area that already exists for every page in a flash device. In this way, the device is semantically aware of block lifetimes, which can in turn be used to enforce application-level retention policies, such as TTL policies in GDPR [4]. Having the FTL manage object lifetime can be very efficient, as opposed to enforcing TTL at the application layer, which can slow down applications by 2-4× [18].
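A sketch of the per-page metadata this implies is shown below; the layout is hypothetical, and real spare areas also carry ECC and mapping information.

    /* Hypothetical layout of lifetime metadata in a page's spare
       (out-of-band) area. */
    #include <stdbool.h>
    #include <stdint.h>

    struct spare_area {
        uint64_t logical_addr;  /* reverse mapping for garbage collection    */
        uint64_t expires_at;    /* absolute expiry time in seconds; 0 = none */
        /* ... ECC bytes, bad-block marker, etc. ... */
    };

    /* During garbage collection, expired pages need not be copied
       forward: their contents are simply dropped, reclaiming space. */
    static bool page_is_dead(const struct spare_area *sa, uint64_t now) {
        return sa->expires_at != 0 && now >= sa->expires_at;
    }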
3.2 Expressing Persistence Policies

An interesting policy enabled by the self-decaying data mechanism is probabilistic deletion guarantees: self-decaying data inherently enables probabilistic deletion, even without intervening effort. Current deletion mechanisms are inherently probabilistic because there is no way of controlling dataflow during the lifetime of the data. Right now, whatever can be deleted is guaranteed to be deleted within X days. Instead, everything would be deleted within X days with some probability P(X), following a distribution whose tail gets smaller as X increases.

Persistence policies could also relax the machinery applications build for themselves: one potential benefit of our stack is that applications such as RocksDB and Postgres may no longer need a write-ahead log, since crash consistency may no longer be a problem and recovery may not be needed.
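One way to state such a guarantee formally (our notation; neither the regulations nor this paper fix a particular model): let D be the time at which an object's accumulated raw bit errors first exceed the deletion threshold. A probabilistic deletion guarantee is then a bound of the form

\[ \Pr[D \le X] \ge F(X), \]

where F is non-decreasing in X and approaches 1, so longer retention windows X come with correspondingly stronger deletion guarantees.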
3.3 Propagating Exceptions

Recent research investigations have observed that upper layers in the software stack do not, but can and should, correct uncorrectable errors from lower layers in the stack [34, 35, 36]. We argue that propagating exceptions is simple because we can simply reuse the infrastructure in place to meticulously throw
and handle uncorrectable errors. The only modification we have to make is to propagate data alongside the error status, which would include modifying standards such as POSIX. For example, currently POSIX makes no guarantees about data returned in a buffer for a read that ends in error.

[Figure 3 (diagram omitted): three variants (a)-(c) of the stack (application, libc + libraries, kernel, hardware) showing how an error and the associated data (A, A', A + ECC) propagate upward, annotated with the desired reliability, policy, and mechanism at each layer.]

Figure 3: Propagating errors: (a) Currently, with standards such as the POSIX specification, if an operation results in error, there is no guarantee regarding data propagation. (b) Instead, we propose treating errors as exceptions, which contain corrupted data, in the hopes that higher layers in the stack can resolve the corruption. (c) Propagating exceptions applies to errors that originate in any layer of the stack.
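To sketch what such an interface could look like at the libc boundary (the names, the EDATACORRUPT code, and the struct are ours, not part of POSIX or any existing libc):

    /* Hypothetical "exception-carrying" read: the buffer is always filled
       with whatever the device returned, and a separate status says
       whether it is trustworthy. Names and error code are invented. */
    #include <stddef.h>
    #include <unistd.h>

    #define EDATACORRUPT 1001       /* hypothetical errno-style code */

    struct read_result {
        ssize_t bytes;              /* bytes placed in the buffer          */
        int     status;             /* 0, or EDATACORRUPT on a CRC failure */
    };

    /* Stub over today's read(2): the current interface cannot hand back
       corrupt data, so a failed read yields no bytes at all. A decay-aware
       stack would instead fill buf and set status = EDATACORRUPT, letting
       the caller repair the data, serve it as-is, or treat it as decayed. */
    static struct read_result read_with_exception(int fd, void *buf, size_t count) {
        struct read_result r = { .bytes = read(fd, buf, count), .status = 0 };
        if (r.bytes < 0)
            r.status = EDATACORRUPT;
        return r;
    }

The preliminary implementation described next surfaces the same kind of information to Java code as a corruption exception.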
We built a preliminary implementation to propagate a specific kind of hardware error: flash drives have a final CRC32 mechanism that checks the fidelity of a page before it is handed back to the user. If this CRC32 check fails, then no data is given to the user, only an error status, even though this corruption error could potentially be corrected by redundancy at the application level.

In our implementation, we propagated the corruption error notification in addition to the corrupted data through a custom libc that we linked against our custom version of Java's HotSpot engine, which throws a custom corruption exception in the IO package. We emulated the hardware errors with a special FUSE library [37] that was able to pass data alongside the corruption notification. With this simple, modified stack, we were able to propagate exceptions from the hardware to the application (the Java programmer), which would allow applications to deal with the error with an application-specific policy. We anticipate that the same methodology can be used to propagate a variety of errors originating in hardware or any layer of the stack.

4 Conclusion

For decades, we have built storage stacks which try to persist data reliably forever, without realizing that applications and users often don't need or even want data to persist that long. In this paper, we argued for rethinking storage by turning the requirement upside down, forcing data to decay by default and enabling a choice as to how long data should persist which can be passed down the storage stack. We hope that this paper opens a discussion on how we can achieve decaying data, an effort which will require coordination from all parts of the storage stack. Data Should Not Be Forever. Data Must Decay!

5 Discussion

We'd like to discuss what kinds of policies we can support with these new mechanisms, and at which layer(s) of the stack that we have presented it makes sense to enforce these policies.

We'd also like to discuss the potential benefits of probabilistic data deletion guarantees. Regulations are currently ambiguous about data deletion guarantees such as time-to-live (TTL) and the right to be forgotten. Current deletion mechanisms are inherently probabilistic because there is no way of controlling dataflow during the lifetime of the data. Right now, whatever can be deleted is guaranteed to be deleted within X days. Instead, everything would be deleted within X days with some probability P(X), following a distribution whose tail gets smaller as X increases.

References

[1] Yu Cai, Erich F Haratsch, Onur Mutlu, and Ken Mai. Error patterns in MLC NAND flash memory: Measurement, characterization, and analysis. In 2012 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 521–526. IEEE, 2012.

[2] Yu Cai, Saugata Ghose, Erich F Haratsch, Yixin Luo, and Onur Mutlu. Error characterization, mitigation, and recovery in flash-memory-based solid-state drives. Proceedings of the IEEE, 105(9):1666–1704, 2017.

[3] PostgreSQL's handling of fsync() errors is unsafe and risks data loss at least on XFS. https://www.postgresql.org/message-id/flat/CAMsr%2BYHh%2B5Oq4xziwwoEfhoTZgr07vdGG%2Bhu%3D1adXx59aTeaoQ%40mail.gmail.com.

[4] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). https://gdpr-info.eu/. Accessed: February 2020.

[5] Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191.

[6] The California Consumer Privacy Act of 2018. California Civil Code, Section 1798.100.

[7] Joel Reardon, David Basin, and Srdjan Capkun. SoK: Secure data deletion. In 2013 IEEE Symposium on Security and Privacy, pages 301–315. IEEE, 2013.

[8] Michael Yung Chung Wei, Laura M Grupp, Frederick E Spada, and Steven Swanson. Reliably erasing data from flash-based solid state drives. In USENIX Conference on File and Storage Technologies (FAST), 2011.

[9] Brian D Strom, SungChang Lee, George W Tyndall, and Andrei Khurshudov. Hard disk drive reliability modeling and failure prediction. IEEE Transactions on Magnetics, 43(9):3676–3684, 2007.
[10] Bianca Schroeder and Garth A Gibson. Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you? In USENIX Conference on File and Storage Technologies (FAST), 2007.

[11] Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. Flash reliability in production: The expected and the unexpected. In 14th USENIX Conference on File and Storage Technologies (FAST 16), pages 67–80, 2016.

[12] Justin Meza, Qiang Wu, Sanjev Kumar, and Onur Mutlu. A large-scale study of flash memory failures in the field. In Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, pages 177–190, Portland, Oregon, 2015.

[13] Iyswarya Narayanan, Di Wang, Myeongjae Jeon, Bikash Sharma, Laura Caulfield, Anand Sivasubramaniam, Ben Cutler, Jie Liu, Badriddine Khessib, and Kushagra Vaid. SSD failures in datacenters: What? When? and Why? In Proceedings of the 9th ACM International on Systems and Storage Conference, page 7. ACM, 2016.

[14] Sidi Lu, Bing Luo, Tirthak Patel, Yongtao Yao, Devesh Tiwari, and Weisong Shi. Making disk failure predictions SMARTer! In 18th USENIX Conference on File and Storage Technologies (FAST 20), pages 151–167, 2020.

[15] Richard Kissel, Matthew A Scholl, Steven Skolochenko, and Xing Li. SP 800-88 Rev. 1: Guidelines for media sanitization, 2006.

[16] DOE DoD and NRC CIA. DoD 5220.22-M, National Industrial Security Program Operating Manual, January 1995.

[17] Peter Gutmann. Secure deletion of data from magnetic and solid-state memory. In Proceedings of the Sixth USENIX Security Symposium, San Jose, CA, volume 14, pages 77–89, 1996.

[18] Aashaka Shah, Vinay Banakar, Supreeth Shastri, Melissa Wasserman, and Vijay Chidambaram. Analyzing the impact of GDPR on storage systems. In 11th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 19), 2019.

[19] Supreeth Shastri, Vinay Banakar, Melissa Wasserman, Arun Kumar, and Vijay Chidambaram. Understanding and benchmarking the impact of GDPR on database systems. Proceedings of the VLDB Endowment, 13(7), 2020.

[20] Roxana Geambasu, Tadayoshi Kohno, Amit A Levy, and Henry M Levy. Vanish: Increasing data privacy with self-destructing data. In USENIX Security Symposium, volume 316, 2009.

[21] Radia Perlman. The Ephemerizer: Making data disappear, 2005.

[22] Radia Perlman. File system design with assured delete. In Third IEEE International Security in Storage Workshop (SISW'05), pages 6–pp. IEEE, 2005.

[23] Nicole Perlroth, Jeff Larson, and Scott Shane. N.S.A. able to foil basic safeguards of privacy on web. https://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html.

[24] Thomas C Hales. The NSA back door to NIST. Notices of the AMS, 61(2):190–19, 2013.

[25] Shattered. https://shattered.io/.

[26] Scott Wolchok, Owen S Hofmann, Nadia Heninger, Edward W Felten, J Alex Halderman, Christopher J Rossbach, Brent Waters, and Emmett Witchel. Defeating Vanish with low-cost Sybil attacks against large DHTs. In NDSS, 2010.

[27] Matias Bjørling, Javier González, and Philippe Bonnet. LightNVM: The Linux open-channel SSD subsystem. In 15th USENIX Conference on File and Storage Technologies (FAST 17), pages 359–374, 2017.

[28] Jian Ouyang, Shiding Lin, Song Jiang, Zhenyu Hou, Yong Wang, and Yuanzheng Wang. SDF: Software-defined flash for web-scale internet storage systems. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems, pages 471–484, 2014.

[29] Yu Cai, Yixin Luo, Erich F Haratsch, Ken Mai, and Onur Mutlu. Data retention in MLC NAND flash memory: Characterization, optimization, and recovery. In 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), pages 551–563. IEEE, 2015.

[30] Yu Cai, Gulay Yalcin, Onur Mutlu, Erich F Haratsch, Adrian Cristal, Osman S Unsal, and Ken Mai. Flash correct-and-refresh: Retention-aware error management for increased flash memory lifetime. In 2012 IEEE 30th International Conference on Computer Design (ICCD), pages 94–101. IEEE.

[31] NAND flash media management algorithms. https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2016/20160808_PreConfH_Haratsch.pdf.

[32] Qiao Li, Min Ye, Yufei Cui, Liang Shi, Xiaoqiang Li, and Chun Jason Xue. Sentinel cells enabled fast read for NAND flash. In 11th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 19), 2019.

[33] Yixin Luo, Saugata Ghose, Yu Cai, Erich F Haratsch, and Onur Mutlu. HeatWatch: Improving 3D NAND flash memory device reliability by exploiting self-recovery and temperature awareness. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 504–517. IEEE, 2018.

[34] Amy Tai, Andrew Kryczka, Shobhit O Kanaujia, Kyle Jamieson, Michael J Freedman, and Asaf Cidon. Who's afraid of uncorrectable bit errors? Online recovery of flash errors with distributed redundancy. In 2019 USENIX Annual Technical Conference (USENIX ATC 19), pages 977–992, 2019.

[35] Aishwarya Ganesan, Ramnatthan Alagappan, Andrea C Arpaci-Dusseau, and Remzi H Arpaci-Dusseau. Redundancy does not imply fault tolerance: Analysis of distributed storage reactions to single errors and corruptions. In 15th USENIX Conference on File and Storage Technologies (FAST 17), pages 149–166, 2017.

[36] Ramnatthan Alagappan, Aishwarya Ganesan, Eric Lee, Aws Albarghouthi, Vijay Chidambaram, Andrea C Arpaci-Dusseau, and Remzi H Arpaci-Dusseau. Protocol-aware recovery for consensus-based distributed storage. ACM Transactions on Storage (TOS), 14(3):1–30, 2018.

[37] FUSE. https://www.kernel.org/doc/html/latest/filesystems/fuse.html.
