Introduction
This work describes the cooperative cache algorithm used in zFS and explores the effectiveness of this algorithm, and of zFS as a file system, by comparing the system's performance to NFS using the IOZONE benchmark.
zFS
zFS is a distributed file system built from Object Store Devices (OSDs) and a set of cooperating machines. A central objective of the zFS design is to use the memory of all participating machines as one cooperative cache.
The Architecture
zFS has six components:
1. Front End (FE)
2. Cooperative Cache (Cache)
3. File Manager (FMGR)
4. Lease Manager (LMGR)
5. Transaction Server (TSVR)
6. Object Store (ObS)
The Components
Object Store
It is the storage device on which files and directories are created and from which they are retrieved. It handles the physical disk chores of block allocation and mapping. The ObS API enables creation and deletion of objects (files).
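The create/delete interface and the internal block mapping can be sketched as follows. This is a minimal in-memory illustration; all class and method names are assumptions for exposition, not the real ObS API.

```python
# Illustrative in-memory sketch of an Object Store (ObS).
# Names are invented for exposition; a dict stands in for the
# on-disk block-allocation map that the real ObS manages.
class ObjectStore:
    """Maps object IDs to their allocated pages (blocks)."""

    def __init__(self):
        self._objects = {}  # object id -> {page number: data}

    def create(self, obj_id):
        # Create an empty object (a file) on this store.
        if obj_id in self._objects:
            raise ValueError(f"object {obj_id} already exists")
        self._objects[obj_id] = {}

    def delete(self, obj_id):
        # Delete the object and free its blocks.
        del self._objects[obj_id]

    def write_page(self, obj_id, page_no, data):
        # Block allocation and mapping are handled inside the store.
        self._objects[obj_id][page_no] = data

    def read_page(self, obj_id, page_no):
        return self._objects[obj_id].get(page_no)
```

The point of the abstraction is that callers deal only in objects and pages; the physical placement of blocks never leaves the store.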
Front End
Runs on every workstation on which a client wants to use zFS. It provides access to zFS files and directories.
Lease Manager
Leases are used to maintain data integrity in zFS; they have an expiration period that is set in advance. Each ObS has one lease manager, which acquires the major lease for the ObS and grants exclusive leases on objects residing on that ObS.
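The expiration mechanism can be sketched as below. The fixed lease period, the renewal-on-reacquire behavior, and all names are assumptions made for illustration; the real zFS lease protocol is more involved.

```python
import time

# Sketch of exclusive-lease granting with a fixed, pre-set expiration
# period. LEASE_PERIOD and the renewal policy are illustrative
# assumptions, not the actual zFS parameters.
LEASE_PERIOD = 30.0  # seconds, set in advance for all leases

class LeaseManager:
    """One lease manager per ObS; grants exclusive leases on its objects."""

    def __init__(self, now=time.monotonic):
        self._now = now            # injectable clock, for testing
        self._leases = {}          # object id -> (holder, expiry time)

    def acquire(self, obj_id, holder):
        t = self._now()
        current = self._leases.get(obj_id)
        # A lease past its expiry is treated as released: integrity
        # relies on holders not using a lease beyond its period.
        if current is not None and current[1] > t and current[0] != holder:
            return False  # another holder has a valid exclusive lease
        self._leases[obj_id] = (holder, t + LEASE_PERIOD)
        return True
```

Because leases expire on their own, a crashed holder blocks other machines only until the pre-set period runs out, with no recovery protocol needed.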
File Manager
Each zFS file is managed by a single file manager, which obtains the exclusive lease for the file from the lease manager. It keeps track of each accomplished open() and read() request.
Cooperative Cache
Due to fast network connections, it takes less time to retrieve data from another machine's memory than from a local disk.
Transaction Server
Each directory operation is protected inside a transaction, which helps maintain the consistency of the file system. The transaction server acquires all required leases and holds onto them for as long as it can.
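The acquire-all-then-run pattern can be sketched as follows. The class and method names are invented for illustration and the real TSVR protocol (including how leases are eventually reclaimed) is not shown.

```python
# Hedged sketch of a directory operation protected by a transaction.
# Names are illustrative, not the real TSVR interface.
class TransactionServer:
    def __init__(self):
        self._held = set()  # leases currently held by this TSVR

    def run(self, needed_leases, operation):
        """Acquire all leases the directory operation needs, then run it."""
        # Acquire every required lease up front (ones already held
        # need no new acquisition).
        for lease in needed_leases:
            self._held.add(lease)
        result = operation()
        # Leases are deliberately NOT released here: the TSVR holds
        # onto them as long as it can, so repeated operations on the
        # same directories avoid re-acquisition round trips.
        return result
```

Holding leases across transactions trades some sharing latency (other nodes must wait or request a handoff) for far fewer lease-manager round trips on directory-heavy workloads.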
When memory pressure develops on a machine A, a page that is replicated elsewhere is simply discarded; a page that is a singlet (the last cached copy) is forwarded to another node using the following steps:
1. A message is sent to the zFS file manager indicating that the page is sent to another machine B, the node with the largest free memory known to A.
2. Due to network delay, such a message may reach a node N only after memory pressure has developed on N and N has already discarded the page, since it was marked replicated.
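The discard-or-forward decision above can be sketched as a small function. The data structures and callback are assumptions for illustration, not the actual zFS implementation.

```python
# Sketch of the eviction decision on memory pressure at node A.
# The page/dict representation and callback are illustrative.
def evict(page, free_memory_by_node, notify_file_manager):
    """Handle one victim page under memory pressure.

    page: dict with keys "id" and "singlet" (True if last cached copy).
    free_memory_by_node: free memory known to A, per candidate node.
    notify_file_manager: callback informing the file manager of a move.
    Returns (action, destination node or None).
    """
    if not page["singlet"]:
        # A replicated page can simply be dropped; a copy exists elsewhere.
        return ("discard", None)
    # A singlet is forwarded to B, the node with the largest free
    # memory known to A, and the file manager is told where it went.
    b = max(free_memory_by_node, key=free_memory_by_node.get)
    notify_file_manager(page["id"], b)
    return ("forward", b)
```

Note that `free_memory_by_node` is only A's (possibly stale) view, which is exactly what makes the race described above possible.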
Reducing the Message Count
It is more efficient to transmit k pages in one message than to transmit each of them in a separate message. The researchers tested the time it takes to transmit a file of N pages in chunks of 1...k pages per message; the best results were achieved for k=4 and k=8. Similar performance was achieved by the zFS pre-fetching mechanism.
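The batching itself is simple to state in code. This is a generic sketch of grouping pages into k-sized chunks, not the zFS wire format.

```python
# Sketch of batching k pages into one message instead of sending
# one message per page. The chunk representation is illustrative.
def batch_pages(pages, k):
    """Split a file's pages into chunks of up to k pages per message."""
    return [pages[i:i + k] for i in range(0, len(pages), k)]
```

For a 64-page file, k=1 costs 64 messages, k=4 costs 16, and k=8 costs 8, so per-message overhead (headers, round trips) shrinks roughly by a factor of k.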
Methodology Used
The IOZONE benchmark tool was used to compare zFS performance to that of NFS. NFS does not carry out pre-fetching, so to make up for this, IOZONE was configured to read the NFS-mounted file using record sizes of n=1,4,8,16 pages. zFS-mounted files were read with a record size of one page but with the pre-fetching parameter R=1,4,8,16 pages.
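The reason these two configurations are comparable can be shown with a small counting model: reading an N-page file over NFS with record size n issues one request per n-page record, while zFS with record size one and pre-fetch range R misses only once per R pages. The model below is an assumption made for exposition, not part of the actual IOZONE setup.

```python
import math

# Illustrative model of the two benchmark configurations; names and
# the cost model are assumptions, not IOZONE internals.
def nfs_requests(n_pages, record_size):
    # NFS: one read request per record of `record_size` pages.
    return math.ceil(n_pages / record_size)

def zfs_fetches(n_pages, prefetch_range):
    # zFS: record size is one page, but each cache miss pre-fetches
    # `prefetch_range` pages, so only every R-th read actually misses.
    return math.ceil(n_pages / prefetch_range)
```

With n = R, both configurations fetch pages from the server in groups of the same size, which is what makes the head-to-head comparison fair.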
Observations
The performance of NFS was almost the same for the different block sizes, but it was almost four times better when the file fit entirely in memory. The performance of zFS with the cooperative cache is much better than that of NFS. When the cooperative cache was deactivated, different behaviors were observed for different ranges of pages.
Observations
The performance of zFS for R=1 is lower than that of NFS. For larger ranges, the performance of zFS was slightly better than that of NFS due to pre-fetching. When the cooperative cache is used, zFS performance is significantly better than that of NFS. Performance with the cooperative cache is lower in the second case due to memory pressure and discarded pages generating reject messages.
Conclusion
The results show that using the cache of all the clients as one cooperative cache gives better performance than NFS, and better performance than zFS without the cooperative cache. The results also show that pre-fetching with ranges of four and eight pages yields much better performance.