
Two examples for recovering VxVM configuration

for a Bare Metal Disaster Recovery scenario



2011-03-22
Introduction

This document presents two of the many possible methods for recovering the VxVM
configuration in the following scenarios:
1. The customer has a similar number of devices of the same size or larger in the Disaster
Recovery site as in production, and would like to use the information in the VxVM
configuration backup area.
2. The customer has a very different LUN layout, but can use vxassist to create volumes of an
appropriate size.
In the first situation, since the Disaster Recovery site has provided matching storage at the LUN level,
the user will be able to retain the underlying volume layout by simply mapping the old LUN devices
to the new storage naming. In the second scenario, the user will not be able to keep the previous
volume layout due to differences in the storage provided. A method will be presented on how to
extract the volume name and size information and create VxVM objects of the appropriate size. As
stated, this method will not capture all of the underlying volume characteristics, and will potentially
not work with more advanced VxVM objects.
The two methods described here are by no means the only approaches to recovering the VxVM
configuration, and in many cases they will not be the most appropriate. The end user will need to
understand both the production environment and the DR scenario to select the method that is most
appropriate to their recovery situation.
In most cases, and for the purpose of this demonstration, the output of vxconfigbackup is stored in a
time- and date-stamped subdirectory under the /etc/vx/cbr/bk directory. Because the
vxconfigbackup command time-stamps its output, the user can determine the most appropriate
backup version to use.
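
As an illustration, a configuration backup of the example disk group could be taken manually with a
command such as the one below; each run creates a new time-stamped subdirectory under
/etc/vx/cbr/bk:

vxconfigbackup test01dg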
Many customers back up the vxconfigbackup information as part of the regular system backup, and
make that information available to the DR site. In these examples, it is assumed that:
- the /etc/vx/cbr/bk filesystem is backed up as part of the regular system backup (either file
system backup, flash archive, or mksysb), for example via a simple archive as sketched below
- the OS image has been installed and the vxconfigbackup directories are available
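
One simple way to make the configuration backup area available off-site is to archive it along with
the rest of the system backup; the sketch below assumes a hypothetical /backup destination:

tar -cvf /backup/vxvm-cbr.tar /etc/vx/cbr/bk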
The information contained in the vxconfigbackup area does not replace adequate system
documentation. Customers should ensure that vxconfigbackupd is configured appropriately for their
system prior to beginning a recovery effort. For additional information on the vxconfigbackup utility
and the vxconfigbackupd daemon, please consult the appropriate system documentation.

Example System
The example system used in the course of this document has 4 EMC Clariion LUNs of 52GB in size.
The system has the following Volume layout: 3 48GB 4 way striped volumes (testvol00, testvol01 and
testvol02) in 1 disk group named test01dg.
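
For reference only, a layout of this kind could originally have been created with vxassist commands
similar to the following (the exact commands used in production are not recorded in the
configuration backup):

vxassist -g test01dg make testvol01 48g layout=stripe ncol=4
vxassist -g test01dg make testvol02 48g layout=stripe ncol=4
vxassist -g test01dg make testvol03 48g layout=stripe ncol=4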

The output of vxprint -g test01dg shows:
[root@ebctest211 bk]# vxprint -g test01dg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg test01dg test01dg - - - - - -

dm test01disk00 emc_clariion0_44 - 104661936 - - - -
dm test01disk01 emc_clariion0_45 - 104661936 - - - -
dm test01disk02 emc_clariion0_46 - 104661936 - - - -
dm test01disk03 emc_clariion0_47 - 104661936 - - - -

v testvol01 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol01-01 testvol01 ENABLED 100663296 - ACTIVE - -
sd test01disk00-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk01-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk02-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk03-03 testvol01-01 ENABLED 25165824 0 - - -

v testvol02 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol02-01 testvol02 ENABLED 100663296 - ACTIVE - -
sd test01disk00-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk01-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk02-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk03-02 testvol02-01 ENABLED 25165824 0 - - -

v testvol03 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol03-01 testvol03 ENABLED 100663296 - ACTIVE - -
sd test01disk00-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk01-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk02-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk03-01 testvol03-01 ENABLED 25165824 0 - - -
[root@ebctest211 bk]#

The output of vxdisk list shows:
[root@ebctest211 bk]# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:none - - online invalid
disk_1 auto:none - - online invalid
emc_clariion0_44 auto:cdsdisk test01disk00 test01dg online
emc_clariion0_45 auto:cdsdisk test01disk01 test01dg online
emc_clariion0_46 auto:cdsdisk test01disk02 test01dg online
emc_clariion0_47 auto:cdsdisk test01disk03 test01dg online

Recovery with a similar number of devices of the appropriate size

As noted above, our production site had four EMC Clariion devices. For this example, we are going to
recover the VxVM configuration onto four FAS3070 devices. The advantage of this method is that all
volumes are automatically created, and the underlying device layout is maintained.
Below is an example of the storage layout on the system at the DR site:
[root@ebctest211 tmp]# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:none - - online invalid
disk_1 auto:none - - online invalid
fas30700_1 auto:cdsdisk - - online thin
fas30700_2 auto:cdsdisk - - online thin
fas30700_3 auto:cdsdisk - - online thin
fas30700_4 auto:cdsdisk - - online thin

As mentioned before, several backups may be present in the /etc/vx/cbr/bk directory. For this
example, we are going to recover from the contents of the
test01dg.1298222923.13.ebctest211.ebc.veritas.com directory.
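
Listing the backup area by modification time is a quick way to see the available backups and their
time stamps, for example:

ls -lt /etc/vx/cbr/bk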
Create a disk group with the appropriate number of LUNs from the FAS3070 array:
[root@ebctest211 tmp]# vxdg init test01dg fas30700_1 fas30700_2 fas30700_3 fas30700_4
[root@ebctest211 tmp]#

The vxdisk and vxprint output after this operation shows the following:
[root@ebctest211 test01dg.1298222923.13.ebctest211.ebc.veritas.com]# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:none - - online invalid
disk_1 auto:none - - online invalid
fas30700_1 auto:cdsdisk fas30700_1 test01dg online thin
fas30700_2 auto:cdsdisk fas30700_2 test01dg online thin
fas30700_3 auto:cdsdisk fas30700_3 test01dg online thin
fas30700_4 auto:cdsdisk fas30700_4 test01dg online thin
[root@ebctest211 test01dg.1298222923.13.ebctest211.ebc.veritas.com]# vxprint -g test01dg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg test01dg test01dg - - - - - -

dm fas30700_1 fas30700_1 - 104661936 - - - -
dm fas30700_2 fas30700_2 - 104661936 - - - -
dm fas30700_3 fas30700_3 - 104661936 - - - -
dm fas30700_4 fas30700_4 - 104661936 - - - -
[root@ebctest211 test01dg.1298222923.13.ebctest211.ebc.veritas.com]#

It should be noted that the system assigned the names fas30700_1, fas30700_2, fas30700_3 and
fas30700_4 to the new devices in the disk group, rather than the previous user-defined names
test01disk00, test01disk01, test01disk02 and test01disk03. These names will need to be changed in
the steps below so that the vxmake utility can properly map the plex layout to the underlying
subdisks. As an example, we will first attempt to apply the configuration prior to renaming the devices.

Extract the volume, plex and sub-disk information and prepare it for the vxmake utility:
[root@ebctest211 test01dg.1298222923.13.ebctest211.ebc.veritas.com]# pwd
/etc/vx/cbr/bk/test01dg.1298222923.13.ebctest211.ebc.veritas.com
[root@ebctest211 test01dg.1298222923.13.ebctest211.ebc.veritas.com]# cat *.cfgrec | vxprint -D - -mpvshr > /tmp/test01dg-vxmake.out
[root@ebctest211 test01dg.1298222923.13.ebctest211.ebc.veritas.com]#

Let's apply the configuration prior to renaming the devices to understand why this task is necessary.
Notice that the vxmake command cannot determine how to map the previous plexes and subdisks to
the new storage layout. A following step will show how to correct this situation.
[root@ebctest211 tmp]# vxmake -g test01dg -d /tmp/test01dg-vxmake.out
VxVM vxmake ERROR V-5-1-639 Failed to obtain locks:
test01disk03: no such object in the configuration
test01disk02: no such object in the configuration
test01disk01: no such object in the configuration
test01disk00: no such object in the configuration
test01disk03: no such object in the configuration
test01disk02: no such object in the configuration
test01disk01: no such object in the configuration
test01disk00: no such object in the configuration
test01disk03: no such object in the configuration
test01disk02: no such object in the configuration
test01disk01: no such object in the configuration
test01disk00: no such object in the configuration
[root@ebctest211 tmp]#

To be able to apply the configuration, rename the FAS3070 disk media names to match the previous
naming scheme. The rename commands and the resulting vxdisk list output follow:
[root@ebctest211 tmp]# vxedit -g test01dg rename fas30700_1 test01disk00
[root@ebctest211 tmp]# vxedit -g test01dg rename fas30700_2 test01disk01
[root@ebctest211 tmp]# vxedit -g test01dg rename fas30700_3 test01disk02
[root@ebctest211 tmp]# vxedit -g test01dg rename fas30700_4 test01disk03
[root@ebctest211 tmp]# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:none - - online invalid
disk_1 auto:none - - online invalid
fas30700_1 auto:cdsdisk test01disk00 test01dg online thin
fas30700_2 auto:cdsdisk test01disk01 test01dg online thin
fas30700_3 auto:cdsdisk test01disk02 test01dg online thin
fas30700_4 auto:cdsdisk test01disk03 test01dg online thin

Now re-apply the configuration to the test01dg disk group using the following vxmake command. The
output of vxprint -g test01dg is included below:
[root@ebctest211 tmp]# vxmake -g test01dg -d /tmp/test01dg-vxmake.out
[root@ebctest211 tmp]# vxprint -g test01dg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg test01dg test01dg - - - - - -

dm test01disk00 fas30700_1 - 104661936 - - - -
dm test01disk01 fas30700_2 - 104661936 - - - -
dm test01disk02 fas30700_3 - 104661936 - - - -
dm test01disk03 fas30700_4 - 104661936 - - - -

v testvol01 fsgen DISABLED 100663296 - EMPTY - -
pl testvol01-01 testvol01 DISABLED 100663296 - EMPTY - -
sd test01disk00-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk01-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk02-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk03-03 testvol01-01 ENABLED 25165824 0 - - -

v testvol02 fsgen DISABLED 100663296 - EMPTY - -
pl testvol02-01 testvol02 DISABLED 100663296 - EMPTY - -
sd test01disk00-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk01-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk02-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk03-02 testvol02-01 ENABLED 25165824 0 - - -

v testvol03 fsgen DISABLED 100663296 - EMPTY - -
pl testvol03-01 testvol03 DISABLED 100663296 - EMPTY - -
sd test01disk00-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk01-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk02-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk03-01 testvol03-01 ENABLED 25165824 0 - - -

You should now be able to start the volumes, as shown below:
[root@ebctest211 tmp]# vxvol -g test01dg start testvol01
[root@ebctest211 tmp]# vxvol -g test01dg start testvol02
[root@ebctest211 tmp]# vxvol -g test01dg start testvol03
[root@ebctest211 tmp]# vxprint -g test01dg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg test01dg test01dg - - - - - -

dm test01disk00 fas30700_1 - 104661936 - - - -
dm test01disk01 fas30700_2 - 104661936 - - - -
dm test01disk02 fas30700_3 - 104661936 - - - -
dm test01disk03 fas30700_4 - 104661936 - - - -

v testvol01 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol01-01 testvol01 ENABLED 100663296 - ACTIVE - -
sd test01disk00-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk01-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk02-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk03-03 testvol01-01 ENABLED 25165824 0 - - -
v testvol02 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol02-01 testvol02 ENABLED 100663296 - ACTIVE - -
sd test01disk00-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk01-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk02-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk03-02 testvol02-01 ENABLED 25165824 0 - - -

v testvol03 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol03-01 testvol03 ENABLED 100663296 - ACTIVE - -
sd test01disk00-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk01-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk02-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk03-01 testvol03-01 ENABLED 25165824 0 - - -
[root@ebctest211 tmp]#

A filesystem can now be created on the volumes:
[root@ebctest211 tmp]# mkfs -t vxfs /dev/vx/rdsk/test01dg/testvol01
version 7 layout
100663296 sectors, 50331648 blocks of size 1024, log size 65536 blocks
largefiles supported
[root@ebctest211 tmp]# mkfs -t vxfs /dev/vx/rdsk/test01dg/testvol02
version 7 layout
100663296 sectors, 50331648 blocks of size 1024, log size 65536 blocks
largefiles supported
[root@ebctest211 tmp]# mkfs -t vxfs /dev/vx/rdsk/test01dg/testvol03
version 7 layout
100663296 sectors, 50331648 blocks of size 1024, log size 65536 blocks
largefiles supported
[root@ebctest211 tmp]#
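
Once the filesystems have been created they can be mounted as usual; the mount point below is
hypothetical and should be replaced with the paths used in production:

mkdir -p /testvol01
mount -t vxfs /dev/vx/dsk/test01dg/testvol01 /testvol01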
Under some circumstances, VxVM might leave the volumes in an indeterminate state. If this is the
case, force the underlying plexes offline, bring them back online, and set them to the clean state
prior to starting the volumes. The process for this is shown below:
[root@ebctest211 tmp]#
[root@ebctest211 tmp]# vxmend -g test01dg -o force off testvol01-01
[root@ebctest211 tmp]# vxmend -g test01dg -o force off testvol02-01
[root@ebctest211 tmp]# vxmend -g test01dg -o force off testvol03-01
[root@ebctest211 tmp]#
[root@ebctest211 tmp]# vxmend -g test01dg on testvol03-01
[root@ebctest211 tmp]# vxmend -g test01dg on testvol02-01
[root@ebctest211 tmp]# vxmend -g test01dg on testvol01-01
[root@ebctest211 tmp]#
[root@ebctest211 tmp]# vxmend -g test01dg fix clean testvol03-01
[root@ebctest211 tmp]# vxmend -g test01dg fix clean testvol02-01
[root@ebctest211 tmp]# vxmend -g test01dg fix clean testvol01-01
[root@ebctest211 tmp]#
[root@ebctest211 tmp]# vxvol -g test01dg start testvol01
[root@ebctest211 tmp]# vxvol -g test01dg start testvol02
[root@ebctest211 tmp]# vxvol -g test01dg start testvol03
[root@ebctest211 tmp]# vxprint -g test01dg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg test01dg test01dg - - - - - -

dm test01disk00 fas30700_1 - 104661936 - - - -
dm test01disk01 fas30700_2 - 104661936 - - - -
dm test01disk02 fas30700_3 - 104661936 - - - -
dm test01disk03 fas30700_4 - 104661936 - - - -

v testvol01 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol01-01 testvol01 ENABLED 100663296 - ACTIVE - -
sd test01disk00-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk01-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk02-03 testvol01-01 ENABLED 25165824 0 - - -
sd test01disk03-03 testvol01-01 ENABLED 25165824 0 - - -

v testvol02 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol02-01 testvol02 ENABLED 100663296 - ACTIVE - -
sd test01disk00-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk01-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk02-02 testvol02-01 ENABLED 25165824 0 - - -
sd test01disk03-02 testvol02-01 ENABLED 25165824 0 - - -

v testvol03 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol03-01 testvol03 ENABLED 100663296 - ACTIVE - -
sd test01disk00-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk01-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk02-01 testvol03-01 ENABLED 25165824 0 - - -
sd test01disk03-01 testvol03-01 ENABLED 25165824 0 - - -
[root@ebctest211 tmp]#

Creation of VxVM volumes with a different underlying storage layout
In this example, we are going to recover the initial storage configuration onto three FAS3070 LUNs.
This has the side effect of not preserving the underlying storage layout.
If the customer wishes, the volumes can be laid out as appropriate using the options of the vxassist
utility; a brief illustration follows. For further information on the vxassist utility, please refer to the
appropriate system documentation.
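
For instance, given sufficient free space in the disk group, a mirrored or a simple concatenated layout
could be chosen instead of a stripe; each of the lines below shows an alternative way a single volume
could be created, and neither is used in the remainder of this example:

vxassist -g test01dg make testvol01 100663296 layout=mirror nmirror=2
vxassist -g test01dg make testvol01 100663296 layout=concat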
The VxVM configuration at the production site is shown below:
[root@ebctest211 tmp]# vxprint -g test01dg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg test01dg test01dg - - - - - -

dm emc_clariion0_44 emc_clariion0_44 - 104661936 - - - -
dm emc_clariion0_45 emc_clariion0_45 - 104661936 - - - -
dm emc_clariion0_46 emc_clariion0_46 - 104661936 - - - -
dm emc_clariion0_47 emc_clariion0_47 - 104661936 - - - -

v testvol01 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol01-01 testvol01 ENABLED 100663296 - ACTIVE - -
sd emc_clariion0_44-01 testvol01-01 ENABLED 25165824 0 - - -
sd emc_clariion0_45-01 testvol01-01 ENABLED 25165824 0 - - -
sd emc_clariion0_46-01 testvol01-01 ENABLED 25165824 0 - - -
sd emc_clariion0_47-01 testvol01-01 ENABLED 25165824 0 - - -

v testvol02 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol02-01 testvol02 ENABLED 100663296 - ACTIVE - -
sd emc_clariion0_44-02 testvol02-01 ENABLED 25165824 0 - - -
sd emc_clariion0_45-02 testvol02-01 ENABLED 25165824 0 - - -
sd emc_clariion0_46-02 testvol02-01 ENABLED 25165824 0 - - -
sd emc_clariion0_47-02 testvol02-01 ENABLED 25165824 0 - - -

v testvol03 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol03-01 testvol03 ENABLED 100663296 - ACTIVE - -
sd emc_clariion0_44-03 testvol03-01 ENABLED 25165824 0 - - -
sd emc_clariion0_45-03 testvol03-01 ENABLED 25165824 0 - - -
sd emc_clariion0_46-03 testvol03-01 ENABLED 25165824 0 - - -
sd emc_clariion0_47-03 testvol03-01 ENABLED 25165824 0 - - -
[root@ebctest211 tmp]#

At the DR site, a disk group named test01dg has been created using three LUNs.
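As a sketch, a group of this form could be created with a vxdg command similar to the following
(device names as they appear in the output below):

vxdg init test01dg fas30700_1 fas30700_2 fas30700_3

The output of vxprint -g test01dg shows: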
[root@ebctest211 tmp]# vxprint -g test01dg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg test01dg test01dg - - - - - -

dm fas30700_1 fas30700_1 - 104661936 - - - -
dm fas30700_2 fas30700_2 - 104661936 - - - -
dm fas30700_3 fas30700_3 - 104661936 - - - -
[root@ebctest211 tmp]#

Identify the vxconfigbackup directory that you wish to recover the information from. For
this example, we are going to use test01dg.1298225796.17.ebctest211.ebc.veritas.com. Extract the
volume name and size information from the cfgrec file using egrep. The initial section of the output
contains information about subdisk layouts and can be discarded. The output is shown below; the
lines of interest are the vol and len= pairs near the end.
[root@ebctest211 test01dg.1298225796.17.ebctest211.ebc.veritas.com]# egrep "^vol|len="
*.cfgrec
pub_len=104661936
[.......]
contig_len=100663296
vol testvol01
len=100663296
log_len=0
logmap_len=0
vol testvol02
len=100663296
log_len=0
logmap_len=0
vol testvol03
len=100663296
log_len=0
logmap_len=0
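
For a disk group with many volumes, pairing each volume name with its length by hand becomes
error-prone. The one-liner below is a minimal sketch of how the same extraction could be scripted to
print candidate vxassist commands for review; it assumes the vol/len record ordering shown above,
the layout options are only an example, and it prints the commands rather than running them:

awk '/^vol /{v=$2} /^len=/{if (v!="") {split($0,a,"="); printf "vxassist -g test01dg make %s %s layout=stripe ncol=3\n", v, a[2]; v=""}}' *.cfgrec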

From the egrep output, we can determine that this disk group had three volumes, each 100663296
blocks in length. Use the vxassist utility to create volumes with the appropriate layout. In this case,
because the DR site has only three LUNs presented, we cannot duplicate the 4-way stripe. What can
be done, though, is to create a 3-way stripe that utilizes all of the spindles. The process for doing this
looks like:
[root@ebctest211 /]# vxassist -g test01dg make testvol01 100663296 layout=stripe ncol=3
[root@ebctest211 /]# vxassist -g test01dg make testvol02 100663296 layout=stripe ncol=3
[root@ebctest211 /]# vxassist -g test01dg make testvol03 100663296 layout=stripe ncol=3
[root@ebctest211 /]# vxprint -g test01dg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg test01dg test01dg - - - - - -

dm fas30700_1 fas30700_1 - 104661936 - - - -
dm fas30700_2 fas30700_2 - 104661936 - - - -
dm fas30700_3 fas30700_3 - 104661936 - - - -

v testvol01 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol01-01 testvol01 ENABLED 100663296 - ACTIVE - -
sd fas30700_1-01 testvol01-01 ENABLED 33554432 0 - - -
sd fas30700_2-01 testvol01-01 ENABLED 33554432 0 - - -
sd fas30700_3-01 testvol01-01 ENABLED 33554432 0 - - -

v testvol02 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol02-01 testvol02 ENABLED 100663296 - ACTIVE - -
sd fas30700_1-02 testvol02-01 ENABLED 33554432 0 - - -
sd fas30700_2-02 testvol02-01 ENABLED 33554432 0 - - -
sd fas30700_3-02 testvol02-01 ENABLED 33554432 0 - - -

v testvol03 fsgen ENABLED 100663296 - ACTIVE - -
pl testvol03-01 testvol03 ENABLED 100663296 - ACTIVE - -
sd fas30700_1-03 testvol03-01 ENABLED 33554432 0 - - -
sd fas30700_2-03 testvol03-01 ENABLED 33554432 0 - - -
sd fas30700_3-03 testvol03-01 ENABLED 33554432 0 - - -
[root@ebctest211 /]#

From the above output, it can be observed that vxassist created the volumes and left them in an
ENABLED state. The volumes are now ready for use, either as raw volumes or with a filesystem
created on them:
[root@ebctest211 /]# mkfs -t vxfs /dev/vx/rdsk/test01dg/testvol01
version 7 layout
100663296 sectors, 50331648 blocks of size 1024, log size 65536 blocks
largefiles supported
[root@ebctest211 /]# mkfs -t vxfs /dev/vx/rdsk/test01dg/testvol02
version 7 layout
100663296 sectors, 50331648 blocks of size 1024, log size 65536 blocks
largefiles supported
[root@ebctest211 /]# mkfs -t vxfs /dev/vx/rdsk/test01dg/testvol03
version 7 layout
100663296 sectors, 50331648 blocks of size 1024, log size 65536 blocks
largefiles supported
