Reference Architecture
Contents
Executive summary ..... 3
Introduction ..... 3
Solution overview ..... 3
Solution components ..... 5
  Hardware ..... 5
  Software ..... 6
  Application software ..... 6
  SAS Mixed Analytics Workload description ..... 6
Best practices and configuration guidance for the solution ..... 7
Capacity and sizing ..... 7
  SAS Mixed Analytics Workload results - 200 drive configuration ..... 7
  SAS Mixed Analytics Workload results - 400 drive configuration ..... 8
Summary ..... 11
  Implementing a proof-of-concept ..... 12
Appendix A: Bill of materials ..... 12
Appendix B: Script for creating the volume group ..... 15
Appendix C: Script for creating the logical volumes ..... 15
Appendix D: Script for creating the file systems on the logical volumes ..... 16
Appendix E: Script for mounting the file systems the first time ..... 17
Appendix F: /etc/fstab script for remounting the file systems after a system boot ..... 18
Resources and additional links ..... 19
Executive summary
We live and work in an era of extreme business speed, with heightened customer, partner, and employee expectations. To compete and grow,
businesses demand more innovation, speed, and flexibility from their data centers. In today's complex business environment, enterprises need to
use their data effectively and intelligently to stay ahead of their competition and serve their customers well.
Enterprises deploy Business Intelligence and Analytical solutions to gain a competitive edge and make real-time decisions. These solutions need
compute and storage platforms that are highly scalable and reliable: the large amounts of data that must be processed to make these complex,
competitive decisions are compute intensive to analyze and consume large amounts of storage.
Additionally, SAS® 9.4 customers require a hardware and software configuration that can deliver data analysis results quickly and accurately. As a
result, enterprise customers demand reliable, fast storage that can scale to meet their business analytics requirements. This report details a
proof point for SAS 9.4 that demonstrates the performance available with the HPE 3PAR StoreServ 8000 mid-range offering. Because of the
unique strengths the HPE 3PAR 8400 array brings to the SAS workload, spinning media can deliver the same performance characteristics as an
all-flash option at roughly half the price.
This Reference Architecture highlights the key findings from running SAS 9.4 using the Mixed Analytics Workload suite running on two HPE
Synergy 480 Gen10 Compute Modules, one HPE Synergy 660 Gen10 Compute Module, and HPE 3PAR StoreServ 8400 Storage. This test
showcased the scalability of SAS 9.4 on the HPE Synergy Compute Modules and HPE 3PAR StoreServ 8400 Storage.
The HPE 3PAR StoreServ 8400 array was chosen because of its ability to scale from a small, departmental SAS storage option up to an
enterprise-level, highly performant array. Scaling can be done in small increments, allowing a waste-nothing growth path: a customer that starts
small does not have to discard or replace components from the smaller configuration as it grows into the full one.
HPE Synergy compute modules were chosen because they exhibit the ability to deliver greater performance than competing platforms using the
same industry standard Intel® Xeon® Scalable Family of processors. Because SAS 9.4 is licensed by the core, the ability to deliver greater
performance and throughput with the same processor effectively reduces the number of cores needed to run a given SAS 9.4 workload.
SAS has certified that the test suite was run correctly, and that the test workload represents the needs of a typical user.
Target audience: The target audience for this performance report is the IT community studying solutions for their environments. Business users
and IT professionals who are interested in implementing a SAS solution may find this report useful for a sample SAS configuration and a
demonstration of the scalability of the HPE Synergy Compute Modules and HPE 3PAR StoreServ storage. IT professionals who either already have a SAS
implementation on HPE 3PAR storage or are considering one may find suggestions for improving the performance of their systems, as well as a
proof point for running SAS with HPE 3PAR StoreServ 8000 Storage.
Document purpose: The purpose of this document is to describe a Reference Architecture, highlighting recognizable benefits to technical
audiences.
Introduction
SAS creates test kits to provide partners with representative workloads for performance testing and tuning. The SAS Mixed Analytics Workload
suite scenario uses real-world data volumes and structures of a typical SAS customer. The scenario simulates the types of jobs received from
various SAS clients such as display manager, batch, SAS Data Integration Studio, SAS Enterprise Miner™, SAS Add-In for Microsoft® Office, SAS
Enterprise Guide and SAS® Studio. Many customer environments have large numbers of ad hoc SAS users or jobs that utilize analytics in support
of their company’s day to day business activities.
The scenario consisted of typical analytic jobs designed to replicate a light to heavy workload. These jobs were launched via a script which
included time delays to simulate scheduled jobs and interactive users launching at different times.
The purpose of this particular set of tests was to characterize the HPE 3PAR StoreServ 8400 so that users of that storage will understand how to
build and configure the storage and also get a feeling for the type of performance the HPE 3PAR 8400 is able to deliver into SAS 9.4
environments.
Solution overview
The HPE 3PAR StoreServ 8400 is an enterprise class, mid-range storage array that offers SAS 9.4 environments potent performance that will
allow the completion of SAS jobs in minutes and hours versus days or weeks.
In SAS environments, Hewlett Packard Enterprise can deliver SSD level performance in an all spinning media package which reduces the amount
of investment by approximately half.
Hewlett Packard Enterprise has seen, over the years, that when a customer has a SAS 9.4 performance issue, that issue is typically resolved by
improving storage throughput. The point of this series of tests was to focus on the storage aspect and in particular the storage throughput facet
of an environment, so that customers know the type of performance an HPE 3PAR StoreServ 8400 can deliver.
This paper will demonstrate configurations for an HPE 3PAR StoreServ 8400 optimized for the best throughput, as well as a start small, build out
scenario on the same array.
A companion paper focusing on the HPE Synergy compute aspect of this series of tests can be found at:
http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=a00044785enw.
The following, Figure 1, is a graphical representation of the environment that was tested.
Figure 1. Graphical depiction of the hardware environment on which the SAS 9.4 testing was performed
Solution components
Hardware
HPE 3PAR 8400
As mentioned earlier, the solution comprised a 4-node HPE 3PAR StoreServ 8400. We utilized only the 8 X 16Gb Fibre Channel ports
that come with the base array.
The disk drives are attached to the 4 node controllers by 12Gb SAS channels. Each controller comes with 2 X 12Gb SAS drive adapters and each
drive adapter has dual ports so that each node in the node pair has access to all storage shelves. This is done so that, should one controller fail,
the array still has connectivity to all drives and can keep reading and writing data. However, should a node failure occur, all write caching is
suspended and that node pair runs in a performance-degraded state until the failed node is repaired or replaced.
The HPE 3PAR StoreServ 8400 had 18 expansion enclosures in addition to the controller pairs' storage enclosures, for a total of 20 shelves of 24
drive slots each. We populated each shelf with 20 X 600GB 15K RPM drives, for a total of 400 drives, and created virtual volumes using the RAID 5
CPG in a 3+1 stripe set. Hewlett Packard Enterprise found that RAID 5, 3+1 was the best performing RAID configuration when running the SAS 9.4 workload. It
should be noted that HPE 3PAR discourages the use of RAID 5, because a second disk failure within a RAID group before data reconstruction
completes would result in data loss. This risk is mitigated to a degree by the use of smaller 600GB 15K RPM drives, which reduce the time it takes
to reconstruct data after a disk failure. Customers may want to consider the use of RAID 1 for their
persistent SAS data space.
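The capacity trade-off between the two layouts can be sketched with simple arithmetic. The helper below is illustrative only; real HPE 3PAR CPGs also reserve space for sparing and metadata, which this ignores:

```python
# Usable capacity of N 600GB drives under the two RAID layouts discussed.
# Illustrative only: ignores 3PAR sparing and metadata overhead.
DRIVE_GB = 600

def usable_tb(n_drives, data_fraction):
    """Raw capacity times the fraction available for data, in TB."""
    return n_drives * DRIVE_GB * data_fraction / 1000

# RAID 5 in a 3+1 stripe set keeps 3 of every 4 drives' worth of data;
# RAID 1 mirroring keeps 1 of every 2.
print(usable_tb(400, 3 / 4))  # 180.0 TB usable with RAID 5, 3+1
print(usable_tb(400, 1 / 2))  # 120.0 TB usable with RAID 1
```

For the 400 drive configuration tested, the RAID 1 option trades roughly a third of the usable capacity for the reduced rebuild exposure noted above.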
We ran the initial set of tests using all 400 of the drives. Then to demonstrate the scaling properties of the HPE 3PAR StoreServ 8400, we
eliminated 10 of the expansion shelves and 200 of the drives and ran the same set of tests with only 200 drives configured. A customer may
deploy an HPE 3PAR StoreServ 8400 starting with fewer than 400 drives and then, over time, scale the solution up to its full throughput
capability. If added storage capacity is required, additional drives and expansion shelves may be added beyond the 400 drive configuration that
was tested; however, due to array limits, throughput will remain approximately the same as with the 400 drive configuration.
HPE Synergy
The compute portion of this testing was handled by three HPE Synergy Compute Modules. The configurations of the Compute Modules were
varied purposely, to test scaling of the workload and to demonstrate core-versus-clock differences in throughput.
SAS 9.4 is influenced heavily by the clock speed of a processor. For example, comparing processors with similar architecture, such as the Intel
Xeon Scalable family processors used in HPE Synergy Compute Modules and assuming no other bottlenecks, a SAS job that runs on a processor
with a clock of 2GHz will complete twice as fast as the same SAS job that runs on a processor with a clock speed of 1GHz. However, it may be
that a group of SAS jobs will, in total, complete faster on a processor with a slower clock speed but having more cores than the same jobs will
complete on a processor with a faster clock but fewer cores.
Within the constraints of a customer’s SAS license, enterprises need to determine which model they wish to pursue. Is it most important for single
jobs to complete more quickly, or is it more important for a group of jobs to complete more quickly?
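The trade-off above can be made concrete with a rough model. As in the text, this assumes similar processor architecture and no other bottlenecks, which are idealizations:

```python
# Idealized model: single-job time scales inversely with clock speed,
# while saturated-batch throughput scales with cores * clock.
def single_job_time(base_minutes, clock_ghz):
    """Minutes to finish one job, normalized to a 1GHz baseline."""
    return base_minutes / clock_ghz

def batch_rate(cores, clock_ghz):
    """Relative throughput for a fully parallel batch of jobs."""
    return cores * clock_ghz

# A 2GHz part halves the single-job time of a 1GHz part...
print(single_job_time(10, 2.0), single_job_time(10, 1.0))  # 5.0 10.0
# ...but a 28-core 2.1GHz part out-runs an 8-core 3.5GHz part on a batch.
print(batch_rate(28, 2.1) > batch_rate(8, 3.5))  # True
```

The core counts and clock speeds here are hypothetical examples, not the tested Compute Module configurations.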
A companion paper has been published that will focus on the throughput of each individual HPE Synergy Compute Module. The focus of this
paper is on storage throughput and because of that, Hewlett Packard Enterprise ran the tests utilizing multiple HPE Synergy Compute Modules
to stress the storage. This means the throughput of the Compute Modules will trend lower than what would be possible if only one
Compute Module were running the SAS Mixed Analytics Workload at a time.
The three Compute Modules used had the following configurations:
Software
• HPE 3PAR OS 3.2.2 MU4
• Red Hat Enterprise Linux® 7.4
Application software
• Base SAS v9.4 M4
• SAS Mixed Analytics Workload
Important
This test scenario can provide an outline for comparing hardware and/or software products; it is not intended to be used as a sizing guideline. In
the real world, server performance is highly dependent upon the application design and workload profiling.
As with any laboratory testing, the performance metrics quoted in this paper are idealized. In a production environment, these metrics may be
impacted by a variety of factors.
Hewlett Packard Enterprise has found, through testing, that it is optimal to have multiple virtual volumes (LUNs), typically 8, exported from the
array to the Compute Modules. On the Compute Modules those LUNs are then aggregated into a volume group, from which logical volumes are
created for each of the storage areas needed for SAS.
For each of the 30-user runs, Hewlett Packard Enterprise created a logical volume for SAS Data, SAS Work and UTILLOC. Each logical volume
was striped across all 8 LUNs with a stripe size of 2M. The Logical Volume Manager (LVM) read ahead was then also set to 2M, via the
lvchange command.
Hewlett Packard Enterprise has also found that the best performing file system for SAS is XFS.
Scripts for the creation of the volume group, the logical volumes and the file systems can be found in appendices B through F.
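As a condensed sketch of those steps, the commands below show the shape of the layout; the multipath device names and the 100G size are placeholders, not the values used in the tested configuration (the appendices contain the actual scripts):

```shell
#!/bin/bash
# Sketch of the LVM layout described above. The multipath device names
# and the 100G size are placeholders; adjust for your environment.
vgcreate vgsas /dev/mapper/mpath{a..h}        # aggregate the 8 exported LUNs
lvcreate -n asuite1 -L 100G -i 8 -I 2m vgsas  # stripe across all 8 LUNs, 2M stripe size
lvchange -r 2m /dev/vgsas/asuite1             # match LVM read-ahead to the stripe size
mkfs.xfs /dev/vgsas/asuite1                   # XFS performed best for SAS in HPE's testing
```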
The following, Figure 2, is the average throughput for each of the recorded interval periods, along with the total number of jobs for each of the scenarios.
The legend gives the number of 30-user runs executed on each Compute Module. A legend of 1 X 1 X 2 means that one 30-user run was
executed on the first HPE Synergy 480 Gen10 Compute Module, one on the second HPE Synergy 480 Gen10 Compute Module, and two on the
HPE Synergy 660 Gen10 Compute Module. Each 30-user run consisted of 102 SAS jobs, so for the 1 X 1 X 2 stack, 408 total SAS jobs were
completed. The largest number of SAS jobs occurred
with the 4 X 4 X 8 run and during that run there were 1632 total SAS jobs run across all environments, with their IO demand being satisfied by
the HPE 3PAR StoreServ 8400.
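The job counts quoted above follow directly from the stack notation:

```python
JOBS_PER_RUN = 102  # each 30-user run comprises 102 SAS jobs

def total_jobs(runs_per_module):
    """Total SAS jobs for a stack such as (1, 1, 2): runs per Compute Module."""
    return sum(runs_per_module) * JOBS_PER_RUN

print(total_jobs((1, 1, 2)))  # 408 jobs for the 1 X 1 X 2 stack
print(total_jobs((4, 4, 8)))  # 1632 jobs for the 4 X 4 X 8 stack
```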
[Figure 2 chart omitted: average throughput in MB per second (left axis) and number of SAS jobs (right axis) for each scenario.]
Figure 2. IO throughput recorded for the best 50-second, 2-minute, 10-minute, and 20-minute intervals, along with the total number of SAS jobs
What is most important to users of SAS 9.4 software is the amount of time that it takes for a series of SAS jobs to complete in total and also the
time it takes for a single SAS job to complete. The following, Figure 3, shows the times for all jobs, along with the average time it took a single
SAS job to run during the scenario.
[Figure 3 chart omitted: wall time to complete all jobs (left axis) and average time per job (right axis) versus total number of SAS jobs run, from 408 to 1632.]
Figure 3. Average time for each 30-user scenario to complete, along with the average time taken per job
The following, Figure 4, is the average throughput for each of the recorded interval periods using the 400 drive configuration.
[Figure 4 chart omitted: average throughput in MB per second and number of SAS jobs for the 1x1x2 through 4x4x8 scenarios.]
Figure 4. IO throughput recorded for the best 50-second, 2-minute, 10-minute, and 20-minute intervals, along with the total number of SAS jobs
Finally, Figure 5 is the time to complete graph for the 400 drive configuration.
[Figure 5 chart omitted: wall time to complete all jobs (left axis) and time per job (right axis) versus total number of SAS jobs run, from 408 to 1632.]
Figure 5. Average time for each 30-user scenario to complete, along with the average time taken per job
Today SAS recommends 100MB/sec/core to 150MB/sec/core of IO throughput on modern computers in order to keep the processors busy.
What we need to determine then is the number of disk drives required to enable the HPE 3PAR StoreServ 8400 to deliver the required
throughput.
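That determination can be sketched as simple sizing arithmetic. The 25 MB/sec per-facing-drive figure and the 40-core deployment below are assumptions for illustration, not measured results from these tests:

```python
import math

def drives_needed(cores, mbps_per_core, mbps_per_facing_drive,
                  raid_data_fraction=3 / 4):
    """Return (facing drives, installed drives). The default data fraction
    of 3/4 matches the RAID 5, 3+1 layout used in these tests."""
    required_mbps = cores * mbps_per_core
    facing = math.ceil(required_mbps / mbps_per_facing_drive)
    installed = math.ceil(facing / raid_data_fraction)
    return facing, installed

# Hypothetical 40-core deployment at the 125 MB/sec/core midpoint of the
# SAS guidance, assuming 25 MB/sec per facing drive:
print(drives_needed(40, 125, 25))  # (200, 267)
```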
The following, Figure 6, shows the amount of throughput on a per drive basis in the 200 drive configuration, displaying both the amount of IO
per facing drive and the amount of IO per installed drive. Since HPE configured the storage in a RAID 5, 3+1 configuration, and we have 200
total drives installed, the number of facing drives is 150, while the total number of drives is 200.
The 200 drive configuration had a total of 10 storage shelves installed in the HPE 3PAR StoreServ 8400, which left space for an additional 40
drives in this configuration.
[Figure 6 chart omitted: MB/sec per facing drive and per installed drive for the 1x1x2 through 4x4x8 scenarios.]
Figure 6. IO throughput per installed and facing drive in a 200 drive configuration
The following, Figure 7, is the same type of graph, except this time the graph is based on the 400 drive configuration.
The 400 drive configuration had a total of 20 storage shelves installed in the HPE 3PAR StoreServ 8400, which left space for an additional 80
drives in this configuration.
[Figure 7 chart omitted: MB/sec per facing drive and per installed drive for the 1x1x2 through 4x4x8 scenarios.]
Figure 7. IO throughput per installed and facing drive in a 400 drive configuration
From a throughput perspective, Hewlett Packard Enterprise recommends placing 20 drives per storage shelf to optimize throughput on a per
drive basis. Additional drives may be placed in the storage shelves for additional capacity, but HPE does not expect additional drives, beyond the
20 per shelf, to result in any additional throughput per shelf.
The HPE 3PAR StoreServ 8400 has a limitation of 22 drive shelves per array. Adding shelves beyond the tested 20 shelves will add capacity, but
HPE does not believe the additional shelves with additional drives will add significantly to the total throughput of the array.
Summary
Enterprises continually ask information technology to do more while spending less money. Traditional SAS 9.4 workloads
are very IO intensive, requiring between 100 and 150MB/sec of IO throughput per core to keep the processors consistently busy.
These two demands are at odds with each other, but Hewlett Packard Enterprise helps resolve them by blending the
performance of flash storage with the cost of spinning storage, lowering the overall expenditure associated with acquiring high performance
storage.
As reported above, the HPE 3PAR 8400 array can start small and grow into a true SAS 9.4 IO powerhouse. Even at full deployment, the use of
spinning media reduces the cost of the array to approximately half that of an all flash solution without any sacrifice in performance in a SAS 9.4
environment.
In the appendices, HPE also shows exactly how to provision the storage on the HPE 3PAR array and how to assemble it on the server side
into a well-performing system. The advantage is twofold:
1. Enterprises don't have to experiment to determine the combination of configuration settings that gleans the best performance out of the
total solution.
2. Enterprises can be confident that the size of the solution will fit the requirements using the sizing information contained in this document.
This allows those enterprises to tailor a solution for their unique needs rather than settle for a one-size-fits-all amalgamation.
Implementing a proof-of-concept
As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches as
closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained.
For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.
Note
Part numbers are valid at the time of publication/testing and subject to change. The bill of materials does not include complete support options
or other rack and power requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales
Representative for more details.
Table 1a. Bill of materials for the HPE 3PAR StoreServ 8400 - 400 disk drive configuration
Qty Part number Description
Table 1b. Bill of materials for the HPE 3PAR StoreServ 8400 - 200 disk drive configuration
Qty Part number Description
Table 1c. Bill of materials for the HPE Synergy Frame and Compute Modules
Qty Part number Description
Appendix D: Script for creating the file systems on the logical volumes
The following is the script that was used to create the file systems on the logical volumes on the HPE Synergy 660 Gen10 Compute Module.
#!/bin/bash
mkfs.xfs /dev/vgsas/sas
mkfs.xfs /dev/vgsas/asuite1
mkfs.xfs /dev/vgsas/asuite2
mkfs.xfs /dev/vgsas/asuite3
mkfs.xfs /dev/vgsas/asuite4
mkfs.xfs /dev/vgsas/asuite5
mkfs.xfs /dev/vgsas/asuite6
mkfs.xfs /dev/vgsas/asuite7
mkfs.xfs /dev/vgsas/asuite8
mkfs.xfs /dev/vgsas/saswork1
mkfs.xfs /dev/vgsas/utilloc1
mkfs.xfs /dev/vgsas/saswork2
mkfs.xfs /dev/vgsas/utilloc2
mkfs.xfs /dev/vgsas/saswork3
mkfs.xfs /dev/vgsas/utilloc3
mkfs.xfs /dev/vgsas/saswork4
mkfs.xfs /dev/vgsas/utilloc4
mkfs.xfs /dev/vgsas/saswork5
mkfs.xfs /dev/vgsas/utilloc5
mkfs.xfs /dev/vgsas/saswork6
mkfs.xfs /dev/vgsas/utilloc6
mkfs.xfs /dev/vgsas/saswork7
mkfs.xfs /dev/vgsas/utilloc7
mkfs.xfs /dev/vgsas/saswork8
mkfs.xfs /dev/vgsas/utilloc8
Appendix E: Script for mounting the file systems the first time
The following is the script that was used to mount the file systems for the first time on the HPE Synergy 660 Gen10 Compute Module.
#!/bin/bash
mkdir /sasdata
mkdir /sasdata/asuite1
mkdir /sasdata/asuite2
mkdir /sasdata/asuite3
mkdir /sasdata/asuite4
mkdir /sasdata/asuite5
mkdir /sasdata/asuite6
mkdir /sasdata/asuite7
mkdir /sasdata/asuite8
# Mount each SAS Data (asuiteN) file system before creating the nested
# saswork and utilloc mount points, so those mount points exist on the
# asuiteN file systems rather than on the root file system.
mount /dev/vgsas/asuite1 /sasdata/asuite1
mkdir /sasdata/asuite1/saswork
mount /dev/vgsas/saswork1 /sasdata/asuite1/saswork
mkdir /sasdata/asuite1/saswork/utilloc
mount /dev/vgsas/utilloc1 /sasdata/asuite1/saswork/utilloc
mount /dev/vgsas/asuite2 /sasdata/asuite2
mkdir /sasdata/asuite2/saswork
mount /dev/vgsas/saswork2 /sasdata/asuite2/saswork
mkdir /sasdata/asuite2/saswork/utilloc
mount /dev/vgsas/utilloc2 /sasdata/asuite2/saswork/utilloc
mount /dev/vgsas/asuite3 /sasdata/asuite3
mkdir /sasdata/asuite3/saswork
mount /dev/vgsas/saswork3 /sasdata/asuite3/saswork
mkdir /sasdata/asuite3/saswork/utilloc
mount /dev/vgsas/utilloc3 /sasdata/asuite3/saswork/utilloc
mount /dev/vgsas/asuite4 /sasdata/asuite4
mkdir /sasdata/asuite4/saswork
mount /dev/vgsas/saswork4 /sasdata/asuite4/saswork
mkdir /sasdata/asuite4/saswork/utilloc
mount /dev/vgsas/utilloc4 /sasdata/asuite4/saswork/utilloc
mount /dev/vgsas/asuite5 /sasdata/asuite5
mkdir /sasdata/asuite5/saswork
mount /dev/vgsas/saswork5 /sasdata/asuite5/saswork
mkdir /sasdata/asuite5/saswork/utilloc
mount /dev/vgsas/utilloc5 /sasdata/asuite5/saswork/utilloc
mount /dev/vgsas/asuite6 /sasdata/asuite6
mkdir /sasdata/asuite6/saswork
mount /dev/vgsas/saswork6 /sasdata/asuite6/saswork
mkdir /sasdata/asuite6/saswork/utilloc
mount /dev/vgsas/utilloc6 /sasdata/asuite6/saswork/utilloc
mount /dev/vgsas/asuite7 /sasdata/asuite7
mkdir /sasdata/asuite7/saswork
mount /dev/vgsas/saswork7 /sasdata/asuite7/saswork
mkdir /sasdata/asuite7/saswork/utilloc
mount /dev/vgsas/utilloc7 /sasdata/asuite7/saswork/utilloc
mount /dev/vgsas/asuite8 /sasdata/asuite8
mkdir /sasdata/asuite8/saswork
mount /dev/vgsas/saswork8 /sasdata/asuite8/saswork
mkdir /sasdata/asuite8/saswork/utilloc
mount /dev/vgsas/utilloc8 /sasdata/asuite8/saswork/utilloc
Appendix F: /etc/fstab script for remounting the file systems after a system boot
The following is what the /etc/fstab file looks like on the HPE Synergy 660 Gen10 Compute Module after all of the logical volumes, file systems,
and mount points have been created. This allows the logical volumes to be mounted automatically after the system is rebooted.
#
# /etc/fstab
# Created by anaconda on Tue Feb 6 16:06:57 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=2970cfe0-e126-4952-9314-747212606f80 /boot xfs defaults 0 0
UUID=6F5C-E5C3 /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/mapper/rhel-home /home xfs defaults 0 0
/dev/mapper/rhel-opt /opt xfs defaults 0 0
/dev/mapper/rhel-swap swap swap defaults 0 0
/dev/mapper/vgsas-asuite1 /sasdata/asuite1 xfs defaults 0 0
/dev/mapper/vgsas-asuite2 /sasdata/asuite2 xfs defaults 0 0
/dev/mapper/vgsas-asuite3 /sasdata/asuite3 xfs defaults 0 0
/dev/mapper/vgsas-asuite4 /sasdata/asuite4 xfs defaults 0 0
/dev/mapper/vgsas-asuite5 /sasdata/asuite5 xfs defaults 0 0
/dev/mapper/vgsas-asuite6 /sasdata/asuite6 xfs defaults 0 0
/dev/mapper/vgsas-asuite7 /sasdata/asuite7 xfs defaults 0 0
/dev/mapper/vgsas-asuite8 /sasdata/asuite8 xfs defaults 0 0
/dev/mapper/vgsas-saswork1 /sasdata/asuite1/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork2 /sasdata/asuite2/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork3 /sasdata/asuite3/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork4 /sasdata/asuite4/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork5 /sasdata/asuite5/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork6 /sasdata/asuite6/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork7 /sasdata/asuite7/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork8 /sasdata/asuite8/saswork xfs defaults 0 0
/dev/mapper/vgsas-utilloc1 /sasdata/asuite1/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc2 /sasdata/asuite2/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc3 /sasdata/asuite3/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc4 /sasdata/asuite4/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc5 /sasdata/asuite5/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc6 /sasdata/asuite6/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc7 /sasdata/asuite7/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc8 /sasdata/asuite8/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-sas /opt/sas9.4 xfs defaults 0 0
© Copyright 2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice.
The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall
not be liable for technical or editorial errors or omissions contained herein.
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and
other countries. Microsoft is a registered trademark of Microsoft Corporation in the United States and/or other countries. Intel, Xeon, and
Intel Xeon, are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. Red Hat Enterprise Linux is a
registered trademark of Red Hat, Inc. in the United States and other countries.