
HPE Reference Architecture for

SAS® 9.4 on HPE 3PAR 8400 and


HPE Synergy
Up to 11GB/second of IO throughput with 480
concurrent users

Reference Architecture

Contents
Executive summary
Introduction
Solution overview
Solution components
  Hardware
  Software
  Application software
  SAS Mixed Analytics Workload description
Best practices and configuration guidance for the solution
Capacity and sizing
  SAS Mixed Analytics Workload results - 200 drive configuration
  SAS Mixed Analytics Workload results - 400 drive configuration
Summary
Implementing a proof-of-concept
Appendix A: Bill of materials
Appendix B: Script for creating the volume group
Appendix C: Script for creating the logical volumes
Appendix D: Script for creating the file systems on the logical volumes
Appendix E: Script for mounting the file systems the first time
Appendix F: /etc/fstab script for remounting the file systems after a system boot
Resources and additional links

Executive summary
We all live and work in a new era of extreme business speed with heightened customer, partner, and employee expectations. To better compete
and grow, businesses demand more innovation, speed, and flexibility from their data centers. In today’s complex business environment, enterprise
businesses need to utilize their data effectively and intelligently so they can stay ahead of their competition and serve their customers well.
Enterprises deploy Business Intelligence and Analytical solutions in order to gain a competitive edge and make real-time decisions. These
solutions need compute and storage platforms that are highly scalable and very reliable. Processing the large amounts of data behind these
complex decisions is compute intensive and consumes large amounts of storage.

Additionally, SAS® 9.4 customers require a hardware and software configuration that can deliver data analysis results quickly and accurately. As a
result, enterprise customers demand reliable and fast storage that can scale to meet their business analytics requirements. This report details a
proof point for SAS 9.4 that demonstrates the performance available with the HPE 3PAR StoreServ 8000 mid-range offering. Because of the
unique strengths the HPE 3PAR 8400 array brings to the SAS workload, we can utilize spinning media that delivers the same performance
characteristics to the environment at roughly half the price of an all-flash option.

This Reference Architecture highlights the key findings from running SAS 9.4 using the Mixed Analytics Workload suite running on two HPE
Synergy 480 Gen10 Compute Modules, one HPE Synergy 660 Gen10 Compute Module, and HPE 3PAR StoreServ 8400 Storage. This test
showcased the scalability of SAS 9.4 on the HPE Synergy Compute Modules and HPE 3PAR StoreServ 8400 Storage.

The HPE 3PAR StoreServ 8400 array was chosen because of its ability to scale from a small, departmental SAS storage option up to an
enterprise-level, highly performant array. Scaling can be done in small increments and wastes nothing: a customer that starts small does not
have to throw out or replace components that were useful in the smaller configuration as it grows into the total configuration.

HPE Synergy compute modules were chosen because they exhibit the ability to deliver greater performance than competing platforms using the
same industry standard Intel® Xeon® Scalable Family of processors. Because SAS 9.4 is licensed by the core, the ability to deliver greater
performance and throughput with the same processor effectively reduces the number of cores needed to run a given SAS 9.4 workload.

SAS has certified that the test suite was run correctly, and that the test workload represents the needs of a typical user.
Target audience: The target audience for this performance report is the IT community studying solutions for their environments. Business users
and IT professionals who are interested in implementing a SAS solution may find this report useful for a sample SAS configuration and a
demonstration of the scalability of the HPE Synergy Compute Modules and HPE 3PAR StoreServ storage. IT professionals who either already
have a SAS implementation on HPE 3PAR storage or are considering one may find suggestions for improving the performance of their systems,
as well as a proof point for running SAS with HPE 3PAR StoreServ 8000 Storage.
Document purpose: The purpose of this document is to describe a Reference Architecture, highlighting recognizable benefits to technical
audiences.

Introduction
SAS creates test kits to provide partners with representative workloads for performance testing and tuning. The SAS Mixed Analytics Workload
suite scenario uses real-world data volumes and structures of a typical SAS customer. The scenario simulates the types of jobs received from
various SAS clients such as display manager, batch, SAS Data Integration Studio, SAS Enterprise Miner™, SAS Add-In for Microsoft® Office, SAS
Enterprise Guide and SAS® Studio. Many customer environments have large numbers of ad hoc SAS users or jobs that utilize analytics in support
of their company's day-to-day business activities.

The scenario consisted of typical analytic jobs designed to replicate a light to heavy workload. These jobs were launched via a script which
included time delays to simulate scheduled jobs and interactive users launching at different times.

The purpose of this particular set of tests was to characterize the HPE 3PAR StoreServ 8400 so that users of that storage understand how to
build and configure it, and get a sense of the type of performance the HPE 3PAR 8400 is able to deliver in SAS 9.4
environments.

Solution overview
The HPE 3PAR StoreServ 8400 is an enterprise class, mid-range storage array that offers SAS 9.4 environments potent performance that will
allow the completion of SAS jobs in minutes and hours versus days or weeks.

In SAS environments, Hewlett Packard Enterprise can deliver SSD level performance in an all spinning media package which reduces the amount
of investment by approximately half.

Hewlett Packard Enterprise has seen, over the years, that when a customer has a SAS 9.4 performance issue, that issue is typically resolved by
improving storage throughput. The point of this series of tests was to focus on the storage aspect of an environment, and in particular storage
throughput, so that customers know the type of performance an HPE 3PAR StoreServ 8400 can deliver.

This paper demonstrates configurations for an HPE 3PAR StoreServ 8400 optimized for the best throughput, as well as a start-small, build-out
scenario on the same array.

A companion paper focusing on the HPE Synergy compute aspect of this series of tests can be found at
http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=a00044785enw.

The following, Figure 1, is a graphical representation of the environment that was tested.

Figure 1. Graphical depiction of the hardware environment on which the SAS 9.4 testing was performed

Solution components
Hardware
HPE 3PAR 8400
As mentioned before, the solution comprised a 4-node HPE 3PAR StoreServ 8400. We utilized only the 8 X 16Gb Fibre Channel ports
that come with the base array.

The disk drives are attached to the 4 node controllers by 12Gb SAS channels. Each controller comes with 2 X 12Gb SAS drive adapters, and each
drive adapter has dual ports so that each node in the node pair has access to all storage shelves. This is done so that, should one controller fail,
the array still has connectivity to all drives and can keep accessing and writing data. However, should a node failure occur, all write caching is
suspended and that node pair functions in a performance-degraded state until the failed node is repaired or replaced.

The HPE 3PAR StoreServ 8400 had 18 X expansion enclosures in addition to the controller pairs' storage enclosures, for a total of 20 X 24-drive-slot
shelves. We populated each shelf with 20 X 600GB 15K RPM drives. We then created virtual volumes using the RAID 5 CPG in a 3+1 stripe
set. Hewlett Packard Enterprise found that RAID 5, 3+1 was the best performing RAID configuration when running the SAS 9.4 workload. It
should be noted that HPE 3PAR discourages the use of RAID 5, because a second disk failure within a RAID group, occurring before
data reconstruction can complete, would result in data loss. This is mitigated to a degree by the use of smaller 600GB 15K RPM drives, which reduce
the amount of time it takes to reconstruct data in the event of a disk failure. Customers may want to consider the use of RAID 1 for their
persistent SAS data space.
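
For readers who want a concrete starting point, the following is a minimal sketch of the HPE 3PAR OS CLI steps involved. The CPG, virtual volume and host names, and the volume size, are illustrative assumptions rather than values taken from the tested array; the eight-LUN layout matches the best practice described later in this paper.

# RAID 5 CPG with a set size of 4 (a 3+1 stripe set), matching the tested layout
createcpg -t r5 -ssz 4 cpg_sas_r5

# Eight virtual volumes carved from that CPG - eight LUNs per host is the
# layout HPE found optimal in testing (names and sizes are assumptions)
createvv cpg_sas_r5 sasvv.0 2048g
createvv cpg_sas_r5 sasvv.1 2048g
# ... repeat for sasvv.2 through sasvv.7

# Export each virtual volume to the Compute Module's host definition
createvlun sasvv.0 0 synergy660
createvlun sasvv.1 1 synergy660
# ... repeat for the remaining volumes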

We ran the initial set of tests using all 400 of the drives. Then, to demonstrate the scaling properties of the HPE 3PAR StoreServ 8400, we
eliminated 10 of the expansion shelves and 200 of the drives and ran the same set of tests with only 200 drives configured. A customer may
deploy an HPE 3PAR StoreServ 8400 starting with fewer than 400 drives and then, over time, scale the solution up to the full throughput
capability. If added storage capacity is required, additional drives and expansion shelves may be added beyond the 400 drive configuration that
was tested; however, due to limitations of the array, throughput will remain approximately the same as with the 400 drive configuration.
HPE Synergy
The compute portion of this testing was handled by three HPE Synergy Compute Modules. The configurations of the Compute Modules were
varied purposely, to test scaling of the workload and also to demonstrate core-versus-clock differences in throughput.

SAS 9.4 is influenced heavily by the clock speed of a processor. For example, comparing processors with similar architecture, such as the Intel
Xeon Scalable family processors used in HPE Synergy Compute Modules, and assuming no other bottlenecks, a SAS job that runs on a processor
with a clock speed of 2GHz will complete twice as fast as the same SAS job run on a processor with a clock speed of 1GHz. However, it may be
that a group of SAS jobs will, in total, complete faster on a processor with a slower clock speed but more cores than the same jobs will
complete on a processor with a faster clock but fewer cores.

Within the constraints of a customer’s SAS license, enterprises need to determine which model they wish to pursue. Is it most important for single
jobs to complete more quickly, or is it more important for a group of jobs to complete more quickly?

A companion paper has been published that focuses on the throughput of each individual HPE Synergy Compute Module. The focus of this
paper is on storage throughput, and because of that, Hewlett Packard Enterprise ran the tests utilizing multiple HPE Synergy Compute Modules
to stress the storage. This means that the throughput of the Compute Modules will trend lower than what would be possible if only one
Compute Module at a time were used to run the SAS Mixed Analytics Workload.
The three Compute Modules used had the following configurations:

• HPE Synergy 480 Gen10
– 2 X Intel Xeon Gold 6154 18-core 3.0GHz processors
– 1TB of memory
– 2 X Synergy 3530C 16Gb Fibre Channel Mezzanine cards
– 1 X Synergy 3820C 10/20Gb Converged Network Adapter Mezzanine card
– 1 X HPE Smart Array E208i-c
– 2 X 300GB Internal disk drives - RAID 1 used for the operating system
• HPE Synergy 480 Gen10
– 2 X Intel Xeon Platinum 8168 24-core 2.7GHz processors
– 1TB of memory
– 2 X Synergy 3530C 16Gb Fibre Channel Mezzanine cards
– 1 X Synergy 3820C 10/20Gb Converged Network Adapter Mezzanine card
– 1 X HPE Smart Array E208i-c
– 2 X 300GB Internal disk drives - RAID 1 used for the operating system
• HPE Synergy 660 Gen10
– 4 X Intel Xeon Gold 6154 18-core 3.0GHz processors
– 1TB of memory
– 4 X Synergy 3530C 16Gb Fibre Channel Mezzanine cards
– 1 X Synergy 3820C 10/20Gb Converged Network Adapter Mezzanine card
– 1 X HPE Smart Array E208i-c
– 2 X 300GB Internal disk drives - RAID 1 used for the operating system

Software
• HPE 3PAR OS 3.2.2 MU4
• Red Hat Enterprise Linux® 7.4

Application software
• Base SAS v9.4 M4
• SAS Mixed Analytics Workload

SAS Mixed Analytics Workload description


The Foundation SAS workload used for this test had the following characteristics:

• 50% CPU intensive and 50% IO intensive jobs
• Utilized SAS procedures including:
– DATA step, PROC RISK, PROC LOGISTIC, PROC GLM (general linear model), PROC REG, PROC SQL, PROC MEANS, PROC SUMMARY,
PROC FREQ and PROC SORT
• SAS program input sizes up to 50GB per job
• Input data types are text, SAS data set and SAS transport files
• Memory use per job is up to 1GB
• Job runtimes were varied (short- and long-running tasks)

Important
This test scenario can provide an outline for comparing hardware and/or software products; it is not intended to be used as a sizing guideline. In
the real world, server performance is highly dependent upon application design and workload profile.

As with any laboratory testing, the performance metrics quoted in this paper are idealized. In a production environment, these metrics may be
impacted by a variety of factors.

Best practices and configuration guidance for the solution


Just as important to SAS 9.4 performance as the individual components is how the various pieces of the entire solution are assembled.

Hewlett Packard Enterprise has found, through testing, that it is optimal to have multiple virtual volumes (LUNs), typically 8, exported from the
array to the Compute Modules. On the Compute Modules those LUNs are then aggregated into a volume group, from which logical volumes are
created for each of the storage areas needed for SAS.

For each of the 30-user runs, Hewlett Packard Enterprise created a logical volume for SAS Data, SAS Work and UTILLOC. Each of the logical
volumes was striped across all 8 LUNs with a stripe size of 2M. The Logical Volume Manager (LVM) read ahead was then also set to 2097152 (2M), via the
lvchange command.
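
To confirm the resulting layout, a couple of standard LVM and block-layer queries can be used; a minimal sketch, assuming the vgsas volume group and logical volume names from the appendices:

# Show the segment map for a logical volume, including stripe count,
# stripe size and the configured "Read ahead sectors"
lvdisplay -m /dev/vgsas/saswork1

# Read ahead as the kernel sees it, reported in 512-byte sectors
blockdev --getra /dev/vgsas/saswork1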

Hewlett Packard Enterprise has also found that the best performing file system for SAS is XFS.

Scripts for the creation of the volume group, the logical volumes and the file systems can be found in appendices B through F.

Capacity and sizing


SAS Mixed Analytics Workload results - 200 drive configuration
The environment was queried every 5 seconds during the tests. The results were then aggregated for the best 10 intervals (50 seconds), 24
intervals (2 minutes), 120 intervals (10 minutes) and 240 intervals (20 minutes). This was done because the IO demand starts high and tails
off steadily over the duration of the tests. In fact, for a number of intervals during the later stages of the tests, only CPU intensive jobs
remained active and there was no IO being performed.
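
The paper does not name the sampling tool; as one hedged example, per-device throughput at 5-second intervals could be captured with iostat from the sysstat package, then the best consecutive 10, 24, 120 and 240 samples aggregated after the fact:

#!/bin/bash
# Record extended device statistics in MB/s every 5 seconds for 20 minutes
# (240 samples); the output file path is an arbitrary choice
iostat -xm 5 240 > /tmp/sas_io_samples.txt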

The following, Figure 2, is the average throughput for each of those intervals along with the total number of jobs for each of the scenarios.

The legend gives the number of 30-user runs being executed on each Compute Module. A legend of 1 X 1 X 2 means that one 30-user run was
executed on the first HPE Synergy 480 Gen10 Compute Module, one 30-user run was executed on the second HPE Synergy 480 Gen10
Compute Module and two 30-user runs were executed on the HPE Synergy 660 Gen10 Compute Module. Each 30-user run consisted of 102
total SAS jobs. This means that for the 1 X 1 X 2 stack, 408 total SAS jobs were completed. The largest number of SAS jobs occurred
with the 4 X 4 X 8 run, during which 1632 total SAS jobs were run across all environments, with their IO demand being satisfied by
the HPE 3PAR StoreServ 8400.

[Figure 2: "Avg Throughput" - a combined chart plotting MB per second (left axis) and number of SAS jobs (right axis) for each scenario from 1x1x2 through 4x4x8, at the best 50-second, 2-minute, 10-minute and 20-minute intervals. The largest number of SAS jobs occurred with the 4 X 4 X 8 run: 1632 total SAS jobs across all environments.]
Figure 2. IO throughput recorded for the best 50-second, 2-minute, 10-minute and 20-minute intervals, along with the total number of SAS jobs

What matters most to users of SAS 9.4 software is the total amount of time it takes for a series of SAS jobs to complete, along with the
time it takes for a single SAS job to complete. The following, Figure 3, shows the times for all jobs, along with the average time it took a single
SAS job to run during the scenario.

[Figure 3: "Time to complete" - wall time to complete all jobs (left axis) and time per job (right axis), plotted against the total number of SAS jobs run (408 through 1632).]

Figure 3. Average time for each 30-user scenario to complete along with the average time taken per job

SAS Mixed Analytics Workload results - 400 drive configuration


Next, let's look at the results from the same series of scenarios, the only difference being that the 3PAR had a total of 20 X 24-drive
shelves and 400 disk drives during this test, instead of the 10 shelves and 200 drives used in the previous set of results.

The following, Figure 4, is the average throughput for each of the recorded interval periods using the 400 drive configuration.

[Figure 4: "Avg Throughput" for the 400 drive configuration - MB per second (left axis) and number of SAS jobs (right axis) for each scenario from 1x1x2 through 4x4x8. The HPE 3PAR StoreServ 8400 achieved more than 11GB/sec for 2 minutes and maintained more than 10GB/sec for 20 minutes.]

Figure 4. IO throughput recorded for the best 50-second, 2-minute, 10-minute and 20-minute intervals, along with the total number of SAS jobs

Finally, Figure 5 is the time to complete graph for the 400 drive configuration.

[Figure 5: "Time to complete" for the 400 drive configuration - wall time to complete all jobs (left axis) and time per job (right axis), plotted against the total number of SAS jobs run (408 through 1632).]

Figure 5. Average time for each 30-user scenario to complete along with the average time taken per job

Sizing an HPE 3PAR StoreServ 8400 for customer environments


The question that we wish to answer is how to size an HPE 3PAR for a specific customer environment.

Today, SAS recommends 100MB/sec/core to 150MB/sec/core of IO throughput on modern computers in order to keep the processors busy.
What we need to determine, then, is the number of disk drives required to enable the HPE 3PAR StoreServ 8400 to deliver the required
throughput.
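
As a worked illustration of that rule of thumb, a small helper along these lines could turn a core count into a drive and shelf estimate. The per-drive throughput input is meant to be read off Figures 6 and 7; the script itself is a sketch, not part of the tested configuration:

#!/bin/bash
# Hypothetical sizing helper based on SAS's 100-150 MB/sec/core guidance
# and the 20-drives-per-shelf recommendation later in this section.
# Usage: ./size.sh <cores> <MB_per_sec_per_core> <MB_per_sec_per_drive>
cores=$1            # licensed SAS cores to keep busy
per_core=$2         # 100-150, per SAS guidance
per_drive=$3        # per-installed-drive MB/sec, read from Figures 6 and 7

required=$(( cores * per_core ))
drives=$(( (required + per_drive - 1) / per_drive ))   # round up
shelves=$(( (drives + 19) / 20 ))                      # 20 drives per shelf

echo "Required throughput: ${required} MB/sec"
echo "Estimated drives:    ${drives}"
echo "Estimated shelves:   ${shelves} (at the recommended 20 drives per shelf)"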

The following, Figure 6, shows the amount of throughput on a per drive basis in the 200 drive configuration, displaying both the amount of IO
per facing drive and the amount of IO per installed drive. Since HPE configured the storage in a RAID 5, 3+1 configuration, and 200
total drives were installed, the number of facing drives (the data-bearing equivalent, excluding parity) is 150, while the total number of drives is 200.

The 200 drive configuration had a total of 10 storage shelves installed in the HPE 3PAR StoreServ 8400, which left space for an additional 40
drives in this configuration.

[Figure 6: "200 Drive MB/sec/drive" - total MB/sec (left axis) and MB/sec per drive (right axis) for each scenario from 1x1x2 through 4x4x8, plotted per facing drive and per installed drive at the best 2-minute and 10-minute intervals.]

Figure 6. IO throughput per installed and facing drive in a 200 drive configuration

The following, Figure 7, is the same type of graph, except this time the graph is based on the 400 drive configuration.

The 400 drive configuration had a total of 20 storage shelves installed in the HPE 3PAR StoreServ 8400, which left space for an additional 80
drives in this configuration.

[Figure 7: "400 Drive MB/sec/drive" - total MB/sec (left axis) and MB/sec per drive (right axis) for each scenario from 1x1x2 through 4x4x8, plotted per facing drive and per installed drive at the best 2-minute and 10-minute intervals.]

Figure 7. IO throughput per installed and facing drive in a 400 drive configuration

From a throughput perspective, Hewlett Packard Enterprise recommends placing 20 drives per storage shelf to optimize throughput on a per
drive basis. Additional drives may be placed in the storage shelves for additional capacity, but HPE does not expect additional drives, beyond the
20 per shelf, to result in any additional throughput per shelf.

The HPE 3PAR StoreServ 8400 has a limitation of 22 drive shelves per array. Adding shelves beyond the tested 20 shelves will add capacity, but
HPE does not believe the additional shelves with additional drives will add significantly to the total throughput of the array.

Summary
Enterprises are continually asking information technology to do more while at the same time spending less money. Traditional SAS 9.4 workloads
are very IO intensive, requiring between 100 and 150MB/sec of IO throughput per core in order to keep the processors busy on a consistent
basis. These two aspects are at odds with each other. But Hewlett Packard Enterprise helps resolve these two competing issues by blending the
performance of flash storage with the cost of spinning storage, lowering the overall expenditure associated with acquiring high performance
storage.

As reported above, the HPE 3PAR 8400 array can start small and grow into a true SAS 9.4 IO powerhouse. Even at full deployment, the use of
spinning media reduces the cost of the array to approximately half that of an all flash solution without any sacrifice in performance in a SAS 9.4
environment.

In the appendices, HPE also explains exactly how to provision the storage on the HPE 3PAR side, as well as how to assemble it
all into a well-performing system on the server side. The advantage is twofold:

1. Enterprises don't have to experiment to determine the combination of configuration settings required to glean the best
performance out of the total solution.
2. Enterprises can be confident that the size of the solution will fit their requirements, using the sizing information contained in this document.

This allows those enterprises to tailor a solution for their unique needs rather than settle for a one-size-fits-all amalgamation.

This Reference Architecture describes solution testing completed in March 2018.

Implementing a proof-of-concept
As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches as
closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained.
For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.

Appendix A: Bill of materials


The following BOMs contain electronic license to use (E-LTU) parts. Electronic software license delivery is now available in most countries. HPE
recommends purchasing electronic products over physical products (when available) for faster delivery and for the convenience of not tracking
and managing confidential paper licenses. For more information, please contact your reseller or an HPE representative.

Note
Part numbers are valid at the time of publication/testing and subject to change. The bill of materials does not include complete support options
or other rack and power requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales
Representative for more details. hpe.com/us/en/services/consulting.html

Table 1a. Bill of materials for the HPE 3PAR StoreServ 8400 - 400 disk drive configuration
Qty Part number Description

HPE 3PAR StoreServ 8400 Configuration


1 BW904A HPE 42U 600x1075mm Enterprise Shock Rack
1 BW904A 001 HPE Factory Express Base Racking Service
1 H6Z03B HPE 3PAR 8400 4N+SW Storage Cent Base
40 K2P98B HPE 3PAR 8000 600GB+SW 15K SFF HDD
2 QK753B HPE SN6000B 16Gb 48/24 FC Switch
48 QK724A HPE B-series 16Gb SFP+SW XCVR
18 H6Z26A HPE 3PAR 8000 SFF(2.5in) SAS Drive Encl
360 K2P98B HPE 3PAR 8000 600GB+SW 15K SFF HDD
1 K2R29A HPE 3PAR StoreServ RPS Service Processor
1 TK808A HPE Rack Front Door Cover Kit
48 QK735A HPE Premier Flex LC/LC OM4 2f 15m Cbl
8 QK734A HPE Premier Flex LC/LC OM4 2f 5m Cbl
1 Q0Q04A HPE C13/C14 WW 10A 2m Blk 6pc Lckng PC
4 P9Q41A HPE G2 Basic 4.9kVA/(20) C13 NA/JP PDU
12 E7V95A HPE 10m Mini SAS HD Active Optical Cable
1 BW932A HPE 600mm Rack Stabilizer Kit
1 BW906A HPE 42U 1075mm Side Panel Kit
1 L7F20AAE HPE 3PAR All-in S-sys SW Current E-Media
1 BW928A HPE 1U Blck Universal 10-pk Filler Panel

Table 1b. Bill of materials for the HPE 3PAR StoreServ 8400 - 200 disk drive configuration
Qty Part number Description

HPE 3PAR StoreServ 8400 Configuration


1 BW904A HPE 42U 600x1075mm Enterprise Shock Rack
1 BW904A 001 HPE Factory Express Base Racking Service
1 H6Z03B HPE 3PAR 8400 4N+SW Storage Cent Base
40 K2P98B HPE 3PAR 8000 600GB+SW 15K SFF HDD
2 QK753B HPE SN6000B 16Gb 48/24 FC Switch
48 QK724A HPE B-series 16Gb SFP+SW XCVR
8 H6Z26A HPE 3PAR 8000 SFF(2.5in) SAS Drive Encl
160 K2P98B HPE 3PAR 8000 600GB+SW 15K SFF HDD
1 K2R29A HPE 3PAR StoreServ RPS Service Processor
1 TK808A HPE Rack Front Door Cover Kit
48 QK735A HPE Premier Flex LC/LC OM4 2f 15m Cbl
8 QK734A HPE Premier Flex LC/LC OM4 2f 5m Cbl
1 Q0Q04A HPE C13/C14 WW 10A 2m Blk 6pc Lckng PC
4 P9Q41A HPE G2 Basic 4.9kVA/(20) C13 NA/JP PDU
12 E7V95A HPE 10m Mini SAS HD Active Optical Cable
1 BW932A HPE 600mm Rack Stabilizer Kit
1 BW906A HPE 42U 1075mm Side Panel Kit
1 L7F20AAE HPE 3PAR All-in S-sys SW Current E-Media
1 BW928A HPE 1U Blck Universal 10-pk Filler Panel

Table 1c. Bill of materials for the HPE Synergy Frame and Compute Modules
Qty Part number Description

HPE Synergy Frame and Compute Modules


1 P9K10A HPE 42U 600x1200mm Adv G2 Kit Shock Rack
1 P9K10A 001 HPE Factory Express Base Racking Service
1 797740-B21 HPE Synergy12000 CTO Frame 1xFLM 10x Fan
1 871929-B21 HPE SY 660 Gen10 CTO Cmpt Mdl
2 872132-L21 HPE SY 480/660 Gen10 Xeon-G 6154 FIO Kit
2 872132-B21 HPE SY 480/660 Gen10 Xeon-G 6154 Kit
16 815101-B21 HPE 64GB 4Rx4 PC4-2666V-L Smart Kit
2 870753-B21 HPE 300GB SAS 15K SFF SC DS HDD
1 823852-B21 HPE Smart Array E208i-c SR Gen10 Ctrlr
1 777430-B21 HPE Synergy 3820C 10/20Gb CNA
4 777454-B21 HPE Synergy 3530C 16Gb Fibre Channel Host Bus Adapter
1 871940-B21 HPE SY 480 Gen10 CTO Cmpt Mdl
1 872132-L21 HPE SY 480/660 Gen10 Xeon-G 6154 FIO Kit
1 872132-B21 HPE SY 480/660 Gen10 Xeon-G 6154 Kit
16 815101-B21 HPE 64GB 4Rx4 PC4-2666V-L Smart Kit
2 870753-B21 HPE 300GB SAS 15K SFF SC DS HDD
1 823852-B21 HPE Smart Array E208i-c SR Gen10 Ctrlr
1 777430-B21 HPE Synergy 3820C 10/20Gb CNA
2 777454-B21 HPE Synergy 3530C 16Gb Fibre Channel Host Bus Adapter
1 871940-B21 HPE SY 480 Gen10 CTO Cmpt Mdl
1 872122-L21 HPE SY 480/660Gen10 Xeon-P 8168 FIO Kit
1 872122-B21 HPE SY 480/660 Gen10 Xeon-P 8168 Kit
16 815101-B21 HPE 64GB 4Rx4 PC4-2666V-L Smart Kit
2 870753-B21 HPE 300GB SAS 15K SFF SC DS HDD
1 823852-B21 HPE Smart Array E208i-c SR Gen10 Ctrlr
1 777430-B21 HPE Synergy 3820C 10/20Gb CNA
2 777454-B21 HPE Synergy 3530C 16Gb Fibre Channel Host Bus Adapter
1 779218-B21 HPE Synergy 20Gb Interconnect Link Mod
4 779227-B21 HPE VC SE 16Gb FC Module
1 794502-B23 HPE VC SE 40Gb F8 Module
1 798096-B21 HPE Synergy 12000F 6x 2650W AC Ti FIO PS
1 804353-B21 HPE Synergy Composer
1 804923-B21 HPE Synergy12000 Frame Compute Hlf Shelf
1 804938-B21 HPE Synergy 12000 Frame Rack Rail Option
1 804942-B21 HPE Synergy Frame Link Module
1 804943-B21 HPE Synergy 12000 Frame 4x Lift Handle
1 859493-B21 HPE Synergy Multi Frame Master1 FIO
2 804101-B21 HPE Synergy Interconnect Link 3m AOC
1 720199-B21 HPE BLc 40G QSFP+ QSFP+ 3m DAC Cable
1 861412-B21 HPE CAT6A 4ft Cbl

Appendix B: Script for creating the volume group


The following script was used to create the volume group on the HPE Synergy 660 Gen10 Compute Module.
vgcreate vgsas /dev/mapper/mpathr \
/dev/mapper/mpathq \
/dev/mapper/mpathx \
/dev/mapper/mpathw \
/dev/mapper/mpathv \
/dev/mapper/mpathu \
/dev/mapper/mpatht \
/dev/mapper/mpaths
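
Before creating the volume group, it is worth confirming that all eight exported LUNs are visible as multipath devices; a minimal check, assuming the mpathq through mpathx names used above:

# Each 3PAR LUN should appear once, with one WWID and multiple active paths
multipath -ll

# vgcreate will initialize the devices itself, but an explicit pvcreate
# makes the intent clear (repeat for each of the eight mpath devices)
pvcreate /dev/mapper/mpathq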

Appendix C: Script for creating the logical volumes


The following is the script that was used to create the logical volumes and set the read ahead values on the HPE Synergy 660 Gen10 Compute
Module.
#!/bin/bash

lvcreate -y -L 50G -i 8 -I 2M -n sas vgsas

lvcreate -y -L 1400G -i 8 -I 2M -n saswork1 vgsas
lvcreate -y -L 900G -i 8 -I 2M -n utilloc1 vgsas
lvcreate -y -L 1400G -i 8 -I 2M -n saswork2 vgsas
lvcreate -y -L 900G -i 8 -I 2M -n utilloc2 vgsas
lvcreate -y -L 1400G -i 8 -I 2M -n saswork3 vgsas
lvcreate -y -L 900G -i 8 -I 2M -n utilloc3 vgsas
lvcreate -y -L 1400G -i 8 -I 2M -n saswork4 vgsas
lvcreate -y -L 900G -i 8 -I 2M -n utilloc4 vgsas
lvcreate -y -L 1400G -i 8 -I 2M -n saswork5 vgsas
lvcreate -y -L 900G -i 8 -I 2M -n utilloc5 vgsas
lvcreate -y -L 1400G -i 8 -I 2M -n saswork6 vgsas
lvcreate -y -L 900G -i 8 -I 2M -n utilloc6 vgsas
lvcreate -y -L 1400G -i 8 -I 2M -n saswork7 vgsas
lvcreate -y -L 900G -i 8 -I 2M -n utilloc7 vgsas
lvcreate -y -L 1400G -i 8 -I 2M -n saswork8 vgsas
lvcreate -y -L 900G -i 8 -I 2M -n utilloc8 vgsas

lvcreate -y -L 1000G -i 8 -I 2M -n asuite1 vgsas
lvcreate -y -L 1000G -i 8 -I 2M -n asuite2 vgsas
lvcreate -y -L 1000G -i 8 -I 2M -n asuite3 vgsas
lvcreate -y -L 1000G -i 8 -I 2M -n asuite4 vgsas
lvcreate -y -L 1000G -i 8 -I 2M -n asuite5 vgsas
lvcreate -y -L 1000G -i 8 -I 2M -n asuite6 vgsas
lvcreate -y -L 1000G -i 8 -I 2M -n asuite7 vgsas
lvcreate -y -L 1000G -i 8 -I 2M -n asuite8 vgsas

lvchange --readahead 2097152 /dev/vgsas/saswork1
lvchange --readahead 2097152 /dev/vgsas/utilloc1
lvchange --readahead 2097152 /dev/vgsas/saswork2
lvchange --readahead 2097152 /dev/vgsas/utilloc2
lvchange --readahead 2097152 /dev/vgsas/saswork3
lvchange --readahead 2097152 /dev/vgsas/utilloc3
lvchange --readahead 2097152 /dev/vgsas/saswork4
lvchange --readahead 2097152 /dev/vgsas/utilloc4
lvchange --readahead 2097152 /dev/vgsas/saswork5
lvchange --readahead 2097152 /dev/vgsas/utilloc5
lvchange --readahead 2097152 /dev/vgsas/saswork6
lvchange --readahead 2097152 /dev/vgsas/utilloc6
lvchange --readahead 2097152 /dev/vgsas/saswork7
lvchange --readahead 2097152 /dev/vgsas/utilloc7
lvchange --readahead 2097152 /dev/vgsas/saswork8
lvchange --readahead 2097152 /dev/vgsas/utilloc8

lvchange --readahead 2097152 /dev/vgsas/sas
lvchange --readahead 2097152 /dev/vgsas/asuite1
lvchange --readahead 2097152 /dev/vgsas/asuite2
lvchange --readahead 2097152 /dev/vgsas/asuite3
lvchange --readahead 2097152 /dev/vgsas/asuite4
lvchange --readahead 2097152 /dev/vgsas/asuite5
lvchange --readahead 2097152 /dev/vgsas/asuite6
lvchange --readahead 2097152 /dev/vgsas/asuite7
lvchange --readahead 2097152 /dev/vgsas/asuite8

Appendix D: Script for creating the file systems on the logical volumes
The following is the script that was used to create the file systems on the logical volumes on the HPE Synergy 660 Gen10 Compute Module.
#!/bin/bash

mkfs.xfs /dev/vgsas/sas
mkfs.xfs /dev/vgsas/asuite1
mkfs.xfs /dev/vgsas/asuite2
mkfs.xfs /dev/vgsas/asuite3
mkfs.xfs /dev/vgsas/asuite4
mkfs.xfs /dev/vgsas/asuite5
mkfs.xfs /dev/vgsas/asuite6
mkfs.xfs /dev/vgsas/asuite7
mkfs.xfs /dev/vgsas/asuite8

mkfs.xfs /dev/vgsas/saswork1
mkfs.xfs /dev/vgsas/utilloc1
mkfs.xfs /dev/vgsas/saswork2
mkfs.xfs /dev/vgsas/utilloc2
mkfs.xfs /dev/vgsas/saswork3
mkfs.xfs /dev/vgsas/utilloc3
mkfs.xfs /dev/vgsas/saswork4
mkfs.xfs /dev/vgsas/utilloc4
mkfs.xfs /dev/vgsas/saswork5
mkfs.xfs /dev/vgsas/utilloc5
mkfs.xfs /dev/vgsas/saswork6
mkfs.xfs /dev/vgsas/utilloc6
mkfs.xfs /dev/vgsas/saswork7
mkfs.xfs /dev/vgsas/utilloc7
mkfs.xfs /dev/vgsas/saswork8
mkfs.xfs /dev/vgsas/utilloc8

Appendix E: Script for mounting the file systems the first time
The script used to mount the file systems the first time on the HPE Synergy 660 Gen10 Compute Module.
#!/bin/bash

mkdir /sasdata
mkdir /sasdata/asuite1
mkdir /sasdata/asuite2
mkdir /sasdata/asuite3
mkdir /sasdata/asuite4
mkdir /sasdata/asuite5
mkdir /sasdata/asuite6
mkdir /sasdata/asuite7
mkdir /sasdata/asuite8

mount /dev/vgsas/asuite1 /sasdata/asuite1
mount /dev/vgsas/asuite2 /sasdata/asuite2
mount /dev/vgsas/asuite3 /sasdata/asuite3
mount /dev/vgsas/asuite4 /sasdata/asuite4
mount /dev/vgsas/asuite5 /sasdata/asuite5
mount /dev/vgsas/asuite6 /sasdata/asuite6
mount /dev/vgsas/asuite7 /sasdata/asuite7
mount /dev/vgsas/asuite8 /sasdata/asuite8

mkdir /sasdata/asuite1/saswork
mount /dev/vgsas/saswork1 /sasdata/asuite1/saswork
mkdir /sasdata/asuite1/saswork/utilloc
mount /dev/vgsas/utilloc1 /sasdata/asuite1/saswork/utilloc

mkdir /sasdata/asuite2/saswork
mount /dev/vgsas/saswork2 /sasdata/asuite2/saswork
mkdir /sasdata/asuite2/saswork/utilloc
mount /dev/vgsas/utilloc2 /sasdata/asuite2/saswork/utilloc

mkdir /sasdata/asuite3/saswork
mount /dev/vgsas/saswork3 /sasdata/asuite3/saswork
mkdir /sasdata/asuite3/saswork/utilloc
mount /dev/vgsas/utilloc3 /sasdata/asuite3/saswork/utilloc

mkdir /sasdata/asuite4/saswork
mount /dev/vgsas/saswork4 /sasdata/asuite4/saswork
mkdir /sasdata/asuite4/saswork/utilloc
mount /dev/vgsas/utilloc4 /sasdata/asuite4/saswork/utilloc

mkdir /sasdata/asuite5/saswork
mount /dev/vgsas/saswork5 /sasdata/asuite5/saswork
mkdir /sasdata/asuite5/saswork/utilloc
mount /dev/vgsas/utilloc5 /sasdata/asuite5/saswork/utilloc

mkdir /sasdata/asuite6/saswork
mount /dev/vgsas/saswork6 /sasdata/asuite6/saswork
mkdir /sasdata/asuite6/saswork/utilloc
mount /dev/vgsas/utilloc6 /sasdata/asuite6/saswork/utilloc

mkdir /sasdata/asuite7/saswork
mount /dev/vgsas/saswork7 /sasdata/asuite7/saswork
mkdir /sasdata/asuite7/saswork/utilloc
mount /dev/vgsas/utilloc7 /sasdata/asuite7/saswork/utilloc

mkdir /sasdata/asuite8/saswork
mount /dev/vgsas/saswork8 /sasdata/asuite8/saswork
mkdir /sasdata/asuite8/saswork/utilloc
mount /dev/vgsas/utilloc8 /sasdata/asuite8/saswork/utilloc
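
After the first mount, a quick check that the file systems landed on the intended logical volumes can be useful; the paths below assume the layout above:

# Each path should be backed by a /dev/mapper/vgsas-* device of type xfs
df -hT /sasdata/asuite1 /sasdata/asuite1/saswork /sasdata/asuite1/saswork/utilloc

# XFS geometry for one mounted file system; sunit/swidth show any stripe
# alignment that mkfs.xfs detected
xfs_info /sasdata/asuite1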

Appendix F: /etc/fstab script for remounting the file systems after a system boot
The following is what the /etc/fstab file looks like on the HPE Synergy 660 Gen10 Compute Module after all of the logical volumes, file
systems and mount points have been created. This allows the logical volumes to be automatically
mounted after the system is rebooted.
#
# /etc/fstab
# Created by anaconda on Tue Feb 6 16:06:57 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=2970cfe0-e126-4952-9314-747212606f80 /boot xfs defaults 0 0
UUID=6F5C-E5C3 /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/mapper/rhel-home /home xfs defaults 0 0
/dev/mapper/rhel-opt /opt xfs defaults 0 0
/dev/mapper/rhel-swap swap swap defaults 0 0
/dev/mapper/vgsas-asuite1 /sasdata/asuite1 xfs defaults 0 0
/dev/mapper/vgsas-asuite2 /sasdata/asuite2 xfs defaults 0 0
/dev/mapper/vgsas-asuite3 /sasdata/asuite3 xfs defaults 0 0
/dev/mapper/vgsas-asuite4 /sasdata/asuite4 xfs defaults 0 0
/dev/mapper/vgsas-asuite5 /sasdata/asuite5 xfs defaults 0 0
/dev/mapper/vgsas-asuite6 /sasdata/asuite6 xfs defaults 0 0
/dev/mapper/vgsas-asuite7 /sasdata/asuite7 xfs defaults 0 0
/dev/mapper/vgsas-asuite8 /sasdata/asuite8 xfs defaults 0 0
/dev/mapper/vgsas-saswork1 /sasdata/asuite1/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork2 /sasdata/asuite2/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork3 /sasdata/asuite3/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork4 /sasdata/asuite4/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork5 /sasdata/asuite5/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork6 /sasdata/asuite6/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork7 /sasdata/asuite7/saswork xfs defaults 0 0
/dev/mapper/vgsas-saswork8 /sasdata/asuite8/saswork xfs defaults 0 0
/dev/mapper/vgsas-utilloc1 /sasdata/asuite1/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc2 /sasdata/asuite2/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc3 /sasdata/asuite3/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc4 /sasdata/asuite4/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc5 /sasdata/asuite5/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc6 /sasdata/asuite6/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc7 /sasdata/asuite7/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-utilloc8 /sasdata/asuite8/saswork/utilloc xfs defaults 0 0
/dev/mapper/vgsas-sas /opt/sas9.4 xfs defaults 0 0
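
One way to validate these entries without rebooting is to let mount read fstab directly, assuming the SAS file systems are currently unmounted:

# Mounts everything in /etc/fstab not already mounted; a typo in any
# entry surfaces here instead of at the next boot
mount -a

# Confirm a representative mount point resolved correctly
findmnt /sasdata/asuite1/saswork/utilloc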

Resources and additional links


SAS 9.4 on HPE Synergy Compute Modules, http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=a00044785enw

HPE Reference Architectures, hpe.com/info/ra

HPE 3PAR, https://www.hpe.com/us/en/storage/3par.html


HPE Synergy, hpe.com/synergy

HPE Servers, hpe.com/servers

HPE Storage, hpe.com/storage


HPE Networking, hpe.com/networking

HPE Technology Consulting Services, hpe.com/us/en/services/consulting.html

SAS 9.4, https://www.sas.com/en_us/software/sas9.html


What’s New in SAS 9.4, http://documentation.sas.com/?docsetId=whatsnew&docsetTarget=whatsnew.pdf&docsetVersion=9.4&locale=en

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.

Sign up for updates

© Copyright 2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice.
The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall
not be liable for technical or editorial errors or omissions contained herein.

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and
other countries. Microsoft is a registered trademark of Microsoft Corporation in the United States and/or other countries. Intel, Xeon, and
Intel Xeon, are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. Red Hat Enterprise Linux Certified is a
registered trademark of Red Hat, Inc. in the United States and other countries.

a00044784enw, April 2018
