Overview
Get started
Certifications
SAP HANA on Azure (Large Instances)
Overview and architecture
Infrastructure and connectivity
Install SAP HANA
High availability and disaster recovery
Troubleshoot and monitor
How to
HA Setup with STONITH
SAP HANA on Azure Virtual Machines
Single instance SAP HANA
S/4 HANA or BW/4 HANA SAP CAL deployment guide
SAP HANA High Availability in Azure VMs
SAP HANA backup overview
SAP HANA file level backup
SAP HANA storage snapshot backups
SAP NetWeaver on Azure Virtual Machines
SAP IDES on Windows/SQL Server SAP CAL deployment guide
SAP NetWeaver on Azure Linux VMs
Plan and implement SAP NetWeaver on Azure
High availability on Windows
High availability on SUSE Linux
Multi-SID configurations
Deployment guide
DBMS deployment guide
Azure Site Recovery for SAP Disaster Recovery
AAD SAP Identity Integration and Single-Sign-On
Integration with SAP Cloud
AAD Integration with SAP Cloud Platform Identity Authentication
Set up Single-Sign-On with SAP Cloud Platform
AAD Integration with SAP NetWeaver
AAD Integration with SAP Business ByDesign
AAD Integration with SAP HANA DBMS
SAP Fiori Launchpad SAML Single Sign-On with Azure AD
Resources
Azure Roadmap
Using Azure for hosting and running SAP workload scenarios
10/3/2017 · 10 min to read
By choosing Microsoft Azure as your SAP-ready cloud partner, you can reliably run your mission-critical SAP
workloads and scenarios on a scalable, compliant, and enterprise-proven platform. Get the scalability, flexibility,
and cost savings of Azure. With the expanded partnership between Microsoft and SAP, you can run SAP
applications across dev/test and production scenarios in Azure - and be fully supported. From SAP NetWeaver to
SAP S/4HANA, SAP BI, Linux to Windows, and SAP HANA to SQL, we have you covered.
Besides hosting SAP NetWeaver scenarios with different DBMSs on Azure, you can host other SAP workload
scenarios, such as SAP BI on Azure. Documentation regarding SAP NetWeaver deployments on native Azure
Virtual Machines can be found in the section "SAP NetWeaver on Azure Virtual Machines."
Azure has native Azure Virtual Machine offers that are ever growing in CPU and memory resources to cover
SAP workloads that leverage SAP HANA. For more information on this topic, see the documents under the
section "SAP HANA on Azure Virtual Machines."
Azure's offer for SAP HANA is unique and sets Azure apart from the competition. To enable hosting SAP
scenarios that demand more memory and CPU resources for SAP HANA, Azure offers the usage of
customer-dedicated bare-metal hardware for the purpose of running SAP HANA deployments that require up to
20 TB (60 TB scale-out) of memory for S/4HANA or other SAP HANA workloads. This unique Azure solution, SAP
HANA on Azure (Large Instances), allows you to run SAP HANA on dedicated bare-metal hardware with the SAP
application layer or workload middleware layer hosted in native Azure Virtual Machines. This solution is
documented in several documents in the section "SAP HANA on Azure (Large Instances)."
Hosting SAP workload scenarios in Azure can also create requirements for identity integration and Single Sign-On
using Azure Active Directory with different SAP components and SAP SaaS or PaaS offers. A list of such integration
and Single Sign-On scenarios with Azure Active Directory (AAD) and SAP entities is described and documented in
the section "AAD SAP Identity Integration and Single-Sign-On."
SAP and Microsoft have a long history of working together in a strong partnership that has mutual benefits for
their customers. Microsoft is constantly updating its platform and submitting new certification details to SAP in
order to ensure Microsoft Azure is the best platform on which to run your SAP workloads. The following tables
outline our supported configurations and growing list of certifications.
| SAP PRODUCT | GUEST OS | RDBMS | VIRTUAL MACHINE TYPES |
| --- | --- | --- | --- |
| SAP HANA Developer Edition (including the HANA client software comprised of SQLODBC, ODBO - Windows only, ODBC, JDBC drivers, HANA studio, and HANA database) | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA | D-Series VM family |
| HANA One | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA | DS14_v2 (upon general availability) |
| SAP S/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA | Controlled Availability for GS5, SAP HANA on Azure (Large Instances) |
| Suite on HANA, OLTP | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA | GS5 for single node deployments for non-production scenarios, SAP HANA on Azure (Large Instances) |
| HANA Enterprise for BW, OLAP | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA | GS5 for single node deployments, SAP HANA on Azure (Large Instances) |
| SAP BW/4 HANA | Red Hat Enterprise Linux, SUSE Linux Enterprise | SAP HANA | GS5 for single node deployments, SAP HANA on Azure (Large Instances) |
| SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5 |
| SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5 |
| SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux | SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5 |
SAP HANA (Large Instances) overview and architecture on Azure
10/4/2017 · 44 min to read
Definitions
Several common definitions are widely used in the Architecture and Technical Deployment Guide. Note the
following terms and their meanings:
IaaS: Infrastructure as a Service
PaaS: Platform as a Service
SaaS: Software as a Service
SAP Component: An individual SAP application, such as ECC, BW, Solution Manager, or EP. SAP components
can be based on traditional ABAP or Java technologies or a non-NetWeaver based application such as Business
Objects.
SAP Environment: One or more SAP components logically grouped to perform a business function, such as
Development, QAS, Training, DR, or Production.
SAP Landscape: Refers to the entire SAP assets in your IT landscape. The SAP landscape includes all production
and non-production environments.
SAP System: The combination of the DBMS layer and the application layer of, for example, an SAP ERP
development system, an SAP BW test system, or an SAP CRM production system. Azure deployments do not
support dividing these two layers between on-premises and Azure. This means an SAP system is either deployed
on-premises, or it is deployed in Azure. However, you can deploy the different systems of an SAP landscape into
either Azure or on-premises.
For example, you could deploy the SAP CRM development and test systems in Azure, while deploying the SAP
CRM production system on-premises. For SAP HANA on Azure (Large Instances), it is intended that you host the
SAP application layer of SAP systems in Azure VMs and the related SAP HANA instance on a unit in the HANA
Large Instance stamp.
Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI certified and dedicated to run
SAP HANA instances within Azure.
SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run HANA instances on SAP
HANA TDI certified hardware that is deployed in Large Instance stamps in different Azure regions. The related
term HANA Large Instance is short for SAP HANA on Azure (Large Instances) and is widely used in this technical
deployment guide.
Cross-Premises: Describes a scenario where VMs are deployed to an Azure subscription that has site-to-site,
multi-site, or ExpressRoute connectivity between the on-premises datacenter(s) and Azure. In common Azure
documentation, these kinds of deployments are also described as Cross-Premises scenarios. The reason for the
connection is to extend on-premises domains, on-premises Active Directory/OpenLDAP, and on-premises DNS
into Azure. The on-premises landscape is extended to the Azure assets of the Azure subscription(s). Having this
extension, the VMs can be part of the on-premises domain. Domain users of the on-premises domain can
access the servers and can run services on those VMs (like DBMS services). Communication and name
resolution between VMs deployed on-premises and Azure deployed VMs is possible. Such is the typical
scenario in which most SAP assets are deployed. See the guides of Planning and design for VPN Gateway and
Create a VNet with a Site-to-Site connection using the Azure portal for more detailed information.
Tenant: A customer deployed in a HANA Large Instance stamp gets isolated into a "tenant." A tenant is isolated
in the networking, storage, and compute layer from other tenants, so that storage and compute units assigned
to different tenants cannot see each other or communicate with each other on the HANA Large Instance
stamp level. A customer can choose to have deployments into different tenants. Even then, there is no
communication between tenants on the HANA Large Instance stamp level.
A variety of additional resources have been published on the topic of deploying SAP workloads on the
Microsoft Azure public cloud. It is highly recommended that anyone planning and executing a deployment of SAP
HANA in Azure is experienced with and aware of the principles of Azure IaaS and the deployment of SAP workloads
on Azure IaaS. The following resources provide more information and should be referenced before continuing:
Using SAP solutions on Microsoft Azure virtual machines
Certification
Besides the NetWeaver certification, SAP requires a special certification for SAP HANA to support SAP HANA on
certain infrastructures, such as Azure IaaS.
The core SAP Note on NetWeaver, and to a degree SAP HANA certification, is SAP Note #1928533 SAP
Applications on Azure: Supported Products and Azure VM types.
SAP Note #2316233 - SAP HANA on Microsoft Azure (Large Instances) is also significant. It covers the solution
described in this guide. Additionally, SAP HANA is supported to run in the GS5 VM type of Azure. Information
for this case is published on the SAP website.
The SAP HANA on Azure (Large Instances) solution referred to in SAP Note #2316233 provides Microsoft and SAP
customers the ability to deploy large SAP Business Suite, SAP Business Warehouse (BW), S/4 HANA, BW/4HANA,
or other SAP HANA workloads in Azure. The solution is based on the SAP-HANA certified dedicated hardware
stamp (SAP HANA Tailored Datacenter Integration TDI). Running as an SAP HANA TDI configured solution
provides you with the confidence of knowing that all SAP HANA-based applications (including SAP Business Suite
on SAP HANA, SAP Business Warehouse (BW) on SAP HANA, S/4HANA, and BW/4HANA) are going to work on the
hardware infrastructure.
Compared to running SAP HANA in Azure Virtual Machines, this solution has a benefit: it provides much larger
memory volumes. If you want to enable this solution, there are some key aspects to understand:
The SAP application layer and non-SAP applications run in Azure Virtual Machines (VMs) that are hosted in the
usual Azure hardware stamps.
Customer on-premises infrastructure, data centers, and application deployments are connected to the Microsoft
Azure cloud platform through Azure ExpressRoute (recommended) or Virtual Private Network (VPN). Active
Directory (AD) and DNS are also extended into Azure.
The SAP HANA database instance for HANA workload runs on SAP HANA on Azure (Large Instances). The Large
Instance stamp is connected into Azure networking, so software running in Azure VMs can interact with the
HANA instance running in HANA Large Instances.
Hardware of SAP HANA on Azure (Large Instances) is dedicated hardware provided in an Infrastructure as a
Service (IaaS) with SUSE Linux Enterprise Server, or Red Hat Enterprise Linux, pre-installed. As with Azure
Virtual Machines, further updates and maintenance to the operating system is your responsibility.
Installation of HANA or any additional components necessary to run SAP HANA on units of HANA Large
instances is your responsibility, as is all respective ongoing operations and administrations of SAP HANA on
Azure.
In addition to the solutions described here, you can install other components in your Azure subscription that
connect to SAP HANA on Azure (Large Instances). For example, components that enable communication with
and/or directly to the SAP HANA database (jump servers, RDP servers, SAP HANA Studio, SAP Data Services for
SAP BI scenarios, or network monitoring solutions).
As with Azure, HANA Large Instances offer supporting High Availability and Disaster Recovery functionality.
Architecture
At a high-level, the SAP HANA on Azure (Large Instances) solution has the SAP application layer residing in Azure
VMs and the database layer residing on SAP TDI configured hardware located in a Large Instance stamp in the
same Azure Region that is connected to Azure IaaS.
NOTE
You need to deploy the SAP application layer in the same Azure Region as the SAP DBMS layer. This rule is well-documented
in published information about SAP workload on Azure.
The overall architecture of SAP HANA on Azure (Large Instances) provides an SAP TDI certified hardware
configuration (non-virtualized, bare metal, high-performance server for the SAP HANA database), and the ability
and flexibility of Azure to scale resources for the SAP application layer to meet your needs.
CPU cores = sum of non-hyper-threaded CPU cores of the sum of the processors of the server unit.
CPU threads = sum of compute threads provided by hyper-threaded CPU cores of the sum of the processors of
the server unit. All units are configured by default to use Hyper-Threading.
The configurations above that are 'Available' or 'Not offered anymore' are referenced in SAP Support
Note #2316233 - SAP HANA on Microsoft Azure (Large Instances). The configurations marked as
'Ready to Order' will find their entry into the SAP Note soon. However, those instance SKUs can already be
ordered for the six Azure regions in which the HANA Large Instance service is available.
The specific configurations chosen are dependent on workload, CPU resources, and desired memory. It is possible
for OLTP workload to use the SKUs that are optimized for OLAP workload.
The hardware base for all the offers is SAP HANA TDI certified. However, we distinguish between two different
classes of hardware, which divide the SKUs into:
S72, S72m, S144, S144m, S192, and S192m, which we refer to as the 'Type I class' of SKUs.
S384, S384m, S384xm, S576, S768, and S960, which we refer to as the 'Type II class' of SKUs.
It is important to note that a complete HANA Large Instance stamp is not exclusively allocated for a single
customer's use. This fact applies to the racks of compute and storage resources connected through a network
fabric deployed in Azure as well. HANA Large Instances infrastructure, like Azure, deploys different customer
"tenants" that are isolated from one another in the following three levels:
Network: Isolation through virtual networks within the HANA Large Instance stamp.
Storage: Isolation through storage virtual machines that have storage volumes assigned and isolate storage
volumes between tenants.
Compute: Dedicated assignment of server units to a single tenant. No hard or soft-partitioning of server units.
No sharing of a single server or host unit between tenants.
As such, the deployments of HANA Large Instances units between different tenants are not visible to each other.
Nor can HANA Large Instance Units deployed in different tenants communicate directly with each other on the
HANA Large Instance stamp level. Only HANA Large Instance Units within one tenant can communicate to each
other on the HANA Large Instance stamp level. A deployed tenant in the Large Instance stamp is assigned
billing-wise to one Azure subscription. However, network-wise it can be accessed from Azure VNets of other Azure
subscriptions within the same Azure enrollment. If you deploy with another Azure subscription in the same Azure
region, you also can choose to ask for a separate HANA Large Instance tenant.
There are significant differences between running SAP HANA on HANA Large Instances and SAP HANA running on
Azure VMs deployed in Azure:
There is no virtualization layer for SAP HANA on Azure (Large Instances). You get the performance of the
underlying bare-metal hardware.
Unlike Azure, the SAP HANA on Azure (Large Instances) server is dedicated to a specific customer. There is no
possibility that a server unit or host is hard or soft-partitioned. As a result, a HANA Large Instance unit is used
as assigned as a whole to a tenant and with that to you as a customer. A reboot or shutdown of the server does
not lead automatically to the operating system and SAP HANA being deployed on another server. (For Type I
class SKUs, the only exception is if a server might encounter issues and redeployment needs to be performed
on another server.)
Unlike Azure, where host processor types are selected for the best price/performance ratio, the processor types
chosen for SAP HANA on Azure (Large Instances) are the highest performing of the Intel E7v3 and E7v4
processor line.
Running multiple SAP HANA instances on one HANA Large Instance unit
It is possible to host more than one active SAP HANA instance on the HANA Large Instance units. In order to still
provide the capabilities of Storage Snapshots and Disaster recovery, such a configuration requires a volume set per
instance. As of now, the HANA Large Instance units can be subdivided as follows:
S72, S72m, S144, S192: In increments of 256 GB with 256 GB the smallest starting unit. Different increments
like 256 GB, 512 GB, and so on, can be combined to the maximum of the memory of the unit.
S144m and S192m: In increments of 256 GB with 512 GB the smallest unit. Different increments like 512 GB,
768 GB, and so on, can be combined to the maximum of the memory of the unit.
Type II class: In increments of 512 GB with the smallest starting unit of 2 TB. Different increments like 512 GB, 1
TB, 1.5 TB, and so on, can be combined to the maximum of the memory of the unit.
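The increment rules above can be sketched with a short helper. This is an illustrative sketch only: the helper name is made up, and the 768 GB and 2 TB unit sizes used in the examples are assumptions for illustration, not taken from this guide.

```python
# Illustrative sketch of the memory partition rules for subdividing a
# HANA Large Instance unit. Function name and unit sizes are assumptions.

def valid_partitions(total_gb, increment_gb, smallest_gb):
    """Return all partition sizes from the smallest unit up to the
    full memory of the unit, growing by the given increment."""
    sizes = []
    size = smallest_gb
    while size <= total_gb:
        sizes.append(size)
        size += increment_gb
    return sizes

# S72/S144/S192 rule: 256 GB increments, 256 GB smallest unit
# (assuming a 768 GB unit for illustration)
print(valid_partitions(768, 256, 256))        # [256, 512, 768]

# S144m/S192m rule: 256 GB increments, 512 GB smallest unit
print(valid_partitions(2048, 256, 512)[:3])   # [512, 768, 1024]

# Type II rule: 512 GB increments, 2 TB (2048 GB) smallest unit
print(valid_partitions(4096, 512, 2048))      # [2048, 2560, 3072, 3584, 4096]
```
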
Some examples of running multiple SAP HANA instances could look like:
You get the idea. There certainly are other variations as well.
Using SAP HANA Data Tiering and Extension nodes
SAP supports a Data Tiering model for SAP BW of different SAP NetWeaver releases and SAP BW/4HANA. Details
regarding to the Data Tiering model can be found in this document and blog referenced in this document by SAP:
SAP BW/4HANA AND SAP BW ON HANA WITH SAP HANA EXTENSION NODES. With HANA Large Instances, you
can use option-1 configuration of SAP HANA Extension Nodes as detailed in this FAQ and SAP blog documents.
Option-2 configurations can be set up with the following HANA Large Instance SKUs: S72m, S192, S192m, S384,
and S384m.
Looking at the documentation, the advantage might not be visible immediately. But looking into the SAP sizing
guidelines, you can see an advantage in using option-1 and option-2 SAP HANA extension nodes. Here is an
example:
SAP HANA sizing guidelines usually require double the amount of data volume as memory. So, when you are
running your SAP HANA instance with the hot data, you only have 50% or less of the memory filled with data.
The remainder of the memory is ideally held for SAP HANA doing its work.
That means in a HANA Large Instance S192 unit with 2 TB of memory, running an SAP BW database, you only
have 1 TB as data volume.
If you use an additional SAP HANA Extension Node of option-1, also an S192 HANA Large Instance SKU, it would
give you an additional 2 TB of capacity for data volume; in the option-2 configuration, even an additional 4 TB for
warm data volume. Compared to the hot node, the full memory capacity of the 'warm' extension node can be
used for data storage in option-1, and double the memory can be used for data volume in the option-2 SAP HANA
extension node configuration.
As a result, you end up with a capacity of 3 TB for your data and a hot-to-warm ratio of 1:2 for option-1, and 5 TB
of data and a 1:4 ratio in the option-2 extension node configuration.
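The arithmetic of this example can be sketched as follows. The helper name is made up for illustration; the 50% figure follows the sizing rule of thumb quoted above (data volume equal to half the memory on the hot node).

```python
# Worked sketch of the extension-node capacity example above.
# Values follow the S192 example in the text (2 TB of memory per unit).

def bw_capacity(hot_memory_tb, extension_memory_tb=0.0, option=None):
    """Data capacity in TB for a BW hot node plus an optional
    SAP HANA extension node (option-1 or option-2)."""
    # Sizing rule of thumb: hot data fills only ~50% of memory.
    hot_data = hot_memory_tb * 0.5
    if option == 1:
        # option-1: full memory of the warm node usable for data
        warm_data = extension_memory_tb
    elif option == 2:
        # option-2: double the memory usable as data volume
        warm_data = extension_memory_tb * 2
    else:
        warm_data = 0.0
    ratio = warm_data / hot_data if hot_data else 0.0
    return hot_data + warm_data, ratio

print(bw_capacity(2, 2, option=1))   # (3.0, 2.0): 3 TB total, 1:2 hot-to-warm
print(bw_capacity(2, 2, option=2))   # (5.0, 4.0): 5 TB total, 1:4 hot-to-warm
```
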
However, the higher the data volume compared to the memory, the higher the chances are that the warm data you
are asking for is stored on disk storage.
As you can see in the diagram above, SAP HANA on Azure (Large Instances) is a multi-tenant Infrastructure as a
Service offer. And as a result, the division of responsibility is at the OS-Infrastructure boundary, for the most part.
Microsoft is responsible for all aspects of the service below the line of the operating system and you are
responsible above the line, including the operating system. So most current on-premises methods you may be
employing for compliance, security, application management, basis, and OS management can continue to be used.
The systems appear as if they are in your network in all regards.
However, this service is optimized for SAP HANA, so there are areas where you and Microsoft need to work
together to use the underlying infrastructure capabilities for best results.
The following list provides more detail on each of the layers and your responsibilities:
Networking: All the internal networks for the Large Instance stamp running SAP HANA, its access to the storage,
connectivity between the instances (for scale-out and other functions), connectivity to the landscape, and
connectivity to Azure where the SAP application layer is hosted in Azure Virtual Machines. It also includes WAN
connectivity between Azure datacenters for Disaster Recovery replication purposes. All networks are partitioned
by tenant and have QoS applied.
Storage: The virtualized partitioned storage for all volumes needed by the SAP HANA servers, as well as for
snapshots.
Servers: The dedicated physical servers to run the SAP HANA DBs assigned to tenants. The servers of the Type I
class of SKUs are hardware abstracted. With these types of servers, the server configuration is collected and
maintained in profiles, which can be moved from one physical piece of hardware to another. Such a
(manual) move of a profile by operations can be compared somewhat to Azure Service Healing. The servers of the
Type II class SKUs do not offer such a capability.
SDDC: The management software that is used to manage data centers as software defined entities. It allows
Microsoft to pool resources for scale, availability, and performance reasons.
O/S: The OS you choose (SUSE Linux or Red Hat Linux) that is running on the servers. The OS images you are
provided are the images provided by the individual Linux vendor to Microsoft for the purpose of running SAP
HANA. You are required to have a subscription with the Linux vendor for the specific SAP HANA-optimized image.
Your responsibilities include registering the images with the OS vendor. From the point of handover by Microsoft,
you are also responsible for any further patching of the Linux operating system. This patching also includes
additional packages that might be necessary for a successful SAP HANA installation (refer to SAP's HANA
installation documentation and SAP Notes) and which have not been included by the specific Linux vendor in their
SAP HANA optimized OS images. The customer's responsibility also includes patching of the OS related to
malfunctions or optimization of the OS and its drivers for the specific server hardware, as well as any security or
functional patching of the OS. The customer is also responsible for monitoring and capacity planning of:
CPU resource consumption
Memory consumption
Disk volumes related to free space, IOPS, and latency
Network volume traffic between HANA Large Instance and SAP application layer
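Since this monitoring is your responsibility, a minimal sketch of one such check (disk free space) using only the Python standard library might look like the following. The mount point and threshold are placeholders; in practice you would point this at your HANA volumes and feed the result into your monitoring tooling.

```python
# Minimal, hedged sketch of a disk free-space check for HANA volumes.
# Paths and the threshold are placeholders, not part of any official tooling.

import shutil

def check_free_space(path, min_free_fraction=0.2):
    """Return True if free space on `path` is at or above the threshold."""
    usage = shutil.disk_usage(path)          # (total, used, free) in bytes
    free_fraction = usage.free / usage.total
    ok = free_fraction >= min_free_fraction
    print(f"{path}: {free_fraction:.1%} free -> {'OK' if ok else 'LOW'}")
    return ok

check_free_space("/")   # in practice: /hana/data, /hana/log, /hana/shared
```
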
The underlying infrastructure of HANA Large Instances provides functionality for backup and restore of the OS
volume. Using this functionality is also your responsibility.
Middleware: The SAP HANA Instance, primarily. Administration, operations, and monitoring are your
responsibility. There is functionality provided that enables you to use storage snapshots for backup/restore and
Disaster Recovery purposes. These capabilities are provided by the infrastructure. However, your responsibilities
also include designing High Availability or Disaster Recovery with these capabilities, leveraging them, and
monitoring that storage snapshots have been executed successfully.
Data: Your data managed by SAP HANA, and other data such as backups files located on volumes or file shares.
Your responsibilities include monitoring disk free space and managing the content on the volumes, and
monitoring the successful execution of backups of disk volumes and storage snapshots. However, successful
execution of data replication to DR sites is the responsibility of Microsoft.
Applications: The SAP application instances or, in case of non-SAP applications, the application layer of those
applications. Your responsibilities include deployment, administration, operations, and monitoring of those
applications related to capacity planning of CPU resource consumption, memory consumption, Azure Storage
consumption and network bandwidth consumption within Azure VNets, and from Azure VNets to SAP HANA on
Azure (Large Instances).
WANs: The connections you establish from on-premises to Azure deployments for workloads. All our customers
with HANA Large Instances use Azure ExpressRoute for connectivity. This connection is not part of the SAP HANA
on Azure (Large Instances) solution, so you are responsible for the setup of this connection.
Archive: You might prefer to archive copies of data using your own methods in storage accounts. Archiving
requires management, compliance, costs, and operations. You are responsible for generating archive copies and
backups on Azure, and storing them in a compliant way.
See the SLA for SAP HANA on Azure (Large Instances).
Sizing
Sizing for HANA Large Instances is no different than sizing for HANA in general. For existing, deployed
systems that you want to move from another RDBMS to HANA, SAP provides a number of reports that run on your
existing SAP systems. If the database is moved to HANA, these reports check the data and calculate memory
requirements for the HANA instance. Read the following SAP Notes to get more information on how to run these
reports, and how to obtain their most recent patches/versions:
SAP Note #1793345 - Sizing for SAP Suite on HANA
SAP Note #1872170 - Suite on HANA and S/4 HANA sizing report
SAP Note #2121330 - FAQ: SAP BW on HANA Sizing Report
SAP Note #1736976 - Sizing Report for BW on HANA
SAP Note #2296290 - New Sizing Report for BW on HANA
For green field implementations, SAP Quick Sizer is available to calculate memory requirements of the
implementation of SAP software on top of HANA.
Memory requirements for HANA are increasing as data volume grows, so you want to be aware of the memory
consumption now and be able to predict what it is going to be in the future. Based on the memory requirements,
you can then map your demand into one of the HANA Large Instance SKUs.
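As a hedged sketch, mapping a sized memory requirement onto a SKU might look like the following. The memory figures in the dictionary are illustrative assumptions only; the authoritative SKU list and sizes are in SAP Note #2316233 and may change over time.

```python
# Hedged sketch: pick the smallest HANA Large Instance SKU that covers a
# sized memory requirement. SKU memory sizes below are illustrative.

HLI_SKUS_GB = {
    "S72": 768, "S144": 1536, "S192": 2048,
    "S192m": 4096, "S384": 4096, "S768": 16384, "S960": 20480,
}

def smallest_fitting_sku(required_gb):
    """Return the smallest SKU whose memory covers the requirement,
    or None if the requirement exceeds the largest single unit."""
    fitting = [(mem, sku) for sku, mem in HLI_SKUS_GB.items() if mem >= required_gb]
    if not fitting:
        return None
    return min(fitting)[1]   # smallest memory that still fits

# A BW system sized at 1.2 TB of memory demand:
print(smallest_fitting_sku(1200))   # -> S144
```
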
Requirements
This list assembles the requirements for running SAP HANA on Azure (Large Instances).
Microsoft Azure:
An Azure subscription that can be linked to SAP HANA on Azure (Large Instances).
Microsoft Premier Support Contract. See SAP Support Note #2015553 SAP on Microsoft Azure: Support
Prerequisites for specific information related to running SAP in Azure. Using HANA large instance units with
384 and more CPUs, you also need to extend the Premier Support contract to include Azure Rapid Response
(ARR).
Awareness of the HANA large instances SKUs you need after performing a sizing exercise with SAP.
Network Connectivity:
Azure ExpressRoute between on-premises to Azure: To connect your on-premises datacenter to Azure, make
sure to order at least a 1 Gbps connection from your ISP.
Operating System:
Licenses for SUSE Linux Enterprise Server 12 for SAP Applications.
NOTE
The Operating System delivered by Microsoft is not registered with SUSE, nor is it connected with an SMT instance.
SUSE Linux Subscription Management Tool (SMT) deployed in Azure on an Azure VM. This provides the ability
for SAP HANA on Azure (Large Instances) to be registered and updated by SUSE (as there is no direct
internet access within the HANA Large Instances data center).
Licenses for Red Hat Enterprise Linux 6.7 or 7.2 for SAP HANA.
NOTE
The Operating System delivered by Microsoft is not registered with Red Hat, nor is it connected to a Red Hat Subscription
Manager Instance.
Red Hat Subscription Manager deployed in Azure on an Azure VM. The Red Hat Subscription Manager provides
the ability for SAP HANA on Azure (Large Instances) to be registered and updated by Red Hat (as
there is no direct internet access from within the tenant deployed on the Azure Large Instance stamp).
SAP requires you to have a support contract with your Linux provider as well. This requirement is not removed
by the solution of HANA Large Instances or the fact that you run Linux in Azure. Unlike with some of the Linux
Azure gallery images, the service fee is NOT included in the solution offer of HANA Large Instances. It is your
responsibility as a customer to fulfill the requirements of SAP regarding support contracts with the Linux distributor.
For SUSE Linux, look up the requirements of support contract in SAP Note #1984787 - SUSE LINUX
Enterprise Server 12: Installation notes and SAP Note #1056161 - SUSE Priority Support for SAP
applications.
For Red Hat Linux, you need to have the correct subscription levels that include support and service
(updates to the operating systems of HANA Large Instances). Red Hat recommends getting an "RHEL for
SAP Business Applications" subscription. Regarding support and services, check SAP Note #2002167 -
Red Hat Enterprise Linux 7.x: Installation and Upgrade and SAP Note #1496410 - Red Hat Enterprise
Linux 6.x: Installation and Upgrade for details.
Database:
Licenses and software installation components for SAP HANA (platform or enterprise edition).
Applications:
Licenses and software installation components for any SAP applications connecting to SAP HANA and related
SAP support contracts.
Licenses and software installation components for any non-SAP applications used in relation to SAP HANA on
Azure (Large Instances) environment and related support contracts.
Skills:
Experience and knowledge on Azure IaaS and its components.
Experience and knowledge on deploying SAP workload in Azure.
SAP HANA installation-certified personnel.
SAP architect skills to design High Availability and Disaster Recovery around SAP HANA.
SAP:
The expectation is that you are an SAP customer and have a support contract with SAP.
Especially for implementations on the Type II class of HANA Large Instance SKUs, it is highly recommended to
consult with SAP on versions of SAP HANA and eventual configurations on large sized scale-up hardware.
Storage
The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP HANA on Azure Service
Management through SAP recommended guidelines, documented in the SAP HANA Storage Requirements white
paper.
The HANA Large Instances of the Type I class come with four times the memory volume as storage volume. For the
Type II class of HANA Large Instance units, the storage is not going to be four times more. The units come with a
volume intended for storing HANA transaction log backups. Find more details in How to install and
configure SAP HANA (Large Instances) on Azure.
See the following table in terms of storage allocation. The table lists roughly the capacity for the different volumes
provided with the different HANA Large Instance units.
Actual deployed volumes may vary a bit based on deployment and tool that is used to show the volume sizes.
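The Type I rule of thumb described above (storage provisioned at roughly four times the memory) can be expressed as a trivial helper. The memory figures used in the loop are assumptions for illustration, not exact SKU specifications.

```python
# Sketch of the Type I storage rule: storage volume is roughly four
# times the memory of the unit. Memory figures below are illustrative.

def type1_storage_tb(memory_tb):
    """Approximate total storage for a Type I HANA Large Instance unit."""
    return 4 * memory_tb

for mem in (0.75, 1.5, 2.0):   # assumed memory sizes of some Type I units
    print(f"{mem} TB memory -> ~{type1_storage_tb(mem)} TB storage")
```
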
If you subdivide a HANA Large Instance SKU, the possible division pieces are sized per memory partition (in GB),
each with corresponding hana/data, hana/log, hana/shared, and hana/log/backup volume sizes.
Networking
The architecture of Azure Networking is a key component to successful deployment of SAP applications on HANA
Large Instances. Typically, SAP HANA on Azure (Large Instances) deployments have a larger SAP landscape with
several different SAP solutions with varying sizes of databases, CPU resource consumption, and memory
utilization. Likely not all of those SAP systems are based on SAP HANA, so your SAP landscape would probably be
a hybrid that uses:
Deployed SAP systems on-premises. Due to their sizes, these systems cannot currently be hosted in Azure; a
classic example would be a production SAP ERP system running on Microsoft SQL Server (as the database)
that requires more CPU or memory resources than Azure VMs can provide.
Deployed SAP HANA-based SAP systems on-premises.
Deployed SAP systems in Azure VMs. These systems could be development, testing, sandbox, or production
instances for any of the SAP NetWeaver-based applications that can successfully deploy in Azure (on VMs),
based on resource consumption and memory demand. These systems also could be based on databases like
SQL Server (see SAP Support Note #1928533 SAP Applications on Azure: Supported Products and Azure VM
types) or SAP HANA (see SAP HANA Certified IaaS Platforms).
Deployed SAP application servers in Azure (on VMs) that leverage SAP HANA on Azure (Large Instance) in
Azure Large Instance stamps.
While a hybrid SAP landscape (with four or more different deployment scenarios) is typical, there are many
cases of customers running their complete SAP landscape in Azure. As Microsoft Azure VMs become more
powerful, the number of customers moving all their SAP solutions to Azure is increasing.
Azure networking in the context of SAP systems deployed in Azure is not complicated. It is based on the following
principles:
Azure Virtual Networks (VNets) need to be connected to the Azure ExpressRoute circuit that connects to the
on-premises network.
An ExpressRoute circuit connecting on-premises to Azure should usually have a bandwidth of 1 Gbps or higher.
This minimum allows adequate bandwidth for transferring data between on-premises systems and systems
running on Azure VMs, as well as for connections to Azure systems from end users on-premises.
All SAP systems in Azure need to be set up in Azure VNets to communicate with each other.
Active Directory and DNS hosted on-premises are extended into Azure through ExpressRoute.
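The 1 Gbps minimum can be sanity-checked with a back-of-the-envelope calculation. The efficiency factor below is an assumption for illustration; real throughput depends on latency, protocol overhead, and the gateway SKU in the path:

```python
# Rough transfer-time estimate over an ExpressRoute circuit.
# `efficiency` models protocol overhead and is an illustrative assumption.
def transfer_hours(data_gb, bandwidth_gbps, efficiency=0.8):
    """Hours to move data_gb gigabytes over a bandwidth_gbps link,
    assuming only `efficiency` of the nominal bandwidth is usable."""
    seconds = (data_gb * 8) / (bandwidth_gbps * efficiency)
    return seconds / 3600

# Moving a 1 TB database export over a 1 Gbps circuit at 80% efficiency:
print(round(transfer_hours(1024, 1.0), 1))  # 2.8 hours
```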
NOTE
From a billing point of view, a single Azure subscription can be linked to only one tenant in a Large Instance
stamp in a specific Azure region, and conversely a single Large Instance stamp tenant can be linked to only one
Azure subscription. This is no different from any other billable objects in Azure.
Deploying SAP HANA on Azure (Large Instances) in multiple different Azure regions results in a separate tenant
being deployed in each region's Large Instance stamp. However, you can run both under the same Azure
subscription as long as these instances are part of the same SAP landscape.
IMPORTANT
Only the Azure Resource Manager deployment model is supported with SAP HANA on Azure (Large Instances).
NOTE
The throughput an Azure gateway provides differs between the two use cases (see About VPN Gateway). The maximum
throughput achievable with a VNet gateway is 10 Gbps, using an ExpressRoute connection. Keep in mind that copying
files between an Azure VM residing in an Azure VNet and a system on-premises (as a single copy stream) does not achieve
the full throughput of the different gateway SKUs. To leverage the complete bandwidth of the VNet gateway, you must
either copy different files in multiple parallel streams or use parallel streams for a single file.
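A rough way to reason about the multi-stream advice above; the per-stream throughput figure is an assumption for illustration, not a measured value:

```python
import math

# Estimate how many parallel copy streams are needed to approach the
# gateway's aggregate throughput, given that a single stream tops out
# well below it. The per-stream figure is an illustrative assumption.
def streams_needed(gateway_gbps, per_stream_gbps):
    return math.ceil(gateway_gbps / per_stream_gbps)

# A 10 Gbps gateway path with roughly 1.5 Gbps achievable per stream:
print(streams_needed(10.0, 1.5))  # 7
```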
IMPORTANT
Given the overall network traffic between the SAP application and database layers, only the HighPerformance or
UltraPerformance gateway SKUs for VNets are supported for connecting to SAP HANA on Azure (Large Instances). For
HANA Large Instance Type II SKUs, only the UltraPerformance gateway SKU is supported as the Azure VNet gateway.
NOTE
For running SAP landscapes in Azure, connect to the MSEE closest to the Azure region in the SAP landscape. Azure Large
Instance stamps are connected through dedicated MSEE devices to minimize network latency between Azure VMs in Azure
IaaS and Large Instance stamps.
The VNet gateway for the Azure VMs hosting SAP application instances is connected to that ExpressRoute circuit,
and the same VNet is connected to a separate MSEE router dedicated to connecting to Large Instance stamps.
This is a straightforward example of a single SAP system, where the SAP application layer is hosted in Azure and
the SAP HANA database runs on SAP HANA on Azure (Large Instances). The assumption is that the VNet gateway
throughput of 2 Gbps or 10 Gbps does not represent a bottleneck.
Multiple SAP systems or large SAP systems
If multiple SAP systems or large SAP systems are deployed connecting to SAP HANA on Azure (Large Instances),
it's reasonable to assume the throughput of the VNet gateway may become a bottleneck. In such a case, you need
to split the application layers into multiple Azure VNets. It also might be advisable to create special VNets
that connect to HANA Large Instances for cases like:
Performing backups directly from the HANA Instances in HANA Large Instances to a VM in Azure that hosts
NFS shares
Copying large backups or other files from HANA Large Instance units to disk space managed in Azure.
Using separate VNets that host the VMs managing the storage avoids the impact that large file or data transfers
from HANA Large Instances to Azure would have on the VNet gateway serving the VMs that run the SAP application layer.
For a more scalable network architecture:
Leverage multiple Azure VNets for a single, larger SAP application layer.
Deploy one separate Azure VNet for each SAP system deployed, compared to combining these SAP systems
in separate subnets under the same VNet.
A more scalable networking architecture for SAP HANA on Azure (Large Instances):
Deploying the SAP application layer, or components of it, over multiple Azure VNets as shown above introduces
unavoidable latency overhead in the communication between the applications hosted in those Azure VNets. By
default, network traffic between Azure VMs located in different VNets routes through the MSEE routers in this
configuration. However, since September 2016 this routing can be optimized: the way to cut down the latency in
communication between two VNets is to peer Azure VNets within the same region, even if those VNets are in
different subscriptions. Using Azure VNet peering, VMs in two different Azure VNets communicate directly over the
Azure network backbone, showing latency similar to that of VMs within the same VNet. Traffic addressing IP
address ranges that are connected through the Azure VNet gateway, by contrast, is routed through the individual
VNet gateway of the VNet. You can get details about Azure VNet peering in the article VNet peering.
Routing in Azure
There are two important network routing considerations for SAP HANA on Azure (Large Instances):
1. SAP HANA on Azure (Large Instances) can only be accessed by Azure VMs through the dedicated ExpressRoute
connection, not directly from on-premises. As a result, administration clients and applications needing direct
access, such as SAP Solution Manager running on-premises, cannot connect to the SAP HANA database.
2. SAP HANA on Azure (Large Instances) units have an IP address assigned from the Server IP Pool address
range that you as the customer submitted (see SAP HANA (large instances) Infrastructure and connectivity on
Azure for details). This IP address is accessible through the Azure subscriptions and the ExpressRoute circuit
that connects Azure VNets to HANA on Azure (Large Instances). The IP address assigned out of that Server IP
Pool address range is assigned directly to the hardware unit and is no longer NAT'ed, as was the case in the
first deployments of this solution.
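The first 30 addresses of the Server IP Pool range are reserved for internal use (as detailed later in this document); the effect on a /24 pool can be sketched with Python's standard `ipaddress` module. The 10.20.30.0/24 range is hypothetical:

```python
import ipaddress

# Usable server addresses in a /24 Server IP Pool when the first 30
# IP addresses are reserved. The example range is hypothetical.
pool = ipaddress.ip_network("10.20.30.0/24")
all_ips = list(pool)      # all 256 addresses of the /24
usable = all_ips[30:]     # skip the 30 reserved addresses
print(len(usable), usable[0])  # 226 10.20.30.30
```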
NOTE
If you need to connect to SAP HANA on Azure (Large Instances) in a data warehouse scenario, where applications
and/or end users need to connect directly to the SAP HANA database, another networking component must be used: a
reverse proxy to route data to and from the database. For example, F5 BIG-IP, or NGINX with Traffic Manager,
deployed in Azure as a virtual firewall/traffic-routing solution.
IMPORTANT
If multiple ExpressRoute circuits are used, AS Path prepending and Local Preference BGP settings should be used to ensure
proper routing of traffic.
SAP HANA (large instances) infrastructure and
connectivity on Azure
8/11/2017 23 min to read
Some definitions upfront, before reading this guide. In SAP HANA (large instances) overview and architecture on
Azure, we introduced two different classes of HANA Large Instance units:
S72, S72m, S144, S144m, S192, and S192m, which we refer to as the 'Type I class' of SKUs.
S384, S384m, S384xm, S576, S768, and S960, which we refer to as the 'Type II class' of SKUs.
These class specifiers are used throughout the HANA Large Instance documentation to refer to different
capabilities and requirements based on the HANA Large Instance SKU.
Other definitions we use frequently are:
Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI certified and dedicated to run
SAP HANA instances within Azure.
SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run HANA instances on SAP
HANA TDI certified hardware that is deployed in Large Instance stamps in different Azure regions. The related
term HANA Large Instance is short for SAP HANA on Azure (Large Instances) and is widely used throughout this
technical deployment guide.
After the purchase of SAP HANA on Azure (Large Instances) is finalized between you and the Microsoft enterprise
account team, the following information is required by Microsoft to deploy HANA Large Instance Units:
Customer name
Business contact information (including e-mail address and phone number)
Technical contact information (including e-mail address and phone number)
Technical networking contact information (including e-mail address and phone number)
Azure deployment region (West US, East US, Australia East, Australia Southeast, West Europe, and North
Europe as of July 2017)
Confirm SAP HANA on Azure (Large Instances) SKU (configuration)
As already detailed in the Overview and Architecture document for HANA Large Instances, the following are
required for every Azure region being deployed to:
A /29 IP address range for ER-P2P Connections that connect Azure VNets to HANA Large Instances
A /24 CIDR Block used for the HANA Large Instances Server IP Pool
The IP address range values used in the VNet Address Space attribute of every Azure VNet that connects to the
HANA Large Instances
Data for each of HANA Large Instances system:
Desired hostname - ideally with fully qualified domain name.
Desired IP address for the HANA Large Instance unit out of the Server IP Pool address range. Keep in
mind that the first 30 IP addresses in the Server IP Pool address range are reserved for internal usage
within HANA Large Instances.
SAP HANA SID name for the SAP HANA instance (required to create the necessary SAP HANA-related
disk volumes). The HANA SID is required for creating the permissions on the NFS volumes that get
attached to the HANA Large Instance unit. It is also used as one of the name components of
the disk volumes that get mounted. If you want to run more than one HANA instance on the unit, you
need to list multiple HANA SIDs. Each one gets a separate set of volumes assigned.
The groupid the hana-sidadm user has in the Linux OS, which is required to create the necessary SAP HANA-
related disk volumes. The SAP HANA installation usually creates the sapsys group with a group id of
1001; the hana-sidadm user is part of that group.
The userid the hana-sidadm user has in the Linux OS is required to create the necessary SAP HANA-
related disk volumes. If you are running multiple HANA instances on the unit, you need to list all the
adm users
Azure subscription ID for the Azure subscription to which the SAP HANA on Azure (Large Instances) units are going
to be directly connected. This subscription ID references the Azure subscription that is going to be charged
for the HANA Large Instance unit(s).
After you provide the information, Microsoft provisions SAP HANA on Azure (Large Instances) and will return the
information necessary to link your Azure VNets to HANA Large Instances and to access the HANA Large Instance
units.
Looking closer at the Azure VNet side, there is a need for:
The definition of an Azure VNet that is going to be used to deploy the VMs of the SAP application layer into.
That automatically means a default subnet in the Azure VNet is defined, which is the one used to deploy
the VMs into.
The Azure VNet that's created needs to have at least one VM subnet and one ExpressRoute Gateway subnet.
These subnets should be assigned the IP address ranges as specified and discussed in the following sections.
So, let's look a bit closer at the Azure VNet creation for HANA Large Instances.
Creating the Azure VNet for HANA Large Instances
NOTE
The Azure VNet for HANA Large Instance must be created using the Azure Resource Manager deployment model. The older
Azure deployment model, commonly known as classic deployment model, is not supported with the HANA Large Instance
solution.
The VNet can be created using the Azure portal, PowerShell, Azure template, or Azure CLI (see Create a virtual
network using the Azure portal). In the following example, we look into a VNet created through the Azure portal.
Looking at the definitions of an Azure VNet through the Azure portal, let's see how those definitions relate to
the different IP address ranges we list. When we talk about the Address Space, we mean the address space that
the Azure VNet is allowed to use. This address space is also the address range that the VNet uses for BGP route
propagation. In the example case, with an Address Space of 10.16.0.0/16, the Azure VNet was given a rather large
and wide IP address range to use. This means all the IP address ranges of subsequent subnets within this VNet
can have their ranges within that Address Space. Such a large address range is usually not recommended for a
single VNet in Azure. But going a step further, let's look at the subnets defined in the Azure VNet:
In this example, the VNet has a first VM subnet (here called 'default') and a subnet called 'GatewaySubnet'. In
the following sections, we refer to the IP address range of the subnet called 'default' as the Azure VM subnet
IP address range, and to the IP address range of the gateway subnet as the VNet Gateway Subnet IP address range.
In the case demonstrated by the two graphics above, you see that the VNet Address Space covers both the
Azure VM subnet IP address range and the VNet Gateway Subnet IP address range.
In other cases where you need to conserve or be specific with your IP address ranges, you can restrict the VNet
Address Space of a VNet to the specific ranges being used by each subnet. In this case, you can define the VNet
Address Space of a VNet as multiple specific ranges as shown here:
In this case, the VNet Address Space has two spaces defined. These two spaces are identical to the IP address
ranges defined for the Azure VM subnet IP address range and the VNet Gateway Subnet IP address range.
You can use any naming standard you like for these tenant subnets (VM subnets). However, there must always
be one, and only one, gateway subnet for each VNet that connects to the SAP HANA on Azure (Large
Instances) ExpressRoute circuit. And this gateway subnet must always be named "GatewaySubnet" to
ensure proper placement of the ExpressRoute gateway.
WARNING
It is critical that the gateway subnet always is named "GatewaySubnet."
Multiple VM subnets may be used, even utilizing non-contiguous address ranges. But as mentioned previously,
these address ranges must be covered by the VNet Address Space of the VNet either in aggregated form or in a
list of the exact ranges of the VM subnets and the gateway subnet.
Summarizing the important fact about an Azure VNet that connects to HANA Large Instances:
You need to submit to Microsoft the VNet Address Space when performing an initial deployment of HANA
Large Instances.
The VNet Address Space can be one larger range that covers the range for Azure VM subnet IP address
range(s) and the VNet Gateway Subnet IP address range.
Or you can submit as VNet Address Space multiple ranges that cover the different IP address ranges of VM
subnet IP address range(s) and the VNet Gateway Subnet IP address range.
The defined VNet Address Space is used for BGP routing propagation.
The name of the Gateway subnet must be: "GatewaySubnet."
The VNet Address Space is used as a filter on the HANA Large Instance side to allow or disallow traffic to the
HANA Large Instance units from Azure. If the BGP routing information of the Azure VNet and the IP address
ranges configured for filtering on the HANA Large Instance side do not match, issues in connectivity can arise.
There are some details about the gateway subnet that are discussed further down in the section 'Connecting a
VNet to HANA Large Instance ExpressRoute'.
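The coverage rule in the summary above can be checked locally before submitting anything to Microsoft. A minimal sketch using Python's standard `ipaddress` module, with the example ranges from this article:

```python
import ipaddress

# Check that each subnet falls within the submitted VNet Address Space.
# Ranges mirror the article's example and are otherwise hypothetical.
def covered(subnet, address_space):
    net = ipaddress.ip_network(subnet)
    return any(net.subnet_of(ipaddress.ip_network(s)) for s in address_space)

space = ["10.0.1.0/24", "10.0.2.0/28"]
print(covered("10.0.1.0/24", space))  # VM subnet: True
print(covered("10.0.2.0/28", space))  # GatewaySubnet: True
print(covered("10.0.3.0/24", space))  # not submitted: False, traffic filtered
```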
Different IP address ranges to be defined
We already introduced some of the IP address ranges necessary to deploy HANA Large Instances in earlier
sections. But there are some more IP address ranges that are important. The following IP address ranges, not all
of which need to be submitted to Microsoft, must be defined before sending a request for initial deployment:
VNet Address Space: As introduced earlier, this is the IP address range(s) you have assigned (or plan to
assign) to the address space parameter of the Azure Virtual Network(s) (VNets) connecting to the SAP HANA
Large Instance environment. It is recommended that this Address Space parameter is a multi-line value
comprised of the Azure VM subnet range(s) and the Azure gateway subnet range, as shown in the graphics
earlier. This range must NOT overlap with your on-premises, Server IP Pool, or ER-P2P address ranges. How
to get this IP address range(s)? Your corporate network team or service provider should provide one or
multiple IP address ranges that are not used inside your network. Example: If your Azure VM subnet (see
earlier) is 10.0.1.0/24, and your Azure gateway subnet (see following) is 10.0.2.0/28, then your Azure VNet
Address Space is recommended to be two lines: 10.0.1.0/24 and 10.0.2.0/28. Although the Address Space
values can be aggregated, it is recommended to match them to the subnet ranges to avoid accidental future
reuse of unused IP address ranges within larger address spaces elsewhere in your network. The VNet Address
Space is an IP address range that needs to be submitted to Microsoft when asking for an initial deployment.
Azure VM subnet IP address range: This IP address range, as discussed earlier, is the one you have
assigned (or plan to assign) to the Azure VNet subnet parameter in your Azure VNet connecting to the SAP
HANA Large Instance environment. This IP address range is used to assign IP addresses to your Azure VMs.
The IP addresses out of this range are allowed to connect to your SAP HANA Large Instance server(s). If
needed, multiple Azure VM subnets can be used. A /24 CIDR block is recommended by Microsoft for each
Azure VM subnet. This address range must be a part of the values used in the Azure VNet Address Space.
How to get this IP address range? Your corporate network team or service provider should provide an IP
address range that is not currently used inside your network.
VNet Gateway Subnet IP address range: Depending on the features you plan to use, the recommended
size would be:
Ultra-performance ExpressRoute gateway: /26 address block - required for the Type II class of SKUs
Co-existence with VPN and ExpressRoute using a High-performance ExpressRoute gateway (or smaller):
/27 address block
All other situations: /28 address block
This address range must be a part of the Azure VNet Address Space values that you need to submit to
Microsoft. How to get this IP address range? Your corporate network team or service provider should
provide an IP address range that is not currently used inside your network.
Address range for ER-P2P connectivity: This range is the IP range for your SAP HANA Large Instance
ExpressRoute (ER) P2P connection. This range of IP addresses must be a /29 CIDR IP address range. It
must NOT overlap with your on-premises or other Azure IP address ranges. This IP address range is used to
set up the ER connectivity from your Azure VNet ExpressRoute gateway to the SAP HANA Large Instance
servers. How to get this IP address range? Your corporate network team or service provider should provide
an IP address range that is not currently used inside your network. This range is an IP address range that
needs to be submitted to Microsoft when asking for an initial deployment.
Server IP Pool Address Range: This IP address range is used to assign the individual IP address to HANA
Large Instance servers. The recommended subnet size is a /24 CIDR block, but if needed it can be smaller,
down to a minimum of 64 provided IP addresses. From this range, the first 30 IP addresses are reserved for
use by Microsoft. Ensure this fact is accounted for when choosing the size of the range. This range must
NOT overlap with your on-premises or other Azure IP addresses. How to get this IP address range? Your
corporate network team or service provider should provide an unused, unique CIDR block (a /24 is
recommended) to be used for assigning the specific IP addresses needed for SAP HANA on Azure (Large
Instances). This range is an IP address range that needs to be submitted to Microsoft when asking for an
initial deployment.
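The recommended gateway subnet block sizes listed above translate into the following address counts; a small sketch using Python's `ipaddress` module (the 10.0.2.0 base address is an arbitrary illustrative placement):

```python
import ipaddress

# Address counts for the recommended gateway subnet sizes.
# The base address 10.0.2.0 is an illustrative placement only.
for scenario, prefix in [
    ("UltraPerformance ExpressRoute gateway (Type II SKUs)", 26),
    ("VPN + ExpressRoute co-existence, HighPerformance or smaller", 27),
    ("All other situations", 28),
]:
    block = ipaddress.ip_network("10.0.2.0/{}".format(prefix))
    print("/{}: {} addresses - {}".format(prefix, block.num_addresses, scenario))
```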
Though you need to define and plan the IP address ranges above, not all of them need to be transmitted to
Microsoft. To summarize, the IP address ranges you are required to name to Microsoft are:
Azure VNet Address Space(s)
Address range for ER-P2P connectivity
Server IP Pool Address Range
Adding additional VNets that need to connect to HANA Large Instances requires you to submit the new Azure
VNet Address Space you're adding to Microsoft.
Following is an example of the different ranges you need to configure and eventually provide to Microsoft. As
you can see, the value for the Azure VNet Address Space is not aggregated in the first example, but is defined
from the ranges of the first Azure VM subnet IP address range and the VNet Gateway Subnet IP address range.
Using multiple VM subnets within the Azure VNet would work accordingly, by configuring and submitting the
additional IP address ranges of the additional VM subnet(s) as part of the Azure VNet Address Space.
You also have the possibility of aggregating the data you submit to Microsoft. In that case, the Address Space
of the Azure VNet only includes one space. Using the IP address ranges from the example earlier, the aggregated
VNet Address Space could look like:
As you can see above, instead of the two smaller ranges that defined the address space of the Azure VNet, we have
one larger range that covers 4096 IP addresses. Such a large definition of the Address Space leaves some rather
large ranges unused. Since the VNet Address Space value(s) are used for BGP route propagation, usage of the
unused ranges on-premises or elsewhere in your network can cause routing issues. So, it's recommended to keep
the Address Space tightly aligned with the actual subnet address space used. If needed, you can always add new
Address Space values later, without incurring downtime on the VNet.
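The aggregation trade-off can be illustrated with Python's `ipaddress` module, using the hypothetical example ranges from earlier in this article:

```python
import ipaddress

# Aggregating 10.0.1.0/24 and 10.0.2.0/28 into a single /20 covers 4096
# addresses, most of which no subnet actually uses. Ranges are hypothetical.
subnets = [ipaddress.ip_network("10.0.1.0/24"), ipaddress.ip_network("10.0.2.0/28")]
aggregate = ipaddress.ip_network("10.0.0.0/20")

assert all(s.subnet_of(aggregate) for s in subnets)  # both fit the /20
used = sum(s.num_addresses for s in subnets)
print(aggregate.num_addresses, "total;", used, "used;",
      aggregate.num_addresses - used, "unused")  # 4096 total; 272 used; 3824 unused
```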
IMPORTANT
Each IP address range (ER-P2P, Server IP Pool, Azure VNet Address Space) must NOT overlap with the others or with any
other range used somewhere else in your network; each must be discrete and, as the two graphics earlier show, may not
be a subnet of any other range. If overlaps occur between ranges, the Azure VNet may not connect to the ExpressRoute
circuit.
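A pairwise overlap check along these lines can be scripted before submitting the ranges; the values below are hypothetical:

```python
import ipaddress

# Pairwise overlap check across the range types named above.
# All concrete values are hypothetical examples.
ranges = {
    "ER-P2P":             ipaddress.ip_network("192.168.100.0/29"),
    "Server IP Pool":     ipaddress.ip_network("10.20.30.0/24"),
    "VNet Address Space": ipaddress.ip_network("10.0.0.0/23"),
}
names = list(ranges)
clashes = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
           if ranges[a].overlaps(ranges[b])]
print("overlapping pairs:", clashes)  # [] when the plan is valid
```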
NOTE
This step can take up to 30 minutes to complete, as the new gateway is created in the designated Azure subscription and
then connected to the specified Azure VNet.
If a gateway already exists, check whether it is an ExpressRoute gateway. If not, the gateway must be
deleted and recreated as an ExpressRoute gateway. If an ExpressRoute gateway is already established and the
Azure VNet is already connected to the ExpressRoute circuit for on-premises connectivity, proceed to the Linking
VNets section below.
Use either the (new) Azure portal or PowerShell to create an ExpressRoute VPN gateway connected to your
VNet.
If you use the Azure portal, add a new Virtual Network Gateway and then select ExpressRoute as the
gateway type.
If you choose PowerShell instead, first download and use the latest Azure PowerShell SDK to ensure an
optimal experience. The following commands create an ExpressRoute gateway. The values preceded by a
$ are user-defined variables that need to be updated with your specific information.
# These values are used to create the gateway, update for how you wish the GW components to be named
$myGWName = "VNet01GW"
$myGWConfig = "VNet01GWConfig"
$myGWPIPName = "VNet01GWPIP"
$myGWSku = "HighPerformance" # Supported values for HANA Large Instances are: HighPerformance or UltraPerformance
In this example, the HighPerformance gateway SKU was used. HighPerformance and UltraPerformance are the
only gateway SKUs that are supported for SAP HANA on Azure (Large Instances).
IMPORTANT
For HANA Large Instances of the SKU types S384, S384m, S384xm, S576, S768, and S960 (Type II class SKUs), the usage of
the UltraPerformance Gateway SKU is mandatory.
Linking VNets
Now that the Azure VNet has an ExpressRoute gateway, use the authorization information provided by
Microsoft to connect the ExpressRoute gateway to the SAP HANA on Azure (Large Instances) ExpressRoute circuit
created for this connectivity. This step can be performed using the Azure portal or PowerShell. The portal is
recommended; however, PowerShell instructions are as follows.
You execute the following commands for each VNet gateway, using a different AuthGUID for each connection.
The first two entries shown in the script below come from the information provided by Microsoft. Also, the
AuthGUID is specific to every VNet and its gateway. This means, if you want to add another Azure VNet, you need
to get another AuthGUID for the ExpressRoute circuit that connects HANA Large Instances into Azure.
# Populate with information provided by Microsoft Onboarding team
$PeerID = "/subscriptions/9cb43037-9195-4420-a798-f87681a0e380/resourceGroups/Customer-USE-Circuits/providers/Microsoft.Network/expressRouteCircuits/Customer-USE01"
$AuthGUID = "76d40466-c458-4d14-adcf-3d1b56d1cd61"
# Create a new connection between the ER Circuit and your Gateway using the Authorization
# ($myConnectionName, $myGroupName, and $myLocation are user-defined values)
$gw = Get-AzureRmVirtualNetworkGateway -Name $myGWName -ResourceGroupName $myGroupName
$connection = New-AzureRmVirtualNetworkGatewayConnection -Name $myConnectionName -ResourceGroupName $myGroupName -Location $myLocation -VirtualNetworkGateway1 $gw -PeerId $PeerID -ConnectionType ExpressRoute -AuthorizationKey $AuthGUID
If you want to connect the gateway to multiple ExpressRoute circuits that are associated with your subscription,
you may need to execute this step more than once. For example, you are likely going to connect the same VNet
gateway to the ExpressRoute circuit that connects the VNet to your on-premises network.
Adding VNets
After initially connecting one or more Azure VNets, you might want to add additional ones that access SAP HANA
on Azure (Large Instances). First, submit an Azure support request; in that request, include both the specific
information identifying the particular Azure deployment and the IP address space range(s) of the Azure VNet
Address Space. SAP HANA on Azure Service Management then provides the necessary information you need to
connect the additional VNets and ExpressRoute. For every VNet, you need a unique authorization key to connect
to the ExpressRoute circuit to HANA Large Instances.
Steps to add a new Azure VNet:
1. Complete the first step in the onboarding process, see the Creating Azure VNet section.
2. Complete the second step in the onboarding process, see the Creating gateway subnet section.
3. To connect your additional VNets to the HANA Large Instance ExpressRoute circuit, open an Azure support
request with information on the new VNet and request a new Authorization Key.
4. Once notified that the authorization is complete, use the Microsoft-provided authorization information to
complete the third step in the onboarding process, see the Linking VNets section.
Deleting a subnet
To remove a VNet subnet, either the Azure portal, PowerShell, or the CLI can be used. If your Azure VNet IP
address range/Azure VNet Address Space was an aggregated range, there is no follow-up for you with Microsoft,
other than that the VNet is still propagating a BGP route address space that includes the deleted subnet. If you
defined the Azure VNet IP address range/Azure VNet Address Space as multiple IP address ranges, of which one was
assigned to the deleted subnet, you should delete that range from your VNet Address Space and subsequently
inform SAP HANA on Azure Service Management to remove it from the ranges that SAP HANA on Azure (Large
Instances) is allowed to communicate with.
While there isn't yet specific, dedicated Azure.com guidance on removing subnets, the process for removing
subnets is the reverse of the process for adding them. See the article Create a virtual network using the Azure
portal for more information on creating subnets.
Deleting a VNet
Use either the Azure portal, PowerShell, or the CLI to delete a VNet. SAP HANA on Azure Service Management
removes the existing authorizations on the SAP HANA on Azure (Large Instances) ExpressRoute circuit and
removes the Azure VNet IP address range/Azure VNet Address Space for the communication with HANA Large
Instances.
Once the VNet has been removed, open an Azure support request to provide the IP address space range(s) to be
removed.
While there isn't yet specific, dedicated Azure.com guidance on removing VNets, the process for removing VNets
is the reverse of the process for adding them, which is described above. See the articles Create a virtual network
using the Azure portal and Create a virtual network using PowerShell for more information on creating VNets.
To ensure everything is removed, delete the following items:
The ExpressRoute connection, VNet gateway, VNet gateway public IP, and VNet.
Following are some important definitions to know before you read this guide. In SAP HANA (large instances)
overview and architecture on Azure we introduced two different classes of HANA Large Instance units with:
S72, S72m, S144, S144m, S192, and S192m, which we refer to as the 'Type I class' of SKUs.
S384, S384m, S384xm, S576, S768, and S960, which we refer to as the 'Type II class' of SKUs.
The class specifier is going to be used throughout the HANA Large Instance documentation to eventually refer to
different capabilities and requirements based on HANA Large Instance SKUs.
Other definitions we use frequently are:
Large Instance stamp: A hardware infrastructure stack that is SAP HANA TDI certified and dedicated to run
SAP HANA instances within Azure.
SAP HANA on Azure (Large Instances): Official name for the offer in Azure to run HANA instances on SAP
HANA TDI certified hardware that is deployed in Large Instance stamps in different Azure regions. The related
term HANA Large Instance is short for SAP HANA on Azure (Large Instances) and is widely used throughout this
technical deployment guide.
The installation of SAP HANA is your responsibility. You can start the activity after handoff of a new SAP HANA
on Azure (Large Instances) server and after the connectivity between your Azure VNet(s) and the HANA Large
Instance unit(s) has been established.
NOTE
Per SAP policy, the installation of SAP HANA must be performed either by a person certified to perform SAP HANA
installations (someone who has passed the Certified SAP Technology Associate, SAP HANA Installation certification
exam) or by an SAP-certified system integrator (SI).
Especially when planning to install HANA 2.0, check SAP Support Note #2235581 - SAP HANA: Supported
Operating Systems again to make sure that the OS is supported with the SAP HANA release you decided to
install. Note that the set of operating systems supported for HANA 2.0 is more restricted than the set supported for HANA 1.0.
Networking
We assume that you followed the recommendations in designing your Azure VNets and connecting those VNets to
the HANA Large Instances as described in these documents:
SAP HANA (large Instance) Overview and Architecture on Azure
SAP HANA (large instances) Infrastructure and connectivity on Azure
Some details about the networking of the individual units are worth mentioning. Every HANA Large Instance unit
comes with two or three IP addresses that are assigned to two or three NIC ports of the unit. Three IP addresses
are used in HANA scale-out configurations and in the HANA System Replication scenario. One of the IP addresses
assigned to the NICs of the unit comes out of the Server IP pool that was described in SAP HANA (large Instance)
Overview and Architecture on Azure.
The distribution for units with two IP addresses assigned should look like this:
eth0.xx should have an IP address assigned that is out of the Server IP Pool address range that you submitted to
Microsoft. This IP address should be maintained in /etc/hosts of the OS.
eth1.xx should have an IP address assigned that is used for communication to NFS. These addresses
do NOT need to be maintained in /etc/hosts to allow instance-to-instance traffic within the tenant.
For deployments of HANA System Replication or HANA scale-out, a blade configuration with only two IP addresses
assigned is not suitable. If you have only two IP addresses assigned and want to deploy such a configuration,
contact SAP HANA on Azure Service Management to get a third IP address in a third VLAN assigned. For HANA
Large Instance units with three IP addresses assigned on three NIC ports, the following usage rules apply:
eth0.xx should have an IP address assigned that is out of the Server IP Pool address range that you submitted to
Microsoft. Hence, this IP address should not be maintained in /etc/hosts of the OS.
eth1.xx should have an IP address assigned that is used for communication to NFS storage. Hence, these
addresses should not be maintained in /etc/hosts.
eth2.xx should be used exclusively, and maintained in /etc/hosts, for communication between the different
instances. These are also the IP addresses that need to be maintained in scale-out HANA
configurations as the addresses HANA uses for the inter-node configuration.
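As an illustration, on a three-node scale-out unit only the eth2.xx addresses would then appear in /etc/hosts. All host names and IP addresses below are made up for this sketch, not values assigned by Microsoft:

```
# Hypothetical /etc/hosts entries for inter-node communication (eth2.xx addresses)
10.23.1.101   hana-node1
10.23.1.102   hana-node2
10.23.1.103   hana-node3
```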
Storage
The storage layout for SAP HANA on Azure (Large Instances) is configured by SAP HANA on Azure Service
Management following SAP recommended guidelines, as documented in the SAP HANA Storage Requirements white
paper. The rough sizes of the different volumes for the different HANA Large Instance SKUs are documented in
SAP HANA (large Instance) Overview and Architecture on Azure.
The naming conventions of the storage volumes are listed in the following table:
The output of the command df -h on a S72m HANA Large Instance unit would look like:
The storage controllers and nodes in the Large Instance stamps are synchronized to NTP servers. If you
synchronize the SAP HANA on Azure (Large Instances) units and Azure VMs against an NTP server as well, there should
be no significant time drift between the infrastructure and the compute units in Azure or in the Large Instance
stamps.
To optimize SAP HANA for the storage used underneath, set the following SAP HANA
configuration parameters:
max_parallel_io_requests 128
async_read_submit on
async_write_submit_active on
async_write_submit_blocks all
For SAP HANA 1.0 versions up to SPS12, these parameters can be set during the installation of the SAP HANA
database, as described in SAP Note #2267798 - Configuration of the SAP HANA Database
You also can configure the parameters after the SAP HANA database installation by using the hdbparam
framework.
With SAP HANA 2.0, the hdbparam framework has been deprecated. As a result the parameters must be set using
SQL commands. For details, see SAP Note #2399079: Elimination of hdbparam in HANA 2.
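For HANA 2.0, such a change could be sketched with hdbsql as below. The exact, authoritative statements are in SAP Note #2399079; the assumption here is that the parameters live in the fileio section of global.ini, and the instance number, user, and password are placeholders:

```shell
# Sketch only - consult SAP Note #2399079 for the authoritative statements.
# Assumes the parameters belong to the 'fileio' section of global.ini.
hdbsql -i 01 -u SYSTEM -p <password> \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM')
   SET ('fileio','max_parallel_io_requests')   = '128',
       ('fileio','async_read_submit')          = 'on',
       ('fileio','async_write_submit_active')  = 'on',
       ('fileio','async_write_submit_blocks')  = 'all'
   WITH RECONFIGURE"
```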
Operating system
The swap space of the delivered OS image is set to 2 GB according to SAP Support Note #1999997 - FAQ: SAP
HANA Memory. If you want a different setting, you need to set it yourself.
SUSE Linux Enterprise Server 12 SP1 for SAP Applications is the distribution of Linux installed for SAP HANA on
Azure (Large Instances). This particular distribution provides SAP-specific capabilities "out of the box" (including
pre-set parameters for running SAP on SLES effectively).
See Resource Library/White Papers on the SUSE website and SAP on SUSE on the SAP Community Network (SCN)
for several useful resources related to deploying SAP HANA on SLES (including the set-up of High Availability,
security hardening specific to SAP operations, and more).
Additional and useful SAP on SUSE-related links:
SAP HANA on SUSE Linux Site
Best Practice for SAP: Enqueue Replication SAP NetWeaver on SUSE Linux Enterprise 12.
ClamSAP SLES Virus Protection for SAP (including SLES 12 for SAP Applications).
SAP Support Notes applicable to implementing SAP HANA on SLES 12:
SAP Support Note #1944799 SAP HANA Guidelines for SLES Operating System Installation.
SAP Support Note #2205917 SAP HANA DB Recommended OS Settings for SLES 12 for SAP Applications.
SAP Support Note #1984787 SUSE Linux Enterprise Server 12: Installation Notes.
SAP Support Note #171356 SAP Software on Linux: General Information.
SAP Support Note #1391070 Linux UUID Solutions.
Red Hat Enterprise Linux for SAP HANA is another offer for running SAP HANA on HANA Large Instances. Releases
of RHEL 6.7 and 7.2 are available.
Additional and useful SAP on Red Hat related links:
SAP HANA on Red Hat Linux Site.
SAP Support Notes applicable to implementing SAP HANA on Red Hat:
SAP Support Note #2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL) Operating System.
SAP Support Note #2292690 - SAP HANA DB: Recommended OS settings for RHEL 7.
SAP Support Note #2247020 - SAP HANA DB: Recommended OS settings for RHEL 6.7.
SAP Support Note #1391070 Linux UUID Solutions.
SAP Support Note #2228351 - Linux: SAP HANA Database SPS 11 revision 110 (or higher) on RHEL 6 or SLES
11.
SAP Support Note #2397039 - FAQ: SAP on RHEL.
SAP Support Note #1496410 - Red Hat Enterprise Linux 6.x: Installation and Upgrade.
SAP Support Note #2002167 - Red Hat Enterprise Linux 7.x: Installation and Upgrade.
Time synchronization
SAP applications built on the SAP NetWeaver architecture are sensitive to time differences between the various
components that comprise the SAP system. SAP ABAP short dumps with the error title
ZDATE_LARGE_TIME_DIFF are likely familiar; these short dumps appear when the system time of different
servers or VMs drifts too far apart.
For SAP HANA on Azure (Large Instances), time synchronization done in Azure doesn't apply to the compute units
in the Large Instance stamps. This synchronization is not applicable for running SAP applications in native Azure
VMs, as Azure ensures a system's time is properly synchronized. As a result, a separate time server must be set up
that can be used by SAP application servers running on Azure VMs and the SAP HANA database instances running
on HANA Large Instances. The storage infrastructure in Large Instance stamps is time synchronized with NTP
servers.
Log in to the HANA Large Instance unit(s), maintain /etc/hosts and check whether you can reach the Azure VM that
is supposed to run the SMT server over the network.
After this check is done successfully, log in to the Azure VM that should run the SMT server. If you are
using PuTTY to log in to the VM, execute this sequence of commands in your bash window:
cd ~
echo "export NCURSES_NO_UTF8_ACS=1" >> .bashrc
After executing these commands, restart your bash to activate the settings. Then start YAST.
In YAST, go to Software Maintenance and search for smt. Select smt, which switches automatically to yast2-smt as
shown below
Accept the selection for installation on the smtserver. Once installed, go to the SMT server configuration and enter
the organizational credentials from the SUSE Customer Center you retrieved earlier. Also enter your Azure VM
hostname as the SMT Server URL. In this demonstration, it was https://smtserver as displayed in the next graphics.
As next step, you need to test whether the connection to the SUSE Customer Center works. As you see in the
following graphics, in the demonstration case, it did work.
Once the SMT setup starts, you need to provide a database password. Since it is a new installation, you need to
define that password as shown in the next graphics.
The next interaction you have is when a certificate gets created. Go through the dialog as shown next and the step
should proceed.
The 'Run synchronization check' step at the end of the configuration can take several minutes.
After the installation and configuration of the SMT server, you should find the directory repo under the mount
point /srv/www/htdocs/ plus some sub-directories under repo.
Restart the SMT server and its related services with these commands.
rcsmt restart
systemctl restart smt.service
systemctl restart apache2
Once you are finished with the package selection, you need to start the initial copy of the selected packages to the
SMT server you set up. This copy is triggered in the shell using the command smt-mirror.
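The invocation itself (the original screenshot is not reproduced in this copy) is likely similar to this sketch; the log-file path is an assumption:

```shell
# Mirror the selected repositories to the SMT server; write progress to a log file.
smt-mirror -L /var/log/smt/smt-mirror.log
```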
The packages get copied into the directories created under the mount point
/srv/www/htdocs. This process can take a while; depending on how many packages you selected, it could take an
hour or more. When this process finishes, you need to move on to the SMT client setup.
Set up the SMT client on HANA Large Instance units
The client(s) in this case are the HANA Large Instance units. The SMT server setup copied the script
clientSetup4SMT.sh into the Azure VM. Copy that script over to the HANA Large Instance unit you want to connect
to your SMT server. Start the script with the -h option and pass the name of your SMT server as the parameter, in this
example smtserver.
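The invocation described above can be sketched as follows, assuming the script has already been copied to the current directory on the HANA Large Instance unit:

```shell
# Make the copied script executable, then register this unit against the SMT server.
# 'smtserver' is the plain VM name of the SMT server, as in the example above.
chmod +x ./clientSetup4SMT.sh
./clientSetup4SMT.sh -h smtserver
```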
There might be a scenario where the load of the certificate from the server by the client succeeded, but the
registration failed as shown below.
If the registration failed, read this SUSE support document and execute the steps described there.
IMPORTANT
As server name you need to provide the name of the VM, in this case smtserver, without the fully qualified domain name.
Just the VM name works.
After these steps have been executed, you need to execute the following command on the HANA Large Instance
unit
SUSEConnect cleanup
NOTE
In our tests, we always had to wait a few minutes after that step. An immediate execution of clientSetup4SMT.sh after the
corrective measures described in the SUSE article ended with messages that the certificate was not yet valid. Waiting
5-10 minutes and then executing clientSetup4SMT.sh usually ended in a successful client configuration.
If you ran into the issue you needed to fix based on the steps in the SUSE article, restart
clientSetup4SMT.sh on the HANA Large Instance unit. Now it should finish successfully.
With this step, you have configured the SMT client of the HANA Large Instance unit to connect against the SMT server
you installed in the Azure VM. You can now use 'zypper up' or 'zypper in' to install OS patches on HANA Large
Instances or to install additional packages. Keep in mind that you can only get patches that you previously downloaded
to the SMT server.
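For example, patching the OS or adding a package then reduces to the usual zypper calls against the mirrored SMT repositories (the package name is a placeholder):

```shell
# Refresh repository metadata from the SMT server, then patch or install.
zypper refresh
zypper up            # install the OS patches available on the SMT server
zypper in <package>  # install an additional package (placeholder name)
```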
In the demonstration case, we downloaded the SAP HANA 2.0 installation packages. On the Azure jump box VM,
expand the self-extracting archives. After the archives are extracted, copy the directory created by the extraction,
in this case 51052030, to the /hana/shared volume on the HANA Large Instance unit, into a directory you created.
IMPORTANT
Do not copy the installation packages onto the root or boot LUN, because space there is limited and needed by other
processes as well.
In the further steps, we demonstrate the SAP HANA setup with the graphical user interface. As the next step, go into
the installation directory and navigate into the subdirectory HDB_LCM_LINUX_X86_64. Start
./hdblcmgui
out of that directory. You are now guided through a sequence of screens where you need to provide the
data for the installation. In the case demonstrated, we are installing the SAP HANA database server and the SAP
HANA client components. Therefore, our selection is 'SAP HANA Database'.
In the next screen, you choose the option 'Install New System'
After this step, you can select several additional components to install in addition to the
SAP HANA database server.
For the purpose of this documentation, we chose the SAP HANA Client and the SAP HANA Studio. We also
installed a scale-up instance; hence, in the next screen, you need to choose 'Single-Host System'.
In the next screen, you need to provide some data
IMPORTANT
As HANA System ID (SID), you need to provide the same SID as you provided to Microsoft when you ordered the HANA Large
Instance deployment. Choosing a different SID makes the installation fail due to access permission problems on the different
volumes.
Use the /hana/shared directory as the installation directory. In the next step, you need to provide the locations for
the HANA data files and the HANA log files.
NOTE
For the data and log files, you should define the volumes that already came with mount points containing the SID you chose
in the previous screen. If the SID does not match the one you typed in the screen before, go back
and adjust the SID to the value you have on the mount points.
In the next step, review the host name and correct it if necessary.
In the next step, you again need data that you gave to Microsoft when you ordered the HANA Large Instance
deployment.
IMPORTANT
You need to provide the same System User ID and ID of User Group as you provided to Microsoft when you ordered the unit
deployment. If you do not provide the very same IDs, the installation of SAP HANA on the HANA Large Instance unit fails.
In the next two screens, which we are not showing in this documentation, you need to provide the password for
the SYSTEM user of the SAP HANA database and the password for the sapadm user, which is used for the SAP
Host Agent that gets installed as part of the SAP HANA database instance.
After you define the passwords, a confirmation screen shows up. Check all the data listed and continue with the
installation. You then reach a progress screen that documents the installation progress.
When the installation finishes, you should see a screen like the following one.
At this point, the SAP HANA instance should be up and running and ready for usage. You should be able to connect
to it from SAP HANA Studio. Also make sure that you check for the latest patches of SAP HANA and apply those
patches.
SAP HANA Large Instances high availability and
disaster recovery on Azure
10/3/2017
High availability and disaster recovery (DR) are important aspects of running your mission-critical SAP HANA on
Azure (Large Instances) server. It's important to work with SAP, your system integrator, or Microsoft to properly
architect and implement the right high-availability and disaster-recovery strategy. It is also important to consider
the recovery point objective (RPO) and the recovery time objective, which are specific to your environment.
Microsoft supports some SAP HANA high-availability capabilities with HANA Large Instances. These capabilities
include:
Storage replication: The storage system's ability to replicate all data to another HANA Large Instance stamp
in another Azure region. SAP HANA operates independently of this method.
HANA system replication: The replication of all data in SAP HANA to a separate SAP HANA system. The
recovery time objective is minimized through data replication at regular intervals. SAP HANA supports
asynchronous, synchronous in-memory, and synchronous modes. Synchronous mode is recommended only
for SAP HANA systems that are within the same datacenter or less than 100 km apart. In the current design of
HANA large-instance stamps, HANA system replication can be used for high availability only. Currently, HANA
system replication requires a third-party reverse-proxy component for disaster-recovery configurations into
another Azure region.
Host auto-failover: A local fault-recovery solution for SAP HANA to use as an alternative to HANA system
replication. If the master node becomes unavailable, you configure one or more standby SAP HANA nodes in
scale-out mode, and SAP HANA automatically fails over to a standby node.
SAP HANA on Azure (Large Instances) is offered in two Azure regions that cover three different geopolitical
regions (US, Australia, and Europe). Two different regions that host HANA Large Instance stamps are connected to
separate dedicated network circuits that are used for replicating storage snapshots to provide disaster-recovery
methods. The replication is not established by default. It is set up for customers that ordered disaster-recovery
functionality. Storage replication is dependent on the usage of storage snapshots for HANA Large Instances. It is
not possible to choose an Azure region as a DR region that is in a different geopolitical area.
The following table shows the currently supported high-availability and disaster-recovery methods and
combinations:
Scenario supported in HANA Large Instances: Host auto-failover (N+m, including 1+1)
High-availability option: Possible with the standby taking the active role. HANA controls the role switch.
Disaster-recovery option: Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication.
Comments: HANA volume sets are attached to all the nodes (n+m). The DR site must have the same number of nodes.

Scenario supported in HANA Large Instances: HANA system replication
High-availability option: Possible with a primary/secondary setup. The secondary moves to the primary role in a failover case. HANA system replication and the OS control the failover.
Disaster-recovery option: Dedicated DR setup. Multipurpose DR setup. DR synchronization by using storage replication. DR by using HANA system replication is not yet possible without third-party components.
Comments: A separate set of disk volumes is attached to each node. Only the disk volumes of the secondary replica in the production site get replicated to the DR location. One set of volumes is required at the DR site.
A dedicated DR setup is one where the HANA Large Instance unit in the DR site is not used for running any other
workload or non-production system. The unit is passive and is deployed only if a disaster failover is executed.
However, this setup is not the preferred choice for many customers.
A multipurpose DR setup is one where the HANA Large Instance unit on the DR site runs a non-production workload.
In case of disaster, you shut down the non-production system, mount the storage-replicated (additional)
volume sets, and then start the production HANA instance. Most customers who use the HANA Large Instance
disaster-recovery functionality use this configuration.
You can find more information on SAP HANA high availability in the following SAP articles:
SAP HANA High Availability Whitepaper
SAP HANA Administration Guide
SAP Academy Video on SAP HANA System Replication
SAP Support Note #1999880 FAQ on SAP HANA System Replication
SAP Support Note #2165547 SAP HANA Back up and Restore within SAP HANA System Replication
Environment
SAP Support Note #1984882 Using SAP HANA System Replication for Hardware Exchange with
Minimum/Zero Downtime
NOTE
The snapshot technology that is used by the underlying infrastructure of HANA Large Instances has a dependency on SAP
HANA snapshots. At this point, SAP HANA snapshots do not work in conjunction with multiple tenants of SAP HANA
multitenant database containers. Thus, this method of backup cannot be used when you deploy multiple tenants in SAP
HANA multitenant database containers. If only one tenant is deployed, SAP HANA snapshots do work.
NOTE
Storage snapshots consume storage space that has been allocated to the HANA Large Instance units. Therefore,
consider the following aspects when scheduling storage snapshots and deciding how many storage snapshots to keep.
The specific mechanics of storage snapshots for SAP HANA on Azure (Large Instances) include:
A specific storage snapshot (at the point in time when it is taken) consumes little storage.
As data content changes and the content in SAP HANA data files change on the storage volume, the snapshot
needs to store the original block content, as well as the data changes.
As a result, the storage snapshot increases in size. The longer the snapshot exists, the larger the storage
snapshot becomes.
The more changes that are made to the SAP HANA database volume over the lifetime of a storage snapshot,
the larger the space consumption of the storage snapshot.
SAP HANA on Azure (Large Instances) comes with fixed volume sizes for the SAP HANA data and log volumes.
Performing snapshots of those volumes eats into your volume space. You need to determine when to schedule
storage snapshots. You also need to monitor the space consumption of the storage volumes, as well as manage
the number of snapshots that you store. You can disable the storage snapshots when you either import masses of
data or perform other significant changes to the HANA database.
The following sections provide information for performing these snapshots, including general recommendations:
Though the hardware can sustain 255 snapshots per volume, we highly recommend that you stay well below
this number.
Before you perform storage snapshots, monitor and keep track of free space.
Lower the number of storage snapshots based on free space. You can lower the number of snapshots that you
keep, or you can extend the volumes. You can order additional storage in 1-terabyte units.
During activities such as moving data into SAP HANA with SAP platform migration tools (R3load) or restoring
SAP HANA databases from backups, disable storage snapshots on the /hana/data volume.
During larger reorganizations of SAP HANA tables, storage snapshots should be avoided, if possible.
Storage snapshots are a prerequisite to taking advantage of the disaster-recovery capabilities of SAP HANA on
Azure (Large Instances).
Setting up storage snapshots
The steps to set up storage snapshots with HANA Large Instances are as follows:
1. Make sure that Perl is installed on the Linux operating system on the HANA Large Instances server.
2. Modify the /etc/ssh/ssh_config to add the line MACs hmac-sha1.
3. Create an SAP HANA backup user account on the master node for each SAP HANA instance you are running, if
applicable.
4. Install the SAP HANA HDB client on all the SAP HANA Large Instances servers.
5. On the first SAP HANA Large Instances server of each region, create a public key to access the underlying
storage infrastructure that controls snapshot creation.
6. Copy the scripts and configuration file from GitHub to the location of hdbsql in the SAP HANA installation.
7. Modify the HANABackupDetails.txt file as necessary for the appropriate customer specifications.
Step 1: Install the SAP HANA HDB client
The Linux operating system installed on SAP HANA on Azure (Large Instances) includes the folders and scripts
necessary to execute SAP HANA storage snapshots for backup and disaster-recovery purposes. Check for more
recent releases in GitHub. The most recent release version of the scripts is 2.1. However, it is your responsibility to
install the SAP HANA HDB client on the HANA Large Instance units while you are installing SAP HANA. (Microsoft
does not install the HDB client or SAP HANA.)
Step 2: Change the /etc/ssh/ssh_config
Change /etc/ssh/ssh_config by adding the MACs hmac-sha1 line as shown here:
# RhostsRSAAuthentication no
# RSAAuthentication yes
# PasswordAuthentication yes
# HostbasedAuthentication no
# GSSAPIAuthentication no
# GSSAPIDelegateCredentials no
# GSSAPIKeyExchange no
# GSSAPITrustDNS no
# BatchMode no
# CheckHostIP yes
# AddressFamily any
# ConnectTimeout 0
# StrictHostKeyChecking ask
# IdentityFile ~/.ssh/identity
# IdentityFile ~/.ssh/id_rsa
# IdentityFile ~/.ssh/id_dsa
# Port 22
Protocol 2
# Cipher 3des
# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
# MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160
MACs hmac-sha1
# EscapeChar ~
# Tunnel no
# TunnelDevice any:any
# PermitLocalCommand no
# VisualHostKey no
# ProxyCommand ssh -q -W %h:%p gateway.example.com
The new location is /root/.ssh/id_dsa.pub. Do not enter an actual password, or else you are required to enter
the password each time you sign in. Instead, select Enter twice to remove the "enter password" requirement for
signing in.
Check to make sure that the public key was created as expected by changing folders to /root/.ssh/ and then
executing the ls command. If the key is present, you can copy it by running the following command:
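The commands themselves are not reproduced in this copy of the document; they are likely similar to this sketch, assuming the DSA key pair from the key-generation step sits in /root/.ssh:

```shell
# List the key files, then print the public key so it can be copied
# and handed to SAP HANA on Azure Service Management.
cd /root/.ssh/
ls
cat id_dsa.pub
```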
At this point, contact SAP HANA on Azure Service Management and provide them with the public key. The service
representative uses the public key to register it in the underlying storage infrastructure that is carved out for your
HANA Large Instance tenant.
Step 4: Create an SAP HANA user account
To initiate the creation of SAP HANA snapshots, you need to create a user account in SAP HANA that the storage
snapshot scripts can use. Create an SAP HANA user account within SAP HANA Studio for this purpose. This
account must have the following privileges: Backup Admin and Catalog Read. In this example, the username is
SCADMIN. The user account name created in HANA Studio is case-sensitive. Make sure to select No for requiring
the user to change the password on the next sign-in.
IMPORTANT
Run the following command as root. Otherwise, the script cannot work properly.
In the following example, the user is SCADMIN01, the hostname is lhanad01, and the instance number is 01:
If you have an SAP HANA scale-out configuration, you should manage all scripting from a single server. In this
example, the SAP HANA key SCADMIN01 must be altered for each host in a way that reflects which host is
related to the key. Amend the SAP HANA backup account with the instance number of the HANA DB. The key
must have administrative privileges on the host it is assigned to, and the backup user for scale-out configurations
must have access rights to all the SAP HANA instances. Assuming the three scale-out nodes have the names
lhanad01, lhanad02, and lhanad03, the sequence of commands looks like this:
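The command sequence itself is not included in this copy of the document. With SAP's hdbuserstore tool it would be along these lines; the password and the instance-dependent SQL port (3<instance>15, so 30115 for instance 01) are placeholders, and the exact port depends on your HANA release and setup:

```shell
# Sketch: one hdbuserstore key per scale-out host, all pointing at
# the SCADMIN backup user. <password> and the port are placeholders.
hdbuserstore set SCADMIN01 lhanad01:30115 SCADMIN <password>
hdbuserstore set SCADMIN02 lhanad02:30115 SCADMIN <password>
hdbuserstore set SCADMIN03 lhanad03:30115 SCADMIN <password>
```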
Step 6: Get the snapshot scripts, configure the snapshots, and test the configuration and connectivity
Download the most recent version of the scripts from GitHub. Copy the downloaded scripts and the text file to the
working directory for hdbsql. For current HANA installations, this directory is similar to
/hana/shared/D01/exe/linuxx86_64/hdb.
azure_hana_backup.pl
azure_hana_replication_status.pl
azure_hana_snapshot_details.pl
azure_hana_snapshot_delete.pl
testHANAConnection.pl
testStorageSnapshotConnection.pl
removeTestStorageSnapshot.pl
HANABackupCustomerDetails.txt
NOTE
Currently, only Node 1 details are used in the actual HANA storage snapshot script. We recommend that you test access to
or from all HANA nodes so that, if the master backup node ever changes, you have already ensured that any other node
can take its place by modifying the details in Node 1.
After you put all the configuration data into the HANABackupCustomerDetails.txt file, you need to check whether
the configurations are correct regarding the HANA instance data. Use the script testHANAConnection.pl . This script
is independent of an SAP HANA scale-up or scale-out configuration.
testHANAConnection.pl
If you have an SAP HANA scale-out configuration, ensure that the master HANA instance has access to all the
required HANA servers and instances. There are no parameters to the test script, but you must add your data into
the HANABackupCustomerDetails.txt configuration file for the script to run properly. Only the shell command
error codes are returned, so it is not possible for the script to error check every instance. Even so, the script does
provide some helpful comments for you to double-check.
To run the script, enter the following command:
./testHANAConnection.pl
If the script successfully obtains the status of the HANA instance, it displays a message that the HANA connection
was successful.
The next test step is to check the connectivity to the storage based on the data you put into the
HANABackupCustomerDetails.txt configuration file, and then execute a test snapshot. Before you execute the
azure_hana_backup.pl script, you must execute this test. If a volume contains no snapshots, it is impossible to
determine whether the volume is empty or if there is an SSH failure to obtain the snapshot details. For this reason,
the script executes two steps:
It verifies that the tenant's storage virtual machine and interfaces are accessible for the scripts to execute
snapshots.
It creates a test, or dummy, snapshot for each volume by HANA instance.
For this reason, the HANA instance is included as an argument. If the execution fails, it is not possible to provide
error checking for the storage connection. Even if there is no error checking, the script provides helpful hints.
The script is run as:
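The command itself is not reproduced in this copy; given the script list above and the note that the HANA instance is passed as an argument, the call is likely:

```shell
# Verify storage connectivity and create a dummy snapshot for each volume
# of the given HANA instance (<HANA SID> is a placeholder).
./testStorageSnapshotConnection.pl <HANA SID>
```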
Next, the script tries to sign in to the storage by using the public key provided in the previous setup steps and with
the data configured in the HANABackupCustomerDetails.txt file. If sign-in is successful, the following content is
shown:
If problems occur connecting to the storage console, the output looks like this:
If the test snapshot has been executed successfully with the script, you can proceed with configuring the actual
storage snapshots. If it is not successful, investigate the problems before going ahead. The test snapshot should
stay around until the first real snapshots are done.
Step 7: Perform snapshots
Once all the preparation steps are finished, you can start to configure the actual storage snapshot configuration. The
script to be scheduled works with both SAP HANA scale-up and scale-out configurations. You should schedule the
execution of the scripts via cron.
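Scheduling via cron could be sketched like this, using the azure_hana_backup.pl syntax shown further down. The SID, labels, retention counts, and frequencies are illustrative placeholders; the script path follows the hdbsql working-directory example earlier in this guide:

```shell
# Illustrative crontab entries - SID, labels, counts, and path are examples.
# Hourly combined HANA/shared snapshot, keeping the 30 most recent:
0 * * * * cd /hana/shared/D01/exe/linuxx86_64/hdb && ./azure_hana_backup.pl hana H11 hourlyhana 30
# /hana/logbackups snapshot every 10 minutes (do not go below every 5 minutes):
*/10 * * * * cd /hana/shared/D01/exe/linuxx86_64/hdb && ./azure_hana_backup.pl logs H11 logback 28
```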
Three types of snapshot backups can be created:
HANA: Combined snapshot backup in which the volumes that contain /hana/data and /hana/shared (which
contains /usr/sap as well) are covered by the coordinated snapshot. A single file restore is possible from this
snapshot.
Logs: Snapshot backup of the /hana/logbackups volume. No HANA snapshot is triggered to execute this
storage snapshot. This storage volume is the volume meant to contain the SAP HANA transaction-log backups.
SAP HANA transaction-log backups are performed more frequently to restrict log growth and prevent
potential data loss. A single file restore is possible from this snapshot. You should not lower the frequency to
under five minutes.
Boot: Snapshot of the volume that contains the boot logical unit number (LUN) of the HANA Large Instance.
This snapshot backup is possible only with the Type I SKUs of HANA Large Instances. You can't perform single
file restores from the snapshot of the volume that contains the boot LUN.
The call syntax for these three different types of snapshots looks like this:
HANA backup covering /hana/data and /hana/shared (includes /usr/sap)
./azure_hana_backup.pl hana <HANA SID> manual 30
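Only the hana invocation is shown above. Based on that example and the parameter descriptions in this section, the logs and boot invocations are assumed to follow the same pattern (snapshot type, SID where applicable, snapshot label, retention count); treat the exact arguments as an assumption:

```shell
# Assumed call pattern for the other two snapshot types
# (boot snapshots are tied to the unit, so no <HANA SID> argument is assumed)
./azure_hana_backup.pl logs <HANA SID> manual 30
./azure_hana_backup.pl boot manual 30
```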
NOTE
As soon as you change the label, the counting starts again. This means you need to be strict in labeling so your snapshots
are not accidentally deleted.
Snapshot strategies
The frequency of snapshots for the different types depends on whether you use the HANA Large Instance
disaster-recovery functionality or not. The disaster-recovery functionality of HANA Large Instances relies on
storage snapshots. Relying on storage snapshots might require some special recommendations in terms of the
frequency and execution periods of the storage snapshots.
In the considerations and recommendations that follow, we assume that you do not use the disaster-recovery
functionality HANA Large Instances offers. Instead, you use the storage snapshots as a way to have backups and
be able to provide point-in-time recovery for the last 30 days. Given the limitations of the number of snapshots
and space, customers have considered the following requirements:
The recovery time for point-in-time recovery.
The space used.
The recovery point objective and the recovery time objective for potential disaster recovery.
The occasional execution of HANA full-database backups to disk. Whenever a full-database backup to disk
or through the backint interface is performed, the execution of the storage snapshots fails. If you plan to execute
full-database backups on top of storage snapshots, make sure that the execution of the storage snapshots is
disabled during this time.
The number of snapshots per volume is limited to 255.
For customers who don't use the disaster-recovery functionality of HANA Large Instances, the snapshot period is
less frequent. In such cases, we see customers performing the combined snapshots on /hana/data and
/hana/shared (includes /usr/sap) in 12-hour or 24-hour periods, and they keep the snapshots to cover a whole
month. The same is true with the snapshots of the log backup volume. However, the execution of SAP HANA
transaction-log backups against the log backup volume occurs in 5-minute to 15-minute periods.
We encourage you to perform scheduled storage snapshots by using cron. We also recommend that you use the
same script for all backups and disaster-recovery needs. You need to modify the script inputs to match the various
requested backup times. These snapshots are all scheduled differently in cron depending on their execution time:
hourly, 12-hour, daily, or weekly.
An example of a cron schedule in /etc/crontab might look like this:
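The paragraph that follows describes such a crontab. A hedged sketch consistent with that description (hourly hana snapshots kept for two days = 48, a daily hana snapshot kept for four weeks = 28, a daily snapshot of the log-backup volume kept for four weeks, staggered start minutes; the SID HM3, labels, and minute offsets are illustrative assumptions) might look like this:

```shell
# Hypothetical /etc/crontab sketch; labels and minute offsets are assumptions
10 0-23 * * * ./azure_hana_backup.pl hana HM3 hourlyhana 48
40 13 * * * ./azure_hana_backup.pl hana HM3 dailyhana 28
0,5,10,15,20,25,30,35,40,45,50,55 * * * * <SAP HANA transaction-log backup>
20 17 * * * ./azure_hana_backup.pl log HM3 dailylogback 28
```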
In the previous example, there is an hourly combined snapshot that covers the volumes that contain the
/hana/data and /hana/shared (includes /usr/sap) locations. This type of snapshot would be used for a faster point-
in-time recovery within the past two days. Additionally, there is a daily snapshot on those volumes. So, you have
two days of coverage by hourly snapshots, plus four weeks of coverage by daily snapshots. Additionally, the
transaction-log backup volume is backed up once every day. These backups are kept for four weeks as well. As
you see in the third line of crontab, the backup of the HANA transaction log is scheduled to execute every five
minutes. The start minutes of the different cron jobs that execute storage snapshots are staggered, so that those
snapshots are not executed all at once at a certain point in time.
In the following example, you perform a combined snapshot that covers the volumes that contain the /hana/data
and /hana/shared (including /usr/sap) locations on an hourly basis. You keep these snapshots for two days. The
snapshots of the transaction-log backup volumes are executed on a five-minute basis and are kept for four hours.
As before, the backup of the HANA transaction log file is scheduled to execute every five minutes. The snapshot of
the transaction-log backup volume is performed with a two-minute delay after the transaction-log backup has
started. Within those two minutes, the SAP HANA transaction-log backup should finish under normal
circumstances. As before, the volume that contains the boot LUN is backed up once per day by a storage snapshot
and is kept for four weeks.
10 0-23 * * * ./azure_hana_backup.pl hana HM3 hourlyhana 48
0,5,10,15,20,25,30,35,40,45,50,55 * * * * Perform SAP HANA transaction log backup
2,7,12,17,22,27,32,37,42,47,52,57 * * * * ./azure_hana_backup.pl log HM3 logback 48
30 00 * * * ./azure_hana_backup.pl boot dailyboot 28
The following graphic illustrates the sequences of the previous example, excluding the boot LUN:
SAP HANA performs regular writes against the /hana/log volume to document the committed changes to the
database. On a regular basis, SAP HANA writes a savepoint to the /hana/data volume. As specified in crontab, an
SAP HANA transaction-log backup is executed every five minutes. You also see that an SAP HANA snapshot is
executed every hour as a result of triggering a combined storage snapshot over the /hana/data and /hana/shared
volumes. After the HANA snapshot succeeds, the combined storage snapshot is executed. As instructed in crontab,
the storage snapshot on the /hana/logbackup volume is executed every five minutes, around two minutes after
the HANA transaction-log backup.
IMPORTANT
The use of storage snapshots for SAP HANA backups is valuable only when the snapshots are performed in conjunction
with SAP HANA transaction-log backups. These transaction-log backups need to be able to cover the time periods between
the storage snapshots.
You can choose backups that are more frequent than every 15 minutes. This is frequently done in conjunction
with disaster recovery. Some customers perform transaction-log backups every five minutes.
If the database has never been backed up, the final step is to perform a file-based database backup to create a
single backup entry that must exist within the backup catalog. Otherwise, SAP HANA cannot initiate your specified
log backups.
After your first successful storage snapshots have been executed, you can also delete the test snapshot that was
executed in step 6. To do so, run the script removeTestStorageSnapshot.pl:
Use these commands to make sure that the snapshots that are taken and stored are not consuming all the storage
on the volumes.
NOTE
The snapshots of the boot LUN are not visible with the previous commands.
./azure_hana_snapshot_details.pl
Because the script tries to retrieve the HANA backup ID, it needs to connect to the SAP HANA instance. This
connection requires the configuration file HANABackupCustomerDetails.txt to be correctly set. An output of two
snapshots on a volume might look like this:
**********************************************************
****Volume: hana_shared_SAPTSTHDB100_t020_vol ***********
**********************************************************
Total Snapshot Size: 411.8MB
----------------------------------------------------------
Snapshot: customer.2016-09-20_1404.0
Create Time: "Tue Sep 20 18:08:35 2016"
Size: 2.10MB
Frequency: customer
HANA Backup ID:
----------------------------------------------------------
Snapshot: customer2.2016-09-20_1532.0
Create Time: "Tue Sep 20 19:36:21 2016"
Size: 2.37MB
Frequency: customer2
HANA Backup ID:
In the previous example, the snapshot label is customer and the number of snapshots with this label to be
retained is 30. As you respond to disk space consumption, you might want to reduce the number of stored
snapshots. The easy way to reduce the number of snapshots to 15, for example, is to run the script with the last
parameter set to 15:
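Based on the call syntax shown earlier and the customer label from the example output above, the invocation might look like the following sketch (the exact arguments are an assumption):

```shell
# Assumed invocation: same snapshot type and label, retention lowered from 30 to 15
./azure_hana_backup.pl hana <HANA SID> customer 15
```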
If you run the script with this setting, the number of snapshots, including the new storage snapshot, is 15. The 15
most recent snapshots are kept, whereas the 15 older snapshots are deleted.
NOTE
This script reduces the number of snapshots only if there are snapshots that are more than one hour old. The script does
not delete snapshots that are less than one hour old. These restrictions are related to the optional disaster-recovery
functionality offered.
If you no longer want to maintain a set of snapshots with a specific backup label (hanadaily in the syntax
examples), you can execute the script with 0 as the retention number. All snapshots matching that label are then
removed. However, removing all snapshots can affect the capabilities of disaster recovery.
A second possibility to delete specific snapshots is to use the script azure_hana_snapshot_delete.pl . This script is
designed to delete a snapshot or set of snapshots either by using the HANA backup ID as found in HANA Studio
or through the snapshot name itself. Currently, the backup ID is only tied to the snapshots created for the hana
snapshot type. Snapshot backups of the type logs and boot do not perform an SAP HANA snapshot. Therefore,
there is no backup ID to be found for those snapshots. If the snapshot name is entered, it looks for all snapshots
on the different volumes that match the entered snapshot name. The call syntax of the script is:
./azure_hana_snapshot_delete.pl
IMPORTANT
If data exists only on the snapshot that you are deleting, that data is lost forever after you execute the deletion.
Recovering to the most recent HANA snapshot
If you experience a production-down scenario, the process of recovering from a storage snapshot can be initiated
as a customer incident with Microsoft Azure Support. It is a high-urgency matter if data was deleted in a
production system and the only way to retrieve the data is to restore the production database.
In a different situation, a point-in-time recovery might be low urgency and planned days in advance. You can plan
this recovery with SAP HANA on Azure Service Management instead of raising a high-priority problem. For
example, you might be planning to upgrade the SAP software by applying a new enhancement package. You then
need to revert to a snapshot that represents the state before the enhancement package upgrade.
Before you send the request, you need to prepare. The SAP HANA on Azure Service Management team can then
handle the request and provide the restored volumes. Afterward, you restore the HANA database based on the
snapshots. Here is how to prepare for the request:
NOTE
Your user interface might vary from the following screenshots, depending on the SAP HANA release that you are using.
1. Decide which snapshot to restore. Only the /hana/data volume is restored unless you instruct otherwise.
2. Shut down the HANA instance.
3. Unmount the data volumes on each HANA database node. If the data volumes are still mounted to the
operating system, the restoration of the snapshot fails.
4. Open an Azure support request to instruct them about the restoration of a specific snapshot.
During the restoration: SAP HANA on Azure Service Management might ask you to attend a
conference call to ensure coordination, verification, and confirmation that the correct storage
snapshot is restored.
After the restoration: SAP HANA on Azure Service Management notifies you when the storage
snapshot has been restored.
5. After the restoration process is complete, remount all the data volumes.
6. Select the recovery options within SAP HANA Studio, if they do not automatically come up when you
reconnect to HANA DB through SAP HANA Studio. The following example shows a restoration to the last
HANA snapshot. A storage snapshot embeds one HANA snapshot. If you restore to the most recent storage
snapshot, it should be the most recent HANA snapshot. (If you restore to an older storage snapshot, you
need to locate the HANA snapshot based on the time the storage snapshot was taken.)
7. Select Recover the database to a specific data backup or storage snapshot.
IMPORTANT
Before you proceed, make sure that you have a complete and contiguous chain of transaction-log backups. Without these
backups, you cannot restore the current state of the database.
1. Complete steps 1-6 from Recovering to the most recent HANA snapshot.
2. Select Recover the database to its most recent state.
3. Specify the location of the most recent HANA log backups. The location needs to contain all the HANA
transaction-log backups from the HANA snapshot to the most recent state.
4. Select a backup as a base from which to recover the database. In our example, the HANA snapshot in the
screenshot is the HANA snapshot that was included in the storage snapshot.
5. Clear the Use Delta Backups check box if deltas do not exist between the time of the HANA snapshot and
the most recent state.
6. On the summary screen, select Finish to start the restoration procedure.
Recovering to another point in time
To recover to a certain point in time between the HANA snapshot (included in the storage snapshot) and a point
later than the HANA snapshot, do the following:
1. Make sure that you have all the transaction-log backups from the HANA snapshot to the time you want to
recover to.
2. Begin the procedure under Recovering to the most recent state.
3. In step 2 of the procedure, in the Specify Recovery Type window, select Recover the database to the
following point in time, and specify the point in time. Then complete steps 3-6.
Monitoring the execution of snapshots
As you use storage snapshots of HANA Large Instances, you also need to monitor the execution of those storage
snapshots. The script that executes a storage snapshot writes output to a file and then saves it to the same
location as the Perl scripts. A separate file is written for each storage snapshot. The output of each file clearly
shows the various phases that the snapshot script executes:
1. Find the volumes for which a snapshot needs to be created.
2. Find the snapshots taken from these volumes.
3. Delete existing snapshots as needed to match the number of snapshots you specified.
4. Create an SAP HANA snapshot.
5. Create the storage snapshot over the volumes.
6. Delete the SAP HANA snapshot.
7. Rename the most recent snapshot to .0.
The most important part of the script output can be identified in this section:
You can see from this sample how the script records the creation of the HANA snapshot. In the scale-out case, this
process is initiated on the master node. The master node initiates the synchronous creation of the SAP HANA
snapshots on each of the worker nodes. Then, the storage snapshot is taken. After the successful execution of the
storage snapshots, the HANA snapshot is deleted. The deletion of the HANA snapshot is initiated from the master
node.
IMPORTANT
As with multitier HANA System Replication, a shutdown of the Tier 2 HANA instance or server unit blocks replication to the
disaster-recovery site when you use the HANA Large Instance disaster-recovery functionality.
NOTE
The HANA Large Instance storage-replication functionality is mirroring and replicating storage snapshots. Therefore, if you
do not perform storage snapshots as introduced in the backup section of this document, there cannot be any replication to
the disaster-recovery site. Storage snapshot execution is a prerequisite to storage replication to the disaster-recovery site.
The next step is to install the second SAP HANA instance on the HANA Large Instance unit in the disaster-recovery
Azure region, where you run the TST HANA instance. The newly installed SAP HANA instance needs to
have the same SID. The users created need to have the same UID and group ID as the production instance. If
the installation succeeded, you need to:
Stop the newly installed SAP HANA instance on the HANA Large Instance unit in the disaster-recovery Azure
region.
Unmount these PRD volumes and contact SAP HANA on Azure Service Management. The volumes can't stay
mounted to the unit, because they can't be accessible while they function as storage-replication targets.
The operations team is going to establish the replication relationship between the PRD volumes in the production
Azure region and the PRD volumes in the disaster recovery Azure region.
IMPORTANT
The /hana/log volume is not replicated, because this volume is not needed to restore the replicated SAP HANA database to a
consistent state in the disaster-recovery site.
The next step for you is to set up or adjust the storage snapshot backup schedule to get to your RTO and RPO in
the disaster case. To minimize the recovery point objective, set the following replication intervals in the HANA
Large Instance service:
The volumes that are covered by the combined snapshot (snapshot type = hana) replicate every 15 minutes to
the equivalent storage volume targets in the disaster-recovery site.
The transaction-log backup volume (snapshot type = logs) replicates every three minutes to the equivalent
storage volume targets in the disaster-recovery site.
To minimize the recovery point objective, set up the following:
Perform a hana type storage snapshot (see "Step 7: Perform snapshots") every 30 minutes to 1 hour.
Perform SAP HANA transaction-log backups every 5 minutes.
Perform a logs type storage snapshot every 5-15 minutes. With this interval period, you should be able to
achieve an RPO of around 15-25 minutes.
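One hedged way to sanity-check the stated 15-25 minute figure: in the worst case, the data at risk spans roughly one transaction-log backup interval, plus one logs storage-snapshot interval, plus one replication interval. A minimal sketch using the intervals from the list above:

```shell
# Worst-case RPO estimate (minutes); interval values taken from the recommendations above
log_backup=5       # SAP HANA transaction-log backup interval
logs_snapshot=15   # upper bound of the 'logs' storage-snapshot interval
replication=3      # replication interval of the log-backup volume
rpo=$((log_backup + logs_snapshot + replication))
echo "worst-case RPO ~ ${rpo} minutes"
```

With the shorter 5-minute logs snapshot interval, the same arithmetic lands near the lower end of the stated range.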
With this setup, the sequence of transaction-log backups, storage snapshots, and the replication of the HANA
transaction-log backup volume and /hana/data, and /hana/shared (includes /usr/sap) might look like the data
shown in this graphic:
To achieve an even better RPO in the disaster-recovery case, you can copy the HANA transaction-log backups
from SAP HANA on Azure (Large Instances) to the other Azure region. To achieve this further RPO reduction,
perform the following rough steps:
1. Back up the HANA transaction log as frequently as possible to /hana/logbackups.
2. Use rsync to copy the transaction-log backups to the NFS share hosted Azure virtual machines. The VMs are in
Azure virtual networks in the Azure production region and in the DR regions. You need to connect both Azure
virtual networks to the circuit connecting the production HANA Large Instances to Azure. See the graphics in
the Network considerations for disaster recovery with HANA Large Instances section.
3. Keep the transaction-log backups in the region in the VM attached to the NFS exported storage.
4. In a disaster-failover case, supplement the transaction-log backups you find on the /hana/logbackups volume
with more recently taken transaction-log backups on the NFS share in the disaster-recovery site.
5. Now you can start a transaction-log backup to restore to the latest backup that might be saved over to the DR
region.
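Step 2 above could be sketched as follows; the host name drnfsvm and the target path are purely hypothetical:

```shell
# Hypothetical sketch of step 2: copy new transaction-log backups to the NFS share
# hosted by an Azure VM (drnfsvm) reachable from the HANA Large Instance unit
rsync -av /hana/logbackups/ drnfsvm:/hana_logbackups_dr/
```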
After HANA Large Instance operations confirms that the replication relationship is set up and you start executing
storage snapshot backups, the data starts to be replicated.
As the replication progresses, the snapshots on the PRD volumes in the disaster recovery Azure regions are not
restored. They are only stored. If the volumes are mounted in such a state, they represent the state in which you
unmounted those volumes after the PRD SAP HANA instance was installed in the server unit in the disaster
recovery Azure region. They also represent the storage backups that are not yet restored.
In case of a failover, you also can choose to restore to an older storage snapshot instead of the latest storage
snapshot.
2. SAP HANA scans through the backup file locations and suggests the most recent transaction-log backup to
restore to. The scan can take a few minutes until a screen like the following appears:
3. Adjust some of the default settings:
Clear Use Delta Backups.
Select Initialize Log Area.
4. Select Finish.
A progress window, like the one shown here, should appear. Keep in mind that the example is of a disaster-
recovery restore of a 3-node scale-out SAP HANA configuration.
If the restore seems to hang at the Finish screen and does not show the progress screen, check to confirm that all
the SAP HANA instances on the worker nodes are running. If necessary, start the SAP HANA instances manually.
Failback from DR to a production site
You can fail back from a DR to a production site. Let's look at the case that the failover into the disaster-recovery
site was caused by problems in the production Azure region and not by your need to recover lost data. This
means you have been running your SAP production workload for a while in the disaster-recovery site. As the
problems in the production site are resolved, you want to fail back to your production site. Because you can't lose
data, the step back into the production site involves several steps and close cooperation with the SAP HANA on
Azure operations team. It is up to you to trigger the operations team to start synchronizing back to the production
site after the problems are resolved.
The sequence of steps looks like this:
1. The SAP HANA on Azure operations team gets the trigger to synchronize the production storage volumes from
the disaster-recovery storage volumes, which now represent the production state. In this state, the HANA Large
Instance unit in the production site is shut down.
2. The SAP HANA on Azure operations team monitors the replication and makes sure that a catch-up is achieved
before informing you as a customer.
3. You shut down the applications that use the production HANA Instance in the disaster-recovery site. You then
perform a HANA transaction-log backup. Then you stop the HANA instance running on the HANA Large
Instance units in the disaster-recovery site.
4. After the HANA instance running in the HANA Large Instance unit in the disaster-recovery site is shut down,
the operations team manually synchronizes the disk volumes again.
5. The SAP HANA on Azure operations team starts the HANA Large Instance unit in the production site again and
hands it over to you. Make sure that the SAP HANA instance is in a shutdown state at the startup time of the
HANA Large Instance unit.
6. You perform the same database restore steps as you did when failing over to the disaster-recovery site
previously.
Monitoring disaster recovery replication
You can monitor the status of your storage replication progress by executing the script
azure_hana_replication_status.pl . This script must be run from a unit running in the disaster-recovery location.
Otherwise, it is not going to function as expected. The script works regardless of whether or not replication is
active. The script can be run for every HANA Large Instance unit of your tenant in the disaster-recovery location. It
cannot be used to obtain details about the boot volume.
Call the script like this:
hana_data_hm3_mnt00002_t020_dp
-------------------------------------------------
Link Status: Broken-Off
Current Replication Activity: Idle
Latest Snapshot Replicated: snapmirror.c169b434-75c0-11e6-9903-00a098a13ceb_2154095454.2017-04-21_051515
Size of Latest Snapshot Replicated: 244KB
Current Lag Time between snapshots: - ***Less than 90 minutes is acceptable***
How to troubleshoot and monitor SAP HANA (large instances) on Azure
6/27/2017 6 min to read
CPU
For an alert triggered due to improper threshold setting, a resolution is to reset to the default value or a more
reasonable threshold value.
The Load graph might show high CPU consumption, or high consumption in the past:
An alert triggered due to high CPU utilization could be caused by several reasons, including, but not limited to:
execution of certain transactions, data loading, hanging of jobs, long running SQL statements, and bad query
performance (for example, with BW on HANA cubes).
Refer to the SAP HANA Troubleshooting: CPU Related Causes and Solutions site for detailed troubleshooting steps.
Operating System
One of the most important checks for SAP HANA on Linux is to make sure that Transparent Huge Pages are
disabled, see SAP Note #2131662 Transparent Huge Pages (THP) on SAP HANA Servers.
You can check whether Transparent Huge Pages are enabled through the following Linux command:
cat /sys/kernel/mm/transparent_hugepage/enabled
If always is enclosed in brackets ([always] madvise never), Transparent Huge Pages are enabled. If never is
enclosed in brackets (always madvise [never]), Transparent Huge Pages are disabled.
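The bracket convention can be illustrated with a small sketch; the helper function is purely illustrative, not part of any SAP tooling:

```shell
# Parse the THP state string: the bracketed word is the active setting
parse_thp() {
  case "$1" in
    *"[always]"*)  echo "enabled"  ;;
    *"[never]"*)   echo "disabled" ;;
    *)             echo "unknown"  ;;
  esac
}
parse_thp "[always] madvise never"   # prints: enabled
parse_thp "always madvise [never]"   # prints: disabled
```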
The following Linux command should return nothing: rpm -qa | grep ulimit. If the output shows that a ulimit
package is installed, uninstall it immediately.
Memory
You may observe that the amount of memory allocated by the SAP HANA database is higher than expected. The
following alerts indicate issues with high memory usage:
Host physical memory usage (Alert 1)
Memory usage of name server (Alert 12)
Total memory usage of Column Store tables (Alert 40)
Memory usage of services (Alert 43)
Memory usage of main storage of Column Store tables (Alert 45)
Runtime dump files (Alert 46)
Refer to the SAP HANA Troubleshooting: Memory Problems site for detailed troubleshooting steps.
Network
Refer to SAP Note #2081065 Troubleshooting SAP HANA Network and perform the network troubleshooting
steps in this SAP Note.
1. Analyze the round-trip time between server and client by running the SQL script HANA_Network_Clients.
2. Analyze internode communication by running the SQL script HANA_Network_Services.
3. Run the Linux command ifconfig; the output shows whether any packet losses are occurring.
4. Run the Linux command tcpdump.
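For step 3, a modern alternative to ifconfig for spotting packet loss is the ip tool; eth0 here is a placeholder interface name:

```shell
# Show per-interface RX/TX statistics, including dropped and errored packets
ip -s link show eth0
```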
Also, use the open source IPERF tool (or similar) to measure real application network performance.
Refer to the SAP HANA Troubleshooting: Networking Performance and Connectivity Problems site for detailed
troubleshooting steps.
Storage
From an end-user perspective, an application (or the system as a whole) runs sluggishly, is unresponsive, or can
even seem to hang if there are issues with I/O performance. In the Volumes tab in SAP HANA Studio, you can see
the attached volumes, and what volumes are used by each service.
In the Attached volumes area in the lower part of the screen, you can see details of the volumes, such as files and I/O statistics.
Refer to the SAP HANA Troubleshooting: I/O Related Root Causes and Solutions and SAP HANA Troubleshooting:
Disk Related Root Causes and Solutions site for detailed troubleshooting steps.
Diagnostic Tools
Perform an SAP HANA Health Check through HANA_Configuration_Minichecks. This tool returns potentially critical
technical issues that should have already been raised as alerts in SAP HANA Studio.
Refer to SAP Note #1969700 SQL statement collection for SAP HANA and download the SQL Statements.zip file
attached to that note. Store this .zip file on the local hard drive.
In SAP HANA Studio, on the System Information tab, right-click in the Name column and select Import SQL
Statements.
Select the SQL Statements.zip file stored locally, and a folder with the corresponding SQL statements will be
imported. At this point, the many different diagnostic checks can be run with these SQL statements.
For example, to test SAP HANA System Replication bandwidth requirements, right-click the Bandwidth statement
under Replication: Bandwidth and select Open in SQL Console.
The complete SQL statement opens allowing input parameters (modification section) to be changed and then
executed.
Another example is right-clicking on the statements under Replication: Overview. Select Execute from the
context menu:
Do the same for HANA_Configuration_Minichecks and check for any X marks in the C (Critical) column.
Sample outputs:
HANA_Configuration_MiniChecks_Rev102.01+1 for general SAP HANA checks.
HANA_Services_Overview for an overview of what SAP HANA services are currently running.
HANA_Services_Statistics for SAP HANA service information (CPU, memory, etc.).
This document provides detailed step-by-step instructions to set up high availability on the SUSE operating
system by using a STONITH device.
Disclaimer: This guide was derived by testing the setup in the Microsoft HANA Large Instances environment, where
it works successfully. Because the Microsoft Service Management team for HANA Large Instances does not support
the operating system, you might need to contact SUSE for any further troubleshooting or clarification on the
operating-system layer. The Microsoft Service Management team does set up the STONITH device and can be fully
involved in troubleshooting STONITH device issues.
Overview
To set up high availability by using SUSE clustering, the following prerequisites must be met.
Prerequisites
HANA Large Instances are provisioned.
The operating system is registered.
HANA Large Instances servers are connected to an SMT server to get patches and packages.
The operating system has the latest patches installed.
NTP (time server) is set up.
You have read and understood the latest version of the SUSE documentation on HA setup.
Setup details
In this guide, we used the following setup.
Operating System: SUSE 12 SP1
HANA Large Instances: 2xS192 (4 sockets, 2 TB)
HANA Version: HANA 2.0 SP1
Server Names: sapprdhdb95 (node1) and sapprdhdb96 (node2)
STONITH Device: iSCSI based STONITH device
NTP is set up on one of the HANA Large Instance nodes.
When you set up HANA Large Instances with HSR, you can request that the Microsoft Service Management team set
up STONITH. If you are an existing customer with HANA Large Instances already provisioned and you need the
STONITH device set up for your existing blades, provide the following information to the Microsoft Service
Management team in the service request form (SRF). You can request the SRF form through the Technical Account
Manager or your Microsoft contact for HANA Large Instance onboarding. New customers can request the STONITH
device at the time of provisioning; the inputs are available in the provisioning request form.
Server Name and Server IP address (e.g., myhanaserver1, 10.35.0.1)
Location (e.g., US East)
Customer Name (e.g., Microsoft)
After the STONITH device is configured, the Microsoft Service Management team provides you with the SBD device
name and the IP address of the iSCSI storage, which you use to configure the STONITH setup.
To set up end-to-end HA by using STONITH, follow these steps:
1. Identify the SBD device.
2. Initialize the SBD device.
3. Configure the cluster.
4. Set up the softdog watchdog.
5. Join the node to the cluster.
6. Validate the cluster.
7. Configure the resources to the cluster.
8. Test the failover process.
1.4 Run the following command to log in to the iSCSI device; it shows four sessions. Do this on both nodes.
iscsiadm -m node -l
1.5 Run the rescan script rescan-scsi-bus.sh. It shows the new disks created for you. Run it on both nodes. You
should see a LUN number that is greater than zero (for example, 1 or 2).
rescan-scsi-bus.sh
1.6 To get the device name, run the command fdisk -l. Run it on both nodes. Pick the device with a size of
178 MiB.
fdisk -l
2.2 Check what has been written to the device. Do this on both nodes.
Click Next.
Click Next.
By default, Booting is off; change it to on so that Pacemaker is started at boot. You can make the choice
based on your setup requirements. Click Next, and the cluster configuration is complete.
modprobe softdog
modprobe softdog
4.4 Check and ensure that softdog is running on both nodes, as shown below:
/usr/share/sbd/sbd.sh start
4.6 Test the SBD daemon on both nodes. You see two entries after you configure it on both nodes.
4.8 On the second node (node2), you can check the message status.
4.9 To adapt the SBD configuration, update the file /etc/sysconfig/sbd as follows. This needs to be done on both
nodes:
SBD_DEVICE=" <SBD Device Name>"
SBD_WATCHDOG="yes"
SBD_PACEMAKER="yes"
SBD_STARTMODE="clean"
SBD_OPTS=""
ha-cluster-join
If you receive an error while joining the cluster, see Scenario 6: Node 2 unable to join the cluster.
crm_mon
You can also log in to Hawk to check the cluster status at https://<node IP>:7630. The default user is hacluster and
the password is linux. If needed, you can change the password by using the passwd command.
sapprdhdb95:~ # vi crm-bs.txt
# enter the following to crm-bs.txt
property $id="cib-bootstrap-options" \
no-quorum-policy="ignore" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"
Add the configuration to the cluster.
# vi crm-sbd.txt
# enter the following to crm-sbd.txt
primitive stonith-sbd stonith:external/sbd \
params pcmk_delay_max="15" \
op monitor interval="15" timeout="15"
# vi crm-vip.txt
primitive rsc_ip_HA1_HDB10 ocf:heartbeat:IPaddr2 \
operations $id="rsc_ip_HA1_HDB10-operations" \
op monitor interval="10s" timeout="20s" \
params ip="10.35.0.197"
Now, stop the Pacemaker service on node2; the resources fail over to node1.
Before failover
After failover
9. Troubleshooting
This section describes a few failure scenarios that you might encounter during setup. You may not necessarily
face these issues.
Scenario 1: Cluster node not online
If any of the nodes does not show as online in the cluster manager, you can try the following to bring it online.
Start the iSCSI service
iscsiadm -m node -l
Expected Output
If yast2 does not open with the graphical view, follow these steps.
Install the required packages. You must be logged in as the root user and have SMT set up to download and install
the packages.
To install the packages, use yast > Software > Software Management > Dependencies > Install recommended
packages. The following screenshot illustrates the expected screens.
NOTE
You need to perform these steps on both nodes so that you can access the yast2 graphical view from both nodes.
Click Next
Click Finish
You also need to install the libqt4 and libyui-qt packages.
zypper -n install libqt4
zypper -n install libyui-qt
Yast2 should be able to open the graphical view now as shown here.
Click Continue
Click Next when the installation is complete
To fix it, delete the following line from the file /usr/lib/systemd/system/fstrim.timer
Persistent=true
After the preceding fix, node2 should be added to the cluster.
Introduction
This guide helps you set up a single-instance SAP HANA on Azure virtual machines (VMs) when you install SAP
NetWeaver 7.5 and SAP HANA 1.0 SP12 manually. The focus of this guide is on deploying SAP HANA on Azure. It
does not replace SAP documentation.
NOTE
This guide describes deployments of SAP HANA into Azure VMs. For information on deploying SAP HANA into HANA large
instances, see Using SAP on Azure virtual machines (VMs).
Prerequisites
This guide assumes that you are familiar with infrastructure as a service (IaaS) basics such as:
How to deploy virtual machines or virtual networks via the Azure portal or PowerShell.
The Azure cross-platform command-line interface (CLI), including the option to use JavaScript Object Notation
(JSON) templates.
This guide also assumes that you are familiar with:
SAP HANA and SAP NetWeaver and how to install them on-premises.
Installing and operating SAP HANA and SAP application instances on Azure.
The following concepts and procedures:
Planning for SAP deployment on Azure, including Azure Virtual Network planning and Azure Storage
usage. See SAP NetWeaver on Azure Virtual Machines (VMs) - Planning and implementation guide.
Deployment principles and ways to deploy VMs in Azure. See Azure Virtual Machines deployment for
SAP.
High availability for SAP NetWeaver ASCS (ABAP SAP Central Services), SCS (SAP Central Services), and
ERS (Evaluated Receipt Settlement) on Azure. See High availability for SAP NetWeaver on Azure VMs.
Details on how to improve efficiency in leveraging a multi-SID installation of ASCS/SCS on Azure. See
Create a SAP NetWeaver multi-SID configuration.
Principles of running SAP NetWeaver based on Linux-driven VMs in Azure. See Running SAP NetWeaver
on Microsoft Azure SUSE Linux VMs. This guide provides specific settings for Linux in Azure VMs and
details on how to properly attach Azure storage disks to Linux VMs.
At this time, Azure VMs are certified by SAP for SAP HANA scale-up configurations only. Scale-out configurations
with SAP HANA workloads are not yet supported. For SAP HANA high availability in cases of scale-up
configurations, see High availability of SAP HANA on Azure virtual machines (VMs).
If you want to deploy an SAP HANA instance or an S/4HANA or BW/4HANA system quickly, consider using the
SAP Cloud Appliance Library. You can find documentation about deploying, for example, an S/4HANA system
through SAP CAL on Azure in this guide. All you need is an Azure subscription and an SAP user that can be
registered with the SAP Cloud Appliance Library.
Additional resources
SAP HANA backup
For information on backing up SAP HANA databases on Azure VMs, see:
Backup guide for SAP HANA on Azure Virtual Machines
SAP HANA Azure Backup on file level
SAP HANA backup based on storage snapshots
SAP Cloud Appliance Library
For information on using SAP Cloud Appliance Library to deploy S/4HANA or BW/4HANA, see Deploy SAP
S/4HANA or BW/4HANA on Microsoft Azure.
SAP HANA-supported operating systems
For information on SAP HANA-supported operating systems, see SAP Support Note #2235581 - SAP HANA:
Supported Operating Systems. Azure VMs support only a subset of these operating systems. The following
operating systems are supported to deploy SAP HANA on Azure:
SUSE Linux Enterprise Server 12.x
Red Hat Enterprise Linux 7.2
For additional SAP documentation about SAP HANA and different Linux operating systems, see:
SAP Support Note #171356 - SAP Software on Linux: General Information
SAP Support Note #1944799 - SAP HANA Guidelines for SLES Operating System Installation
SAP Support Note #2205917 - SAP HANA DB Recommended OS Settings for SLES 12 for SAP Applications
SAP Support Note #1984787 - SUSE Linux Enterprise Server 12: Installation Notes
SAP Support Note #1391070 - Linux UUID Solutions
SAP Support Note #2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL) Operating System
SAP Support Note #2292690 - SAP HANA DB: Recommended OS Settings for RHEL 7
SAP monitoring in Azure
For information about SAP monitoring in Azure, see:
SAP Note 2191498. This note discusses SAP "enhanced monitoring" with Linux VMs on Azure.
SAP Note 1102124. This note discusses information about SAPOSCOL on Linux.
SAP Note 2178632. This note discusses key monitoring metrics for SAP on Microsoft Azure.
Azure VM types
Azure VM types and SAP-supported workload scenarios used with SAP HANA are documented in SAP certified IaaS
Platforms.
Azure VM types that are certified by SAP for SAP NetWeaver or the S/4HANA application layer are documented in
SAP Note 1928533 - SAP Applications on Azure: Supported Products and Azure VM types.
NOTE
SAP-Linux-Azure integration is supported only on Azure Resource Manager and not the classic deployment model.
Key steps for SAP HANA installation when you use SAP SWPM
This section lists the key steps for a manual, single-instance SAP HANA installation when you use SAP SWPM to
perform a distributed SAP NetWeaver 7.5 installation. The individual steps are explained in more detail in
screenshots later in this guide.
1. Create an Azure virtual network that includes two test VMs.
2. Deploy the two Azure VMs with operating systems (in our example, SUSE Linux Enterprise Server (SLES) and
SLES for SAP Applications 12 SP1), according to the Azure Resource Manager model.
3. Attach two Azure standard or premium storage disks (for example, 75-GB or 500-GB disks) to the application
server VM.
4. Attach premium storage disks to the HANA DB server VM. For details, see the "Disk setup" section later in this
guide.
5. Depending on size or throughput requirements, attach multiple disks, and then create striped volumes by using
either logical volume management or a multiple-devices administration tool (MDADM) at the OS level inside the
VM.
6. Create XFS file systems on the attached disks or logical volumes.
7. Mount the new XFS file systems at the OS level. Use one file system for all the SAP software. Use the other file
system for the /sapmnt directory and backups, for example. On the SAP HANA DB server, mount the XFS file
systems on the premium storage disks as /hana and /usr/sap. This process is necessary to prevent the root file
system, which isn't large on Linux Azure VMs, from filling up.
8. Enter the local IP addresses of the test VMs in the /etc/hosts file.
9. Enter the nofail parameter in the /etc/fstab file.
10. Set Linux kernel parameters according to the Linux OS release you are using. For more information, see the
appropriate SAP notes that discuss HANA and the "Kernel parameters" section in this guide.
11. Add swap space.
12. Optionally, install a graphical desktop on the test VMs. Otherwise, use a remote SAPinst installation.
13. Download the SAP software from the SAP Service Marketplace.
14. Install the SAP ASCS instance on the app server VM.
15. Share the /sapmnt directory among the test VMs by using NFS. The application server VM is the NFS server.
16. Install the database instance, including HANA, by using SWPM on the DB server VM.
17. Install the primary application server (PAS) on the application server VM.
18. Start SAP Management Console (SAP MC). Connect with SAP GUI or HANA Studio, for example.
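Steps 7 and 9 of the list above can be sketched as /etc/fstab entries. The device paths here are assumptions for illustration only; the nofail option keeps the VM bootable even if a data disk is temporarily unavailable:

```
# Illustrative /etc/fstab entries - substitute your actual device IDs or UUIDs
/dev/disk/by-id/scsi-example-lun2   /hana      xfs   defaults,nofail   0   2
/dev/disk/by-id/scsi-example-lun3   /usr/sap   xfs   defaults,nofail   0   2
```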
Key steps for SAP HANA installation when you use HDBLCM
This section lists the key steps for a manual, single-instance SAP HANA installation when you use SAP HDBLCM to
perform a distributed SAP NetWeaver 7.5 installation. The individual steps are explained in more detail in
screenshots throughout this guide.
1. Create an Azure virtual network that includes two test VMs.
2. Deploy two Azure VMs with operating systems (in our example, SLES and SLES for SAP Applications 12 SP1)
according to the Azure Resource Manager model.
3. Attach two Azure standard or premium storage disks (for example, 75-GB or 500-GB disks) to the app server
VM.
4. Attach premium storage disks to the HANA DB server VM. For details, see the "Disk setup" section later in this
guide.
5. Depending on size or throughput requirements, attach multiple disks and create striped volumes by using either
logical volume management or a multiple-devices administration tool (MDADM) at the OS level inside the VM.
6. Create XFS file systems on the attached disks or logical volumes.
7. Mount the new XFS file systems at the OS level. Use one file system for all the SAP software, and use the other
one for the /sapmnt directory and backups, for example. On the SAP HANA DB server, mount the XFS file
systems on the premium storage disks as /hana and /usr/sap. This process is necessary to help prevent the root
file system, which isn't large on Linux Azure VMs, from filling up.
8. Enter the local IP addresses of the test VMs in the /etc/hosts file.
9. Enter the nofail parameter in the /etc/fstab file.
10. Set kernel parameters according to the Linux OS release you are using. For more information, see the
appropriate SAP notes that discuss HANA and the "Kernel parameters" section in this guide.
11. Add swap space.
12. Optionally, install a graphical desktop on the test VMs. Otherwise, use a remote SAPinst installation.
13. Download the SAP software from the SAP Service Marketplace.
14. Create a group, sapsys, with group ID 1001, on the HANA DB server VM.
15. Install SAP HANA on the DB server VM by using HANA Database Lifecycle Manager (HDBLCM).
16. Install the SAP ASCS instance on the app server VM.
17. Share the /sapmnt directory among the test VMs by using NFS. The application server VM is the NFS server.
18. Install the database instance, including HANA, by using SWPM on the HANA DB server VM.
19. Install the primary application server (PAS) on the application server VM.
20. Start SAP MC. Connect through SAP GUI or HANA Studio.
Depending on the kind of issue, patches are classified by category and severity. Commonly used values for
category are: security, recommended, optional, feature, document, or yast. Commonly used values for
severity are: critical, important, moderate, low, or unspecified.
The zypper command looks only for the updates that your installed packages need. For example, you could use
this command:
sudo zypper patch --category=security,recommended --severity=critical,important
You can add the parameter --dry-run to test the update without actually updating the system.
Disk setup
The root file system in a Linux VM on Azure has a size limitation. Therefore, it's necessary to attach additional disk
space to an Azure VM for running SAP. For SAP application server Azure VMs, the use of Azure standard storage
disks might be sufficient. However, for SAP HANA DBMS Azure VMs, the use of Azure Premium Storage disks for
production and non-production implementations is mandatory.
Based on the SAP HANA TDI Storage Requirements, the following Azure Premium Storage configuration is
suggested:
[Table: suggested Azure Premium Storage configuration. Columns: VM SKU, RAM, /hana/data and /hana/log
(striped with LVM or MDADM), /hana/shared, /root volume, /usr/sap.]
In the suggested disk configuration, the HANA data volume and log volume are placed on the same set of Azure
premium storage disks that are striped with LVM or MDADM. It is not necessary to define any RAID redundancy
level because Azure Premium Storage keeps three images of the disks for redundancy. To make sure that you
configure enough storage, consult the SAP HANA TDI Storage Requirements and SAP HANA Server Installation and
Update Guide. Also consider the different virtual hard disk (VHD) throughput volumes of the different Azure
premium storage disks as documented in High-performance Premium Storage and managed disks for VMs.
You can add more premium storage disks to the HANA DBMS VMs for storing database or transaction log backups.
For more information about the two main tools used to configure striping, see the following articles:
Configure software RAID on Linux
Configure LVM on a Linux VM in Azure
For more information on attaching disks to Azure VMs running Linux as a guest OS, see Add a disk to a Linux VM.
Azure Premium Storage allows you to define disk caching modes. For the striped set holding /hana/data and
/hana/log, disk caching should be disabled. For the other volumes (disks), the caching mode should be set to
ReadOnly.
For more information, see Premium Storage: High-performance storage for Azure Virtual Machine workloads.
To find sample JSON templates for creating VMs, go to Azure Quickstart Templates. The vm-simple-sles template is
a basic template. It includes a storage section, with an additional 100-GB data disk. This template can be used as a
base. You can adapt the template to your specific configuration.
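For orientation, a data-disk entry in such a JSON template might look like the following; the disk name, LUN, size, and caching values are illustrative only:

```
"dataDisks": [
  {
    "lun": 0,
    "name": "datadisk1",
    "createOption": "Empty",
    "diskSizeGB": 100,
    "caching": "None"
  }
]
```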
NOTE
It is important to attach the Azure storage disk by using a UUID as documented in Running SAP NetWeaver on Microsoft
Azure SUSE Linux VMs.
In the test environment, two Azure standard storage disks were attached to the SAP app server VM, as shown in the
following screenshot. One disk stored all the SAP software (including NetWeaver 7.5, SAP GUI, and SAP HANA) for
installation. The second disk ensured that enough free space would be available for additional requirements (for
example, backup and test data) and for the /sapmnt directory (that is, SAP profiles) to be shared among all VMs that
belong to the same SAP landscape.
Kernel parameters
SAP HANA requires specific Linux kernel settings, which are not part of the standard Azure gallery images and must
be set manually. Depending on whether you use SUSE or Red Hat, the parameters might be different. The SAP
Notes listed earlier give information about those parameters. In the screenshots shown, SUSE Linux 12 SP1 was
used.
SLES for SAP Applications 12 GA and SLES for SAP Applications 12 SP1 have a new tool, tuned-adm, that replaces
the old sapconf tool. A special SAP HANA profile is available for tuned-adm. To tune the system for SAP HANA,
enter the following as a root user:
tuned-adm profile sap-hana
For more information about tuned-adm, see the SUSE documentation about tuned-adm.
In the following screenshot, you can see how tuned-adm changed the transparent_hugepage and numa_balancing
values, according to the required SAP HANA settings.
To make the SAP HANA kernel settings permanent, use grub2 on SLES 12. For more information about grub2, go
to the Configuration File Structure section of the SUSE documentation.
The following screenshot shows how the kernel settings were changed in the configuration file and then compiled
by using grub2-mkconfig:
Another option is to change the settings by using YaST and the Boot Loader > Kernel Parameters settings:
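As an illustration, the permanent settings might look like this in /etc/default/grub before you run grub2-mkconfig. Treat the exact parameter values as assumptions and verify them against the SAP Notes for your OS release:

```
# /etc/default/grub - illustrative kernel parameters only
# The "..." stands for whatever parameters your image already sets.
GRUB_CMDLINE_LINUX_DEFAULT="... transparent_hugepage=never numa_balancing=disable"
```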
File systems
The following screenshot shows two file systems that were created on the SAP app server VM on top of the two
attached Azure standard storage disks. Both file systems are of type XFS and are mounted to /sapdata and
/sapsoftware.
It is not mandatory to structure your file systems in this way. You have other options for structuring the disk space.
The most important consideration is to prevent the root file system from running out of free space.
Regarding the SAP HANA DB VM, during a database installation, when you use SAPinst (SWPM) and the typical
installation option, everything is installed under /hana and /usr/sap. The default location for the SAP HANA log
backup is under /usr/sap. Again, because it's important to prevent the root file system from running out of storage
space, make sure that there is enough free space under /hana and /usr/sap before you install SAP HANA by using
SWPM.
For a description of the standard file-system layout of SAP HANA, see the SAP HANA Server Installation and Update
Guide.
When you install SAP NetWeaver on a standard SLES/SLES for SAP Applications 12 Azure gallery image, a message
is displayed that says there is no swap space, as shown in the following screenshot. To dismiss this message, you
can manually add a swap file by using dd, mkswap, and swapon. To learn how, search for "Adding a swap file
manually" in the Using the YaST Partitioner section of the SUSE documentation.
Another option is to configure swap space by using the Linux VM agent. For more information, see the Azure Linux
Agent User Guide.
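The dd/mkswap/swapon approach can be sketched as follows. The file location and the 64-MiB size are arbitrary choices for illustration, and activating the swap file requires root:

```shell
# Create and format a small swap file (size it to your own requirements)
dd if=/dev/zero of=./swapfile bs=1M count=64 status=none
chmod 600 ./swapfile
mkswap ./swapfile
# Activation needs root privileges:
# swapon ./swapfile
```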
The /etc/hosts file
Before you start to install SAP, make sure you include the host names and IP addresses of the SAP VMs in the
/etc/hosts file. Deploy all the SAP VMs within one Azure virtual network, and then use the internal IP addresses, as
shown here:
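As an illustration, the entries might look like the following; these host names and internal IP addresses are hypothetical:

```
# /etc/hosts - example entries for the two SAP test VMs
10.0.0.4   sapappvm
10.0.0.5   saphanavm
```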
4. Run chkconfig to make sure that xrdp starts automatically after a reboot:
chkconfig --level 3 xrdp on
5. If you have an issue with the RDP connection, try to restart (from a PuTTY window, for example):
/etc/xrdp/xrdp.sh restart
6. If an xrdp restart as mentioned in the previous step doesn't work, check the /var/run directory for a file
named xrdp.pid. If you find it, remove it, and try to restart again.
Starting SAP MC
After you install the GNOME desktop, starting the graphical Java-based SAP MC from Firefox while running in an
Azure SLES 12/SLES for SAP Applications 12 VM might display an error because of the missing Java-browser plug-
in.
The URL to start the SAP MC is <server>:5<instance_number>13.
For more information, see Starting the Web-Based SAP Management Console.
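The port pattern 5<instance_number>13 can be checked with a quick sketch; the host name and instance number below are hypothetical:

```shell
# Build the SAP MC address from the documented pattern <server>:5<instance_number>13
SERVER=saphanavm
INSTANCE=03
echo "${SERVER}:5${INSTANCE}13"   # → saphanavm:50313
```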
The following screenshot shows the error message that is displayed when the Java-browser plug-in is missing:
One way to solve the problem is to install the missing plug-in by using YaST, as shown in the following screenshot:
When you re-enter the SAP Management Console URL, a message appears asking you to activate the plug-in:
You might also receive an error message about a missing file, javafx.properties. This is related to the requirement of
Oracle Java 1.8 for SAP GUI 7.4. (See SAP Note 2059429.) Neither the IBM Java version nor the openjdk package
delivered with SLES/SLES for SAP Applications 12 includes the needed javafx.properties file. The solution is to
download and install Java SE 8 from Oracle.
For information about a similar issue with openjdk on openSUSE, see the discussion thread SAPGui 7.4 Java for
openSUSE 42.1 Leap.
On the app server VM, the /sapmnt directory should be shared via NFS by using the rw and no_root_squash
options. The defaults are ro and root_squash, which might lead to problems when you install the database
instance.
As the next screenshot shows, the /sapmnt share from the app server VM must be configured on the SAP HANA DB
server VM by using NFS Client (and YaST).
To perform a distributed NetWeaver 7.5 installation (Database Instance), as shown in the following screenshot,
sign in to the SAP HANA DB server VM and start SWPM.
After you select typical installation and the path to the installation media, enter a DB SID, the host name, the
instance number, and the DB system administrator password.
Enter the password for the DBACOCKPIT schema:
Enter a question for the SAPABAP1 schema password:
After each task is completed, a green check mark is displayed next to each phase of the DB installation process. The
message "Execution of ... Database Instance has completed" is displayed.
After successful installation, the SAP Management Console should also show the DB instance as "green" and
display the full list of SAP HANA processes (hdbindexserver, hdbcompileserver, and so forth).
The following screenshot shows the parts of the file structure under the /hana/shared directory that SWPM created
during the HANA installation. Because there is no option to specify a different path, it's important to mount
additional disk space under the /hana directory before the SAP HANA installation by using SWPM. This prevents
the root file system from running out of free space.
This screenshot shows the file structure of the /usr/sap directory:
The last step of the distributed ABAP installation is to install the primary application server instance:
After the primary application server instance and SAP GUI are installed, use the DBA Cockpit transaction to
confirm that the SAP HANA installation has finished correctly:
As a final step, you might want to first install HANA Studio in the SAP app server VM, and then connect to the SAP
HANA instance that's running on the DB server VM:
The following screenshot displays all the key options that you selected previously.
IMPORTANT
Directories that are named for HANA log and data volumes, as well as the installation path (/hana/shared in this sample) and
/usr/sap, should not be part of the root file system. These directories belong to the Azure data disks that were attached to
the VM (described in the "Disk setup" section). This approach helps prevent the root file system from running out of space. In
the following screenshot, you can see that the HANA system administrator has user ID 1005 and is part of the sapsys
group (ID 1001) that was defined before the installation.
You can check the <HANA SID>adm user (azdadm in the following screenshot) details in the /etc/passwd file:
After you install SAP HANA by using HDBLCM, you can see the file structure in SAP HANA Studio, as shown in the
following screenshot. The SAPABAP1 schema, which includes all the SAP NetWeaver tables, isn't available yet.
After you install SAP HANA, you can install SAP NetWeaver on top of it. As shown in the following screenshot, the
installation was performed as a distributed installation by using SWPM (as described in the previous section). When
you install the database instance by using SWPM, you enter the same data that you used with HDBLCM (for example,
host name, HANA SID, and instance number). SWPM then uses the existing HANA installation and adds more schemas.
The following screenshot shows the SWPM installation step where you enter data about the DBACOCKPIT schema:
Enter data about the SAPABAP1 schema:
After the SWPM database instance installation is completed, you can see the SAPABAP1 schema in SAP HANA
Studio:
Finally, after the SAP app server and SAP GUI installations are completed, you can verify the HANA DB instance by
using the DBA Cockpit transaction:
SAP software downloads
You can download software from the SAP Service Marketplace, as shown in the following screenshots.
Download NetWeaver 7.5 for Linux/HANA:
This article describes how to deploy S/4HANA on Azure by using the SAP Cloud Appliance Library (SAP CAL) 3.0. To
deploy other SAP HANA-based solutions, such as BW/4HANA, follow the same steps.
NOTE
For more information about the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also has a blog about the SAP
Cloud Appliance Library 3.0.
NOTE
As of May 29, 2017, you can use the Azure Resource Manager deployment model in addition to the less-preferred classic
deployment model to deploy the SAP CAL. We recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.
NOTE
The next steps show how to create an SAP CAL account for Resource Manager deployments. If you already have an
SAP CAL account that is linked to the classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource Manager model.
2. Create a new SAP CAL account. The Accounts page shows three choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer preferred.
b. Microsoft Azure is the new Resource Manager deployment model.
c. Windows Azure operated by 21Vianet is an option in China that uses the classic deployment model.
To deploy in the Resource Manager model, select Microsoft Azure.
3. Enter the Azure Subscription ID, which you can find in the Azure portal.
4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click Authorize. The following
page appears in the browser tab:
5. If more than one user is listed, choose the Microsoft account that is linked to be the coadministrator of the
Azure subscription you selected. The following page appears in the browser tab:
6. Click Accept. If the authorization is successful, the SAP CAL account definition displays again. After a short
time, a message confirms that the authorization process was successful.
7. To assign the newly created SAP CAL account to your user, enter your User ID in the text box on the right
and click Add.
8. To associate your account with the user that you use to sign in to the SAP CAL, click Review.
9. To create the association between your user and the newly created SAP CAL account, click Create.
You successfully created an SAP CAL account that is able to:
Use the Resource Manager deployment model.
Deploy SAP systems into your Azure subscription.
Now you can start to deploy S/4HANA into your user subscription in Azure.
NOTE
Before you continue, determine whether you have Azure core quotas for Azure H-series VMs. At the moment, the SAP CAL
uses H-series VMs of Azure to deploy some of the SAP HANA-based solutions. Your Azure subscription might not have any
H-series core quotas. If so, you might need to contact Azure support to get a quota of at least 16 H-series cores.
NOTE
When you deploy a solution on Azure in the SAP CAL, you might find that you can choose only one Azure region. To deploy
into Azure regions other than the one suggested by the SAP CAL, you need to purchase a CAL subscription from SAP. You
also might need to open a message with SAP to have your CAL account enabled to deliver into Azure regions other than the
ones initially suggested.
Deploy a solution
Let's deploy a solution from the Solutions page of the SAP CAL. The SAP CAL has two sequences to deploy:
A basic sequence that uses one page to define the system to be deployed
An advanced sequence that gives you certain choices on VM sizes
We demonstrate the basic path to deployment here.
1. On the Account Details page, you need to:
a. Select an SAP CAL account. (Use an account that is associated to deploy with the Resource Manager
deployment model.)
b. Enter an instance Name.
c. Select an Azure Region. The SAP CAL suggests a region. If you need another Azure region and you don't
have an SAP CAL subscription, you need to order a CAL subscription with SAP.
d. Enter a master Password of eight or nine characters for the solution. The password is used for the
administrators of the different components.
2. Click Create, and in the message box that appears, click OK.
3. In the Private Key dialog box, click Store to store the private key in the SAP CAL. To use password
protection for the private key, click Download.
4. Read the SAP CAL Warning message, and click OK.
Now the deployment takes place. After some time, depending on the size and complexity of the solution (the
SAP CAL provides an estimate), the status is shown as active and ready for use.
5. To find the virtual machines collected with the other associated resources in one resource group, go to the
Azure portal:
6. On the SAP CAL portal, the status appears as Active. To connect to the solution, click Connect. Different
options to connect to the different components are deployed within this solution.
7. Before you can use one of the options to connect to the deployed systems, click Getting Started Guide.
The documentation names the users for each of the connectivity methods. The passwords for those users are
set to the master password you defined at the beginning of the deployment process. In the documentation,
other more functional users are listed with their passwords, which you can use to sign in to the deployed
system.
For example, if you use the SAP GUI that's preinstalled on the Windows Remote Desktop machine, the S/4
system might look like this:
Or if you use the DBACockpit, the instance might look like this:
Within a few hours, a healthy SAP S/4 appliance is deployed in Azure.
If you bought an SAP CAL subscription, SAP fully supports deployments through the SAP CAL on Azure. The
support queue is BC-VCM-CAL.
High Availability of SAP HANA on Azure Virtual
Machines (VMs)
7/31/2017 16 min to read Edit Online
On-premises, you can use either HANA System Replication or shared storage to establish high availability for
SAP HANA. Azure currently supports only HANA System Replication. SAP HANA System Replication
consists of one master node and at least one slave node. Changes to the data on the master node are replicated to
the slave nodes synchronously or asynchronously.
This article describes how to deploy and configure the virtual machines, install the cluster framework, and install
and configure SAP HANA System Replication. The example configurations and installation commands use instance
number 03 and HANA System ID HDB.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
SAP Note 1944799 has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux (this article)
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA SR Performance Optimized Scenario. The guide contains all required information to set up SAP
HANA System Replication on-premises. Use this guide as a baseline.
Deploying Linux
The resource agent for SAP HANA is included in SUSE Linux Enterprise Server for SAP Applications. The Azure
Marketplace contains an image for SUSE Linux Enterprise Server for SAP Applications 12 with BYOS (Bring Your
Own Subscription) that you can use to deploy new virtual machines.
Manual Deployment
1. Create a Resource Group
2. Create a Virtual Network
3. Create two Storage Accounts
4. Create an Availability Set
Set max update domain
5. Create a Load Balancer (internal)
Select VNET of step above
6. Create Virtual Machine 1
https://portal.azure.com/#create/suse-byos.sles-for-sap-byos12-sp1
SLES For SAP Applications 12 SP1 (BYOS)
Select Storage Account 1
Select Availability Set
7. Create Virtual Machine 2
https://portal.azure.com/#create/suse-byos.sles-for-sap-byos12-sp1
SLES For SAP Applications 12 SP1 (BYOS)
Select Storage Account 2
Select Availability Set
8. Add Data Disks
9. Configure the load balancer
a. Create a frontend IP pool
a. Open the load balancer, select frontend IP pool and click Add
b. Enter the name of the new frontend IP pool (for example hana-frontend)
c. Click OK
d. After the new frontend IP pool is created, write down its IP address
b. Create a backend pool
a. Open the load balancer, select backend pools and click Add
b. Enter the name of the new backend pool (for example hana-backend)
c. Click Add a virtual machine
d. Select the Availability Set you created earlier
e. Select the virtual machines of the SAP HANA cluster
f. Click OK
c. Create a health probe
a. Open the load balancer, select health probes and click Add
b. Enter the name of the new health probe (for example hana-hp)
c. Select TCP as protocol, port 62503, keep Interval 5 and Unhealthy threshold 2
d. Click OK
d. Create load balancing rules
a. Open the load balancer, select load balancing rules and click Add
b. Enter the name of the new load balancer rule (for example hana-lb-30315)
c. Select the frontend IP address, backend pool and health probe you created earlier (for example
hana-frontend)
d. Keep protocol TCP, enter port 30315
e. Increase idle timeout to 30 minutes
f. Make sure to enable Floating IP
g. Click OK
h. Repeat the steps above for port 30317
Deploy with template
You can use one of the quickstart templates on GitHub to deploy all required resources. The template deploys the
virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template:
1. Open the database template or the converged template on the Azure portal. The database template only creates
the load-balancing rules for a database, whereas the converged template also creates the load-balancing rules
for an ASCS/SCS and ERS (Linux only) instance. If you plan to install an SAP NetWeaver-based system and you
also want to install the ASCS/SCS instance on the same machines, use the converged template.
2. Enter the following parameters
a. Sap System Id
Enter the SAP system Id of the SAP system you want to install. The Id will be used as a prefix for the
resources that are deployed.
b. Stack Type (only applicable if you use the converged template)
Select the SAP NetWeaver stack type
c. Os Type
Select one of the Linux distributions. For this example, select SLES 12 BYOS
d. Db Type
Select HANA
e. Sap System Size
The amount of SAPS the new system will provide. If you are not sure how many SAPS the system will
require, ask your SAP Technology Partner or System Integrator.
f. System Availability
Select HA
g. Admin Username and Admin Password
A new user is created that can be used to log on to the machine.
h. New Or Existing Subnet
Determines whether a new virtual network and subnet should be created or an existing subnet should be
used. If you already have a virtual network that is connected to your on-premises network, select existing.
i. Subnet Id
The ID of the subnet to which the virtual machines should be connected. Select the subnet of your VPN
or ExpressRoute virtual network to connect the virtual machine to your on-premises network. The ID
usually looks like /subscriptions/<subscription id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet name>
Setting up Linux HA
The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] - only
applicable to node 2.
1. [A] SLES for SAP BYOS only - Register SLES to be able to use the repositories
2. [A] SLES for SAP BYOS only - Add public-cloud module
3. [A] Update SLES
# insert the public key you copied in the last step into the authorized keys file on the second server
sudo vi /root/.ssh/authorized_keys
# insert the public key you copied in the last step into the authorized keys file on the first server
sudo vi /root/.ssh/authorized_keys
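The key pairs themselves can be generated with ssh-keygen. A minimal sketch, assuming an RSA key with an empty passphrase for unattended cluster use (the file name id_rsa_hana is illustrative):

```shell
mkdir -p ~/.ssh
# remove an old key pair of the same name first, if one exists
rm -f ~/.ssh/id_rsa_hana ~/.ssh/id_rsa_hana.pub
# -N "" = empty passphrase, -q = no interactive output
ssh-keygen -t rsa -b 2048 -N "" -q -f ~/.ssh/id_rsa_hana
# print the public key; paste this line into authorized_keys on the other node
cat ~/.ssh/id_rsa_hana.pub
```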
Create a volume group for the data files, one volume group for the log files and one for the shared
directory of SAP HANA
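The volume groups and logical volumes can be created along these lines. This is a sketch that assumes the attached data disks show up as /dev/sdc, /dev/sdd, and /dev/sde (device names vary by VM); the volume group and logical volume names match the ones referenced in the steps below:

```
# create physical volumes on the attached data disks (device names are assumptions)
sudo pvcreate /dev/sdc /dev/sdd /dev/sde
# one volume group each for data, log, and the shared directory
sudo vgcreate vg_hana_data /dev/sdc
sudo vgcreate vg_hana_log /dev/sdd
sudo vgcreate vg_hana_shared /dev/sde
# one logical volume per volume group, using all available space, formatted with xfs
sudo lvcreate -l 100%FREE -n hana_data vg_hana_data
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log
sudo lvcreate -l 100%FREE -n hana_shared vg_hana_shared
sudo mkfs.xfs /dev/vg_hana_data/hana_data
sudo mkfs.xfs /dev/vg_hana_log/hana_log
sudo mkfs.xfs /dev/vg_hana_shared/hana_shared
```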
Create the mount directories and copy the UUID of all logical volumes
sudo mkdir -p /hana/data
sudo mkdir -p /hana/log
sudo mkdir -p /hana/shared
# write down the id of /dev/vg_hana_data/hana_data, /dev/vg_hana_log/hana_log and
/dev/vg_hana_shared/hana_shared
sudo blkid
sudo vi /etc/fstab
sudo mount -a
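The /etc/fstab entries created above might look like the following, using the UUIDs noted from blkid (the placeholders must be replaced with your values; the mount options are one reasonable choice, not a requirement):

```
UUID=<UUID of hana_data> /hana/data xfs defaults,nofail 0 2
UUID=<UUID of hana_log> /hana/log xfs defaults,nofail 0 2
UUID=<UUID of hana_shared> /hana/shared xfs defaults,nofail 0 2
```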
b. Plain Disks
For small or demo systems, you can place your HANA data and log files on one disk. The following
commands create a partition on /dev/sdc and format it with xfs.
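A sketch of such a partition-and-format sequence, assuming the disk appears as /dev/sdc (the piped fdisk answers simply accept the defaults for a single primary partition):

```
# create one primary partition spanning /dev/sdc, accepting the defaults
echo -e "n\np\n1\n\n\nw" | sudo fdisk /dev/sdc
# format the new partition with xfs
sudo mkfs.xfs /dev/sdc1
```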
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
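For example, assuming the host names saphanavm1 and saphanavm2 used elsewhere in this article (the IP addresses are placeholders):

```
10.0.0.5 saphanavm1
10.0.0.6 saphanavm2
```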
sudo ha-cluster-init
sudo ha-cluster-join
13. [A] Configure corosync to use a different transport, and add a node list. The cluster will not work otherwise.
sudo vi /etc/corosync/corosync.conf
[...]
interface {
[...]
}
transport: udpu
}
nodelist {
node {
ring0_addr: <ip address of node 1>
}
node {
ring0_addr: <ip address of node 2>
}
}
logging {
[...]
Then restart the corosync service, for example with sudo service corosync restart.
PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'
PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync passwd
PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbsql -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"
6. [1] Switch to the sapsid user (for example hdbadm) and create the primary site.
su - hdbadm
hdbnsutil -sr_enable -name=SITE1
7. [2] Switch to the sapsid user (for example hdbadm) and create the secondary site.
su - hdbadm
sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=saphanavm1 --remoteInstance=03 --replicationMode=sync --name=SITE2
property $id="cib-bootstrap-options" \
no-quorum-policy="ignore" \
stonith-enabled="true" \
stonith-action="reboot" \
stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
resource-stickiness="1000" \
migration-threshold="5000"
op_defaults $id="op-options" \
timeout="600"
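Cluster defaults like the ones above are typically saved to a file and then loaded into the cluster configuration with the crm shell. A sketch, assuming the snippet was saved as crm-defaults.txt (the file name is illustrative):

```
sudo crm configure load update crm-defaults.txt
```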
sudo vi crm-saphana.txt
# enter the following to crm-saphana.txt
# replace HDB and 03 with your HANA system id and instance number
ms msl_SAPHana_HDB_HDB03 rsc_SAPHana_HDB_HDB03 \
meta is-managed="true" notify="true" clone-max="2" clone-node-max="1" \
target-role="Started" interleave="true"
The virtual machine should now be restarted or stopped, depending on your cluster configuration. If you set the
stonith-action to off, the virtual machine is stopped and the resources are migrated to the running virtual
machine.
Once you start the virtual machine again, the SAP HANA resource will fail to start as secondary if you set
AUTOMATED_REGISTER="false". In this case, you need to configure the HANA instance as secondary by executing
the following command:
su - hdbadm
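As the hdbadm user, the register call mirrors the sr_register command used when the secondary site was set up. A sketch, assuming saphanavm2 is now the primary (remoteHost and name depend on which node took over in your environment):

```
# run as <sid>adm; adjust remoteHost and name to your topology
hdbnsutil -sr_register --remoteHost=saphanavm2 --remoteInstance=03 --replicationMode=sync --name=SITE1
```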
After the failover, you can start the service again. The SAP HANA resource on saphanavm1 will fail to start as
secondary if you set AUTOMATED_REGISTER="false". In this case, you need to configure the HANA instance as
secondary by executing the following command:
Testing a migration
You can migrate the SAP HANA master node by executing the following command
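A sketch of such a migrate call with the crm shell, using the master/slave resource name defined earlier in this article (resource and node names depend on your setup):

```
crm resource migrate msl_SAPHana_HDB_HDB03 saphanavm2
```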
This should migrate the SAP HANA master node and the group that contains the virtual IP address to saphanavm2.
The SAP HANA resource on saphanavm1 will fail to start as secondary if you set AUTOMATED_REGISTER="false".
In this case, you need to configure the HANA instance as secondary by executing the following command:
su - hdbadm
# delete location constraints that are named like the following constraint. You should have two constraints, one
for the SAP HANA resource and one for the IP address group.
location cli-prefer-g_ip_HDB_HDB03 g_ip_HDB_HDB03 role=Started inf: saphanavm2
You also need to cleanup the state of the secondary node resource
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
Backup guide for SAP HANA on Azure Virtual
Machines
8/21/2017 13 min to read
Getting Started
The backup guide for SAP HANA running on Azure Virtual Machines describes only Azure-specific topics. For
general SAP HANA backup related items, check the SAP HANA documentation (see SAP HANA backup
documentation later in this article).
The focus of this article is on two major backup possibilities for SAP HANA on Azure virtual machines:
HANA backup to the file system in an Azure Linux Virtual Machine (see SAP HANA Azure Backup on file level)
HANA backup based on storage snapshots using the Azure storage blob snapshot feature manually or Azure
Backup Service (see SAP HANA backup based on storage snapshots)
SAP HANA offers a backup API, which allows third-party backup tools to integrate directly with SAP HANA. (That is
not within the scope of this guide.) There is no direct integration of SAP HANA with Azure Backup service available
right now based on this API.
SAP HANA is officially supported on the Azure VM type GS5 as a single instance, with an additional restriction to
OLAP workloads (see Find Certified IaaS Platforms on the SAP website). This article will be updated as new
offerings for SAP HANA on Azure become available.
There is also an SAP HANA hybrid solution available on Azure, where SAP HANA runs non-virtualized on physical
servers. However, this SAP HANA Azure backup guide covers a pure Azure environment where SAP HANA runs in
an Azure VM, not SAP HANA running on "large instances." See SAP HANA (large instances) overview and
architecture on Azure for more information about this backup solution on "large instances" based on storage
snapshots.
General information about SAP products supported on Azure can be found in SAP Note 1928533.
The following three figures give an overview of the SAP HANA backup options using native Azure capabilities
currently, and also show three potential future backup scenarios. The related articles SAP HANA Azure Backup on
file level and SAP HANA backup based on storage snapshots describe these options in more detail, including size
and performance considerations for SAP HANA backups that are multi-terabytes in size.
This figure shows the possibility of saving the current VM state, either via Azure Backup service or manual
snapshot of VM disks. With this approach, one doesn't have to manage SAP HANA backups. The challenge of the
disk snapshot scenario is file system consistency, and an application-consistent disk state. The consistency topic is
discussed in the section SAP HANA data consistency when taking storage snapshots later in this article.
Capabilities and restrictions of Azure Backup service related to SAP HANA backups are also discussed later in this
article.
This figure shows options for taking an SAP HANA file backup inside the VM, and then storing the HANA backup
files somewhere else using different tools. Taking a HANA backup requires more time than a snapshot-based
backup solution, but it has advantages regarding integrity and consistency. More details are provided later in this
article.
This figure shows a potential future SAP HANA backup scenario. If SAP HANA allowed taking backups from a
replication secondary, it would add additional options for backup strategies. Currently it isn't possible according to
a post in the SAP HANA Wiki:
"Is it possible to take backups on the secondary side?
No, currently you can only take data and log backups on the primary side. If automatic log backup is enabled,
after takeover to the secondary side, the log backups will automatically be written there."
Backups can be monitored in SAP HANA Cockpit while they are ongoing and, once a backup is finished, all the
backup details are available.
The previous screenshots were made from an Azure Windows VM. This one is an example using Firefox on an
Azure SLES 12 VM with Gnome desktop. It shows the option to define SAP HANA backup schedules in SAP HANA
Cockpit. As one can also see, it suggests date/time as a prefix for the backup files. In SAP HANA Studio, the default
prefix is "COMPLETE_DATA_BACKUP" when doing a full file backup. Using a unique prefix is recommended.
SAP HANA backup encryption
SAP HANA offers encryption of data and log. If SAP HANA data and log are not encrypted, then the backups are
also not encrypted. It is up to the customer to use some form of third-party solution to encrypt the SAP HANA
backups. See Data and Log Volume Encryption to find out more about SAP HANA encryption.
On Microsoft Azure, a customer could use the IaaS VM encryption feature to encrypt. For example, one could use
dedicated data disks attached to the VM, which are used to store SAP HANA backups, then make copies of these
disks.
Azure Backup service can handle encrypted VMs/disks (see How to back up and restore encrypted virtual
machines with Azure Backup).
Another option would be to maintain the SAP HANA VM and its disks without encryption, and store the SAP
HANA backup files in a storage account for which encryption was enabled (see Azure Storage Service Encryption
for Data at Rest).
Test setup
Test Virtual Machine on Azure
An SAP HANA installation in an Azure GS5 VM was used for the following backup/restore tests.
This figure shows part of the Azure portal overview for the HANA test VM.
Test backup size
A dummy table was filled up with data to get a total data backup size of over 200 GB to derive realistic
performance data. The figure was taken from the backup console in HANA Studio and shows the backup file size
of 229 GB for the HANA index server. For the tests, the default backup prefix "COMPLETE_DATA_BACKUP" in SAP
HANA Studio was used. In real production systems, a more useful prefix should be defined. SAP HANA Cockpit
suggests date/time.
Test tool to copy files directly to Azure storage
To transfer SAP HANA backup files directly to Azure blob storage, or Azure file shares, the blobxfer tool was used
because it supports both targets and it can be easily integrated into automation scripts due to its command-line
interface. The blobxfer tool is available on GitHub.
Test backup size estimation
It is important to estimate the backup size of SAP HANA. This estimate helps to improve performance by defining
the max backup file size for a number of backup files, due to parallelism during a file copy. (Those details are
explained later in this article.) One must also decide whether to do a full backup or a delta backup (incremental or
differential).
Fortunately, there is a simple SQL statement that estimates the size of the backup files: select * from
M_BACKUP_SIZE_ESTIMATIONS (see Estimate the Space Needed in the File System for a Data Backup).
For the test system, the output of this SQL statement matches almost exactly the real size of the full data backup
on disk.
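The estimation query can be run with hdbsql in the same way as the earlier commands in this series. For example (the instance number 03 and SID HDB are assumptions carried over from the high-availability example; use your system's values):

```
PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbsql -u system -i 03 'SELECT * FROM M_BACKUP_SIZE_ESTIMATIONS'
```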
Test HANA backup file size
The HANA Studio backup console allows one to restrict the max file size of HANA backup files. In the sample
environment, that feature makes it possible to get multiple smaller backup files instead of one 230-GB backup file.
Smaller file size has a significant impact on performance (see the related article SAP HANA Azure Backup on file
level).
Summary
Based on the test results, the following table shows pros and cons of solutions to back up an SAP HANA database
running on Azure virtual machines.

Back up SAP HANA to the file system and copy backup files afterwards to the final backup destination:

| SOLUTION | PROS | CONS |
| --- | --- | --- |
| Keep HANA backups on VM disks | No additional management efforts | Eats up local VM disk space |
| blobxfer tool to copy backup files to blob storage | Parallelism to copy multiple files, choice to use cool blob storage | Additional tool maintenance and custom scripting |
| Blob copy via PowerShell or CLI | No additional tool necessary, can be accomplished via Azure PowerShell or CLI | Manual process; customer has to take care of scripting and management of copied blobs for restore |
| blobxfer copy to Azure File service | Doesn't eat up space on local VM disks | No direct write support by HANA backup; size restriction of file share currently at 5 TB |
| Azure Backup agent | Would be the preferred solution | Currently not available on Linux |
| Azure Backup service | Allows VM backup based on blob snapshots | When not using file-level restore, it requires the creation of a new VM for the restore process, which then implies the need of a new SAP HANA license key |
| Manual blob snapshots | Flexibility to create and restore specific VM disks without changing the unique VM ID | All manual work, which has to be done by the customer |
Next steps
SAP HANA Azure Backup on file level describes the file-based backup option.
SAP HANA backup based on storage snapshots describes the storage snapshot-based backup option.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
SAP HANA Azure Backup on file level
8/21/2017 10 min to read
Introduction
This is part of a three-part series of related articles on SAP HANA backup. Backup guide for SAP HANA on Azure
Virtual Machines provides an overview and information on getting started, and SAP HANA backup based on
storage snapshots covers the storage snapshot-based backup option.
Looking at the Azure VM sizes, one can see that a GS5 allows 64 attached data disks. For large SAP HANA
systems, a significant number of disks might already be taken for data and log files, possibly in combination with
software RAID for optimal disk IO throughput. The question then is where to store SAP HANA backup files, which
could fill up the attached data disks over time? See Sizes for Linux virtual machines in Azure for the Azure VM size
tables.
There is no SAP HANA backup integration available with Azure Backup service at this time. The standard way to
manage backup/restore at the file level is with a file-based backup via SAP HANA Studio or via SAP HANA SQL
statements. See SAP HANA SQL and System Views Reference for more information.
This figure shows the dialog of the backup menu item in SAP HANA Studio. When choosing type "file," one has to
specify a path in the file system where SAP HANA writes the backup files. Restore works the same way.
While this choice sounds simple and straightforward, there are some considerations. As mentioned before, an
Azure VM has a limitation of number of data disks that can be attached. There might not be capacity to store SAP
HANA backup files on the file systems of the VM, depending on the size of the database and disk throughput
requirements, which might involve software RAID using striping across multiple data disks. Various options for
moving these backup files, and managing file size restrictions and performance when handling terabytes of data,
are provided later in this article.
Another option, which offers more freedom regarding total capacity, is Azure blob storage. While a single blob is
also restricted to 1 TB, the total capacity of a single blob container is currently 500 TB. Additionally, it gives
customers the choice to select so-called "cool" blob storage, which has a cost benefit. See Azure Blob Storage: Hot
and cool storage tiers for details about cool blob storage.
For additional safety, use a geo-replicated storage account to store the SAP HANA backups. See Azure Storage
replication for details about storage account replication.
One could place dedicated VHDs for SAP HANA backups in a dedicated backup storage account that is geo-
replicated. Or else one could copy the VHDs that keep the SAP HANA backups to a geo-replicated storage
account, or to a storage account that is in a different region.
This screenshot is of the SAP HANA backup console in SAP HANA Studio. It took about 42 minutes to do the
backup of the 230 GB on a single Azure standard storage disk attached to the HANA VM using XFS file system.
This screenshot is of YaST on the SAP HANA test VM. One can see the 1-TB single disk for SAP HANA backup as
mentioned before. It took about 42 minutes to back up 230 GB. In addition, five 200-GB disks were attached and
software RAID md0 created, with striping on top of these five Azure data disks.
Repeating the same backup on software RAID with striping across five attached Azure standard storage data disks
brought the backup time from 42 minutes down to 10 minutes. The disks were attached without caching to the
VM. So it is obvious how important disk write throughput is for the backup time. One could then switch to Azure
premium storage to further accelerate the process for optimal performance. In general, Azure premium storage
should be used for production systems.
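The speed-up corresponds to these aggregate write rates, derived from the 230-GB backup size and the two measured times:

```shell
# 230 GB written in 42 minutes vs. 10 minutes, expressed in MB/s
awk 'BEGIN { printf "%.1f\n", 230*1024/(42*60) }'   # single standard disk
awk 'BEGIN { printf "%.1f\n", 230*1024/(10*60) }'   # 5-disk software RAID stripe
```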
Without using an MD5 hash in the initial test, it took roughly 3000 seconds to copy the 230 GB to an Azure
standard storage account blob container.
In this screenshot, one can see how it looks on the Azure portal. A blob container named "sap-hana-backups" was
created and includes the four blobs, which represent the SAP HANA backup files. One of them has a size of
roughly 230 GB.
The HANA Studio backup console allows one to restrict the max file size of HANA backup files. In the sample
environment, it improved performance by making it possible to have multiple smaller backup files, instead of one
large 230-GB file.
Setting the backup file size limit on the HANA side doesn't improve the backup time, because the files are written
sequentially as shown in this figure. The file size limit was set to 60 GB, so the backup created four large data files
instead of the 230-GB single file.
To test parallelism of the blobxfer tool, the max file size for HANA backups was then set to 15 GB, which resulted
in 19 backup files. This configuration brought the time for blobxfer to copy the 230 GB to Azure blob storage from
3000 seconds down to 875 seconds.
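In throughput terms, the two measured copy times work out to roughly:

```shell
# aggregate copy throughput to blob storage for 230 GB, in MB/s
awk 'BEGIN { printf "%.1f\n", 230*1024/3000 }'   # sequential copy, 3000 seconds
awk 'BEGIN { printf "%.1f\n", 230*1024/875 }'    # 19 files copied in parallel, 875 seconds
```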
This result is due to the limit of 60 MB/sec for writing an Azure blob. Parallelism via multiple blobs solves the
bottleneck, but there is a downside: increasing performance of the blobxfer tool to copy all these HANA backup
files to Azure blob storage puts load on both the HANA VM and the network. Operation of HANA system becomes
impacted.
After the backup to the local software RAID was completed, all VHDs involved were copied using the
Start-AzureStorageBlobCopy PowerShell command (see Start-AzureStorageBlobCopy). As it only affects the dedicated
file system for keeping the backup files, there are no concerns about SAP HANA data or log file consistency on the
disk. A benefit of this command is that it works while the VM stays online. To be certain that no process writes to
the backup stripe set, be sure to unmount it before the blob copy, and mount it again afterwards. Or one could
use an appropriate way to "freeze" the file system. For example, via xfs_freeze for the XFS file system.
This screenshot shows the list of blobs in the "vhds" container on the Azure portal. The screenshot shows the five
VHDs, which were attached to the SAP HANA server VM to serve as the software RAID to keep SAP HANA backup
files. It also shows the five copies, which were taken via the blob copy command.
For testing purposes, the copies of the SAP HANA backup software RAID disks were attached to the app server
VM.
The app server VM was shut down to attach the disk copies. After starting the VM, the disks and the RAID were
discovered correctly (mounted via UUID). Only the mount point was missing, which was created via the YaST
partitioner. Afterwards the SAP HANA backup file copies became visible on OS level.
To verify the NFS use case, an NFS share from another Azure VM was mounted to the SAP HANA server VM.
There was no special NFS tuning applied.
The NFS share was a fast stripe set, like the one on the SAP HANA server. Nevertheless, it took 1 hour and 46
minutes to do the backup directly on the NFS share instead of 10 minutes, when writing to a local stripe set.
The alternative of doing a backup to a local stripe set and copying to the NFS share on OS level (a simple cp -avr
command) wasn't much quicker. It took 1 hour and 43 minutes.
So it works, but performance wasn't good for the 230-GB backup test. It would look even worse for multi-terabyte
backups.
This figure shows that it took about 929 seconds to copy 19 SAP HANA backup files with a total size of roughly
230 GB to the Azure file share.
In this screenshot, one can see that the source directory structure on the SAP HANA VM was copied to the Azure
file share: one directory (hana_backup_fsl_15gb) and 19 individual backup files.
Storing SAP HANA backup files on Azure files could be an interesting option in the future when SAP HANA file
backups support it directly. Or when it becomes possible to mount Azure files via NFS and the maximum quota
limit is considerably higher than 5 TB.
Next steps
Backup guide for SAP HANA on Azure Virtual Machines gives an overview and information on getting started.
SAP HANA backup based on storage snapshots describes the storage snapshot-based backup option.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
SAP HANA backup based on storage snapshots
8/7/2017 8 min to read
Introduction
This is part of a three-part series of related articles on SAP HANA backup. Backup guide for SAP HANA on Azure
Virtual Machines provides an overview and information on getting started, and SAP HANA Azure Backup on file
level covers the file-based backup option.
When using a VM backup feature for a single-instance all-in-one demo system, one should consider doing a VM
backup instead of managing HANA backups at the OS level. An alternative is to take Azure blob snapshots to
create copies of individual virtual disks, which are attached to a virtual machine, and keep the HANA data files. But
a critical point is app consistency when creating a VM backup or disk snapshot while the system is up and
running. See SAP HANA data consistency when taking storage snapshots in the related article Backup guide for
SAP HANA on Azure Virtual Machines. SAP HANA has a feature that supports these kinds of storage snapshots.
This screenshot shows that an SAP HANA data snapshot can be created via a SQL statement.
The snapshot then also appears in the backup catalog in SAP HANA Studio.
IMPORTANT
Confirm the HANA snapshot. Due to "Copy-on-Write," SAP HANA might require additional disk space in snapshot-prepare
mode, and it is not possible to start new backups until the SAP HANA snapshot is confirmed.
This figure shows part of the backup job list of an Azure Backup service, which was used to back up the HANA test
VM.
To show the job details, click the backup job in the Azure portal. Here, one can see the two phases. It might take a
few minutes until it shows the snapshot phase as completed. Most of the time is spent in the data transfer phase.
A Recovery Services vault was created with the name "hana-backup-vault." The PS command
Get-AzureRmRecoveryServicesVault -Name hana-backup-vault retrieves the corresponding object. This object is
then used to set the backup context as seen in the next figure.
After setting the correct context, one can check for the backup job currently in progress, and then look for its job
details. The subtask list shows if the snapshot phase of the Azure backup job is already completed:
Once the job details are stored in a variable, it is simply PS syntax to get to the first array entry and retrieve the
status value. To complete the automation script, poll the value in a loop until it turns to "Completed."
This figure shows the Azure VM unique ID before and after the restore via Azure Backup service. The SAP
hardware key, which is used for SAP licensing, is using this unique VM ID. As a consequence, a new SAP license
has to be installed after a VM restore.
A new Azure Backup feature was presented in preview mode during the creation of this backup guide. It allows a
file level restore based on the VM snapshot that was taken for the VM backup. This avoids the need to deploy a
new VM, and therefore the unique VM ID stays the same and no new SAP HANA license key is required. More
documentation on this feature will be provided after it is fully tested.
Azure Backup will eventually allow backup of individual Azure virtual disks, plus files and directories from inside
the VM. A major advantage of Azure Backup is its management of all the backups, saving the customer from
having to do it. If a restore becomes necessary, Azure Backup will select the correct backup to use.
Next steps
Backup guide for SAP HANA on Azure Virtual Machines gives an overview and information on getting started.
SAP HANA backup based on file level covers the file-based backup option.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
Deploy SAP IDES EHP7 SP3 for SAP ERP 6.0 on
Azure
6/27/2017 4 min to read
This article describes how to deploy an SAP IDES system running with SQL Server and the Windows operating
system on Azure via the SAP Cloud Appliance Library (SAP CAL) 3.0. The screenshots show the step-by-step
process. To deploy a different solution, follow the same steps.
To start with the SAP CAL, go to the SAP Cloud Appliance Library website. SAP also has a blog about the new SAP
Cloud Appliance Library 3.0.
NOTE
As of May 29, 2017, you can use the Azure Resource Manager deployment model in addition to the less-preferred classic
deployment model to deploy the SAP CAL. We recommend that you use the new Resource Manager deployment model and
disregard the classic deployment model.
If you already created an SAP CAL account that uses the classic model, you need to create another SAP CAL
account. This account needs to exclusively deploy into Azure by using the Resource Manager model.
After you sign in to the SAP CAL, the first page usually leads you to the Solutions page. The solutions offered on
the SAP CAL are steadily increasing, so you might need to scroll quite a bit to find the solution you want. The
highlighted Windows-based SAP IDES solution that is available exclusively on Azure demonstrates the deployment
process:
NOTE
The next steps show how to create an SAP CAL account for Resource Manager deployments. If you already have an
SAP CAL account that is linked to the classic deployment model, you need to follow these steps to create a new SAP
CAL account. The new SAP CAL account needs to deploy in the Resource Manager model.
2. To create a new SAP CAL account, the Accounts page shows two choices for Azure:
a. Microsoft Azure (classic) is the classic deployment model and is no longer preferred.
b. Microsoft Azure is the new Resource Manager deployment model.
3. Enter the Azure Subscription ID that can be found on the Azure portal.
4. To authorize the SAP CAL to deploy into the Azure subscription you defined, click Authorize. The following
page appears in the browser tab:
5. If more than one user is listed, choose the Microsoft account that is a coadministrator of the Azure
subscription you selected. The following page appears in the browser tab:
6. Click Accept. If the authorization is successful, the SAP CAL account definition displays again. After a short
time, a message confirms that the authorization process was successful.
7. To assign the newly created SAP CAL account to your user, enter your User ID in the text box on the right
and click Add.
8. To associate your account with the user that you use to sign in to the SAP CAL, click Review.
9. To create the association between your user and the newly created SAP CAL account, click Create.
NOTE
Before you can deploy the SAP IDES solution based on Windows and SQL Server, you might need to sign up for an SAP CAL
subscription. Otherwise, the solution might show up as Locked on the overview page.
Deploy a solution
1. After you set up an SAP CAL account, select the SAP IDES solution on Windows and SQL Server. Click
Create Instance, and confirm the usage and terms and conditions.
2. On the Basic Mode: Create Instance page, you need to:
a. Enter an instance Name.
b. Select an Azure Region. You might need an SAP CAL subscription to get multiple Azure regions offered.
c. Enter the master Password for the solution, as shown:
3. Click Create. After some time, depending on the size and complexity of the solution (the SAP CAL provides
an estimate), the status is shown as active and ready for use:
4. To find the resource group and all the objects that the SAP CAL created, go to the Azure portal. The
names of the virtual machines start with the instance name that was given in the SAP CAL.
5. On the SAP CAL portal, go to the deployed instances and click Connect. The following pop-up window
appears:
6. Before you can use one of the options to connect to the deployed systems, click Getting Started Guide. The
documentation names the users for each of the connectivity methods. The passwords for those users are set
to the master password you defined at the beginning of the deployment process. In the documentation,
other, more functional users are listed with their passwords, which you can use to sign in to the deployed
system.
Within a few hours, a healthy SAP IDES system is deployed in Azure.
If you bought an SAP CAL subscription, SAP fully supports deployments through the SAP CAL on Azure. The
support queue is BC-VCM-CAL.
Running SAP NetWeaver on Microsoft Azure SUSE
Linux VMs
9/15/2017 8 min to read
This article describes various things to consider when you're running SAP NetWeaver on Microsoft Azure SUSE
Linux virtual machines (VMs). As of May 19, 2016, SAP NetWeaver is officially supported on SUSE Linux VMs on
Azure. All details regarding Linux versions, SAP kernel versions, and other prerequisites can be found in SAP Note
1928533, "SAP Applications on Azure: Supported Products and Azure VM types". Further documentation about SAP
on Linux VMs can be found here: Using SAP on Linux virtual machines (VMs).
The following information should help you avoid some potential pitfalls.
azure group deployment create "<deployment name>" -g "<resource group name>" --template-file "<../../filename.json>"
For more information about JSON template files, see Authoring Azure Resource Manager templates and Azure
quickstart templates.
For more information about CLI and Azure Resource Manager, see Use the Azure CLI for Mac, Linux, and Windows
with Azure Resource Manager.
Logical volumes
In the past, if you needed a large logical volume across multiple Azure data disks (for example, for the SAP
database), we recommended the RAID management tool mdadm, because the Linux Logical Volume Manager (LVM)
was not yet fully validated on Azure. To learn how to set up Linux RAID on Azure by using mdadm, see Configure
software RAID on Linux. As of May 2016, LVM is fully supported on Azure and can be used as an alternative to
mdadm. For more information about LVM on Azure, read:
Configure LVM on a Linux VM in Azure.
Gnome desktop
If you want to use the Gnome desktop to install a complete SAP demo system inside a single VM--including an SAP
GUI, browser, and SAP management console--use this hint to install it on the Azure SLES images:
For SLES 11:
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using the Resource Manager deployment model, which Microsoft recommends for new deployments
instead of the classic deployment model.
Microsoft Azure enables companies to acquire compute and storage resources in minimal time without lengthy
procurement cycles. Azure Virtual Machines allows companies to deploy classical applications, like SAP
NetWeaver-based applications, in Azure and to extend their reliability and availability without having further
resources available on-premises. Azure Virtual Machine Services also supports cross-premises connectivity,
which enables companies to actively integrate Azure Virtual Machines into their on-premises domains, their
private clouds, and their SAP system landscapes. This white paper describes the fundamentals of Microsoft
Azure Virtual Machines and provides a walk-through of planning and implementation considerations for SAP
NetWeaver installations in Azure. As such, it is the document to read before starting actual deployments of
SAP NetWeaver on Azure. The paper complements the SAP installation documentation and SAP Notes, which
represent the primary resources for installations and deployments of SAP software on given platforms.
Summary
Cloud computing is a widely used term that is gaining more and more importance within the IT industry,
from small companies up to large, multinational corporations.
Microsoft Azure is Microsoft's cloud services platform, and it offers a wide spectrum of new possibilities.
Customers can now rapidly provision and de-provision applications as a service in the cloud, so they are no
longer limited by technical or budgeting restrictions. Instead of investing time and budget into hardware
infrastructure, companies can focus on the application, business processes, and their benefits for customers
and users.
With Microsoft Azure Virtual Machine Services, Microsoft offers a comprehensive Infrastructure as a Service
(IaaS) platform. SAP NetWeaver based applications are supported on Azure Virtual Machines (IaaS). This
whitepaper describes how to plan and implement SAP NetWeaver based applications within Microsoft Azure as
the platform of choice.
The paper itself focuses on two main aspects:
The first part describes two supported deployment patterns for SAP NetWeaver based applications on Azure.
It also describes general handling of Azure with SAP deployments in mind.
The second part details implementing the two different scenarios described in the first part.
For additional resources, see the chapter Resources in this document.
Definitions upfront
Throughout the document, we use the following terms:
IaaS: Infrastructure as a Service
PaaS: Platform as a Service
SaaS: Software as a Service
ARM: Azure Resource Manager
SAP Component: an individual SAP application such as ECC, BW, Solution Manager, or EP. SAP components
can be based on traditional ABAP or Java technologies or a non-NetWeaver based application such as
Business Objects.
SAP Environment: one or more SAP components logically grouped to perform a business function such as
Development, QAS, Training, DR, or Production.
SAP Landscape: This refers to all the SAP assets in a customer's IT landscape. The SAP landscape includes
all production and non-production environments.
SAP System: The combination of the DBMS layer and the application layer of, for example, an SAP ERP
development system, an SAP BW test system, an SAP CRM production system, etc. In Azure deployments, it is
not supported to divide these two layers between on-premises and Azure. This means an SAP system is either
deployed on-premises or it is deployed in Azure. However, you can deploy the different systems of an SAP
landscape into either Azure or on-premises. For example, you could deploy the SAP CRM development and
test systems in Azure but the SAP CRM production system on-premises.
Cloud-Only deployment: A deployment where the Azure subscription is not connected via a site-to-site or
ExpressRoute connection to the on-premises network infrastructure. In common Azure documentation these
kinds of deployments are also described as Cloud-Only deployments. Virtual Machines deployed with this
method are accessed through the internet and a public IP address and/or a public DNS name assigned to the
VMs in Azure. For Microsoft Windows the on-premises Active Directory (AD) and DNS is not extended to
Azure in these types of deployments. Hence the VMs are not part of the on-premises Active Directory. The
same is true for Linux implementations using, for example, OpenLDAP + Kerberos.
NOTE
In this document, a Cloud-Only deployment is defined as a complete SAP landscape running exclusively in Azure
without extension of Active Directory / OpenLDAP or name resolution from on-premises into the public cloud.
Cloud-Only configurations are not supported for production SAP systems or for configurations where SAP STMS or
other on-premises resources need to be used between SAP systems hosted on Azure and resources residing on-premises.
Cross-premises: Describes a scenario where VMs are deployed to an Azure subscription that has site-to-site,
multi-site, or ExpressRoute connectivity between the on-premises datacenter(s) and Azure. In common Azure
documentation, these kinds of deployments are also described as cross-premises scenarios. The reason for
the connection is to extend on-premises domains, on-premises Active Directory/OpenLDAP, and on-
premises DNS into Azure. The on-premises landscape is extended to the Azure assets of the subscription.
Having this extension, the VMs can be part of the on-premises domain. Domain users of the on-premises
domain can access the servers and can run services on those VMs (like DBMS services). Communication and
name resolution between VMs deployed on-premises and Azure deployed VMs is possible. This is the
scenario we expect most SAP assets to be deployed in. For more information, see this article.
NOTE
Cross-premises deployments of SAP systems, where Azure Virtual Machines running SAP systems are members of an
on-premises domain, are supported for production SAP systems. Cross-premises configurations are supported for
deploying parts of, or complete, SAP landscapes into Azure. Even running the complete SAP landscape in Azure
requires those VMs to be part of the on-premises domain and ADS/OpenLDAP. In former versions of the
documentation, we talked about Hybrid-IT scenarios, where the term Hybrid is rooted in the fact that there is
cross-premises connectivity between on-premises and Azure, plus the fact that the VMs in Azure are part of the
on-premises Active Directory / OpenLDAP.
Some Microsoft documentation describes cross-premises scenarios a bit differently, especially for DBMS HA
configurations. In the case of the SAP-related documents, the cross-premises scenario just boils down to having
a site-to-site or private (ExpressRoute) connectivity and the fact that the SAP landscape is distributed between
on-premises and Azure.
Resources
The following additional guides are available for the topic of SAP deployments on Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver (this document)
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines DBMS deployment for SAP NetWeaver
IMPORTANT
Wherever possible a link to the referring SAP Installation Guide is used (Reference InstGuide-01, see
http://service.sap.com/instguides). When it comes to the prerequisites and installation process, the SAP NetWeaver
Installation Guides should always be read carefully, as this document only covers specific tasks for SAP NetWeaver
systems installed in a Microsoft Azure Virtual Machine.
The following SAP Notes are related to the topic of SAP on Azure:
Also read the SCN Wiki that contains all SAP Notes for Linux.
General default limitations and maximum limitations of Azure subscriptions can be found in this article.
Possible Scenarios
SAP is often seen as one of the most mission-critical applications within enterprises. The architecture and
operations of these applications are mostly very complex, and ensuring that you meet requirements for
availability and performance is important.
Thus, enterprises have to think carefully about which applications can be run in a public cloud environment,
independent of the chosen cloud provider.
Possible system types for deploying SAP NetWeaver based applications within public cloud environments are
listed below:
1. Medium-sized production systems
2. Development systems
3. Testing systems
4. Prototype systems
5. Learning / Demonstration systems
In order to successfully deploy SAP systems into either Azure IaaS or IaaS in general, it is important to
understand the significant differences between the offerings of traditional outsourcers or hosters and IaaS
offerings. Whereas the traditional hoster or outsourcer adapts the infrastructure (network, storage, and server
type) to the workload a customer wants to host, it is the customer's responsibility to choose the right
workload for IaaS deployments.
As a first step, customers need to verify the following items:
The SAP supported VM types of Azure
The SAP supported products/releases on Azure
The supported OS and DBMS releases for the specific SAP releases in Azure
SAPS throughput provided by different Azure SKUs
The answers to these questions can be found in SAP Note 1928533.
As a second step, Azure resource and bandwidth limitations need to be compared to the actual resource
consumption of on-premises systems. Therefore, customers need to be familiar with the different capabilities of
the Azure VM types supported with SAP in the areas of:
CPU and memory resources of different VM types
IOPS bandwidth of different VM types
Network capabilities of different VM types
Most of that data can be found here (Linux) and here (Windows).
Keep in mind that the limits listed in the link above are upper limits. They do not guarantee that the limit for
every resource, for example IOPS, can be provided under all circumstances. The exceptions are the CPU and
memory resources of a chosen VM type: for the VM types supported by SAP, the CPU and memory resources are
reserved and as such are available at any point in time for consumption within the VM.
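As a sketch of this verification step, comparing measured on-premises consumption against SKU limits is a simple lookup. The SKU names and figures below are hypothetical placeholders, not official numbers; take real values from the Azure VM sizes documentation and SAP Note 1928533.

```python
# Sketch: a first-pass sizing check against Azure VM SKU limits.
# The SKU names and figures are hypothetical placeholders.

# (name, vCPUs, memory in GiB, max IOPS) -- ordered smallest to largest
HYPOTHETICAL_SKUS = [
    ("VM_SMALL",  4,  28,  8000),
    ("VM_MEDIUM", 8,  56, 16000),
    ("VM_LARGE", 16, 112, 32000),
]

def smallest_fitting_sku(cpus_needed, mem_needed_gib, iops_needed):
    """Return the smallest hypothetical SKU that covers the measured
    on-premises consumption, or None if no SKU fits."""
    for name, cpus, mem, iops in HYPOTHETICAL_SKUS:
        if cpus >= cpus_needed and mem >= mem_needed_gib and iops >= iops_needed:
            return name
    return None
```

For example, a system measured at 6 vCPUs, 40 GiB of memory, and 10,000 IOPS would map to the second of these made-up SKUs.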
The Microsoft Azure platform, like other IaaS platforms, is a multi-tenant platform. This means that storage,
network, and other resources are shared between tenants. Intelligent throttling and quota logic is used to
prevent one tenant from drastically impacting the performance of another tenant (the noisy-neighbor problem).
Though the logic in Azure tries to keep variances in experienced bandwidth small, highly shared platforms tend
to introduce larger variances in resource/bandwidth availability than many customers are used to in their on-
premises deployments. As a result, you might experience different levels of bandwidth for networking or
storage I/O (volume as well as latency) from minute to minute. The possibility that an SAP system on Azure
could experience larger variances than an on-premises system needs to be taken into account.
A last step is to evaluate availability requirements. It can happen that the underlying Azure infrastructure needs
to be updated, requiring the hosts running VMs to be rebooted. In these cases, the VMs running on those hosts
are shut down and restarted as well. The timing of such maintenance is done during non-core business
hours for a particular region, but the potential window of a few hours during which a restart occurs is
relatively wide. There are various technologies within the Azure platform that can be configured to mitigate
some or all of the impact of such updates. Future enhancements of the Azure platform, DBMS, and SAP
applications are designed to minimize the impact of such restarts.
In order to successfully deploy an SAP system onto Azure, the operating system, database, and SAP applications
of the on-premises SAP system(s) must appear on the SAP Azure support matrix, fit within the resources the Azure
infrastructure can provide, and work with the availability SLAs Microsoft Azure offers. As those
systems are identified, you need to decide on one of the following two deployment scenarios.
Cloud-Only - Virtual Machine deployments into Azure without dependencies on the on-premises customer
network
This scenario is typical for training or demo systems, where all the components of SAP and non-SAP software
are installed within a single VM. Production SAP systems are not supported in this deployment scenario. In
general, this scenario meets the following requirements:
The VMs themselves are accessible over the public network. Direct network connectivity for the applications
running within the VMs to the on-premises network of either the company owning the demos or trainings
content or the customer is not necessary.
In the case of multiple VMs representing the training or demo scenario, network communication and name
resolution need to work between the VMs. But communication between the sets of VMs needs to be isolated
so that several sets of VMs can be deployed side by side without interference.
Internet connectivity is required for the end user to log in remotely to the VMs hosted in Azure. Depending
on the guest OS, Terminal Services/RDS or VNC/ssh is used to access the VM to either fulfill the training
tasks or perform the demos. If SAP ports such as 3200, 3300, and 3600 can also be exposed, the SAP
application instance can be accessed from any Internet-connected desktop.
The SAP system(s) (and VM(s)) represent a standalone scenario in Azure, which only requires public internet
connectivity for end-user access and does not require a connection to other VMs in Azure.
SAPGUI and a browser are installed and run directly on the VM.
A fast reset of a VM to the original state, and a new deployment of that original state, is required.
In the case of demo and training scenarios that are realized in multiple VMs, an Active Directory /
OpenLDAP and/or DNS service is required for each set of VMs.
It is important to keep in mind that the VMs in each of the sets need to be deployed in parallel, with the VM
names in each set being the same.
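The SAP ports mentioned above (3200, 3300, 3600) follow the common SAP convention of 32NN, 33NN, and 36NN, where NN is the two-digit instance number; the sketch below illustrates that convention, with 00 being the instance number implied in the text.

```python
# Sketch: classic SAP instance port convention (32NN dispatcher, 33NN gateway,
# 36NN message server). Ports 3200/3300/3600 correspond to instance number 00.

def sap_instance_ports(instance_number):
    """Return the classic ports for a given two-digit SAP instance number."""
    if not 0 <= instance_number <= 97:
        raise ValueError("SAP instance numbers range from 00 to 97")
    return {
        "dispatcher": 3200 + instance_number,
        "gateway": 3300 + instance_number,
        "message_server": 3600 + instance_number,
    }
```

Whichever instance numbers you use, the corresponding ports are the ones to expose (or deliberately not expose) on the public endpoint of a Cloud-Only VM.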
Cross-Premises - Deployment of single or multiple SAP VMs into Azure with the requirement of being fully
integrated into the on-premises network
This scenario is a cross-premises scenario with many possible deployment patterns. It can be described
simply as running some parts of the SAP landscape on-premises and other parts of the SAP landscape in
Azure. All aspects of the fact that part of the SAP components are running in Azure should be transparent to
end users. Hence the SAP Transport Management System (STMS), RFC communication, printing, security (like
SSO), and so on work seamlessly for the SAP systems running in Azure. But the cross-premises scenario also
describes a scenario where the complete SAP landscape runs in Azure, with the customer's domain and DNS
extended into Azure.
NOTE
This is the deployment scenario that is supported for running productive SAP systems.
Read this article for more information about how to connect your on-premises network to Microsoft Azure.
IMPORTANT
When we are talking about cross-premises scenarios between Azure and on-premises customer deployments, we are
looking at the granularity of whole SAP systems. Scenarios which are not supported for cross-premises scenarios are:
Running different layers of SAP applications in different deployment methods. For example, running the DBMS layer
on-premises, but the SAP application layer in VMs deployed as Azure VMs, or vice versa.
Some components of an SAP layer in Azure and some on-premises. For example, splitting instances of the SAP
application layer between on-premises and Azure VMs.
Distribution of VMs running SAP instances of one system over multiple Azure Regions is not supported.
The reason for these restrictions is the requirement for a very low latency high-performance network within one SAP
system, especially between the application instances and the DBMS layer of an SAP system.
IMPORTANT
For the use of SAP NetWeaver based applications, only the subset of VM types and configurations listed in SAP Note
1928533 are supported.
Azure Regions
Microsoft allows you to deploy Virtual Machines into so-called Azure Regions. An Azure Region may be one or
multiple data centers located in close proximity. For most of the geopolitical regions in the world,
Microsoft has at least two Azure Regions. For example, in Europe there is an Azure Region of North Europe
and one of West Europe. Two such Azure Regions within a geopolitical region are separated by a significant
enough distance that natural or technical disasters do not affect both Azure Regions in the same geopolitical
region. Since Microsoft is steadily building out new Azure Regions in different geopolitical regions globally, the
number of these regions is steadily growing and, as of Dec 2015, reached 20 Azure Regions, with
additional regions already announced. You as a customer can deploy SAP systems into all these regions,
including the two Azure Regions in China. For up-to-date information about Azure Regions, see this
website: https://azure.microsoft.com/regions/
The Microsoft Azure Virtual Machine Concept
Microsoft Azure offers an Infrastructure as a Service (IaaS) solution to host Virtual Machines with functionality
similar to an on-premises virtualization solution. You are able to create Virtual Machines from within the
Azure portal, PowerShell, or the CLI, which also offer deployment and management capabilities.
Azure Resource Manager allows you to provision your applications using a declarative template. In a single
template, you can deploy multiple services along with their dependencies. You use the same template to
repeatedly deploy your application during every stage of the application life cycle.
More information about using Resource Manager templates can be found here:
Deploy and manage virtual machines by using Azure Resource Manager templates and the Azure CLI
Manage virtual machines using Azure Resource Manager and PowerShell
https://azure.microsoft.com/documentation/templates/
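As an illustration of the declarative template concept described above, every Resource Manager template shares the same skeleton. The sketch below builds that skeleton as plain data; the resource list is left empty here, whereas a real template would declare VMs, NICs, and storage accounts together with their dependencies.

```python
import json

# Sketch: minimal skeleton of an Azure Resource Manager template. A real
# template declares its services (VMs, NICs, storage accounts, ...) and
# their dependencies under "resources".
minimal_template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},   # values supplied at deployment time
    "variables": {},    # values derived inside the template
    "resources": [],    # the services to deploy, with their dependencies
    "outputs": {},
}

template_json = json.dumps(minimal_template, indent=2)
```

The same template file can then be handed to the CLI or PowerShell deployment commands shown elsewhere in this document, and redeployed unchanged at every stage of the application life cycle.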
Another interesting feature is the ability to create images from Virtual Machines, which allows you to prepare
repositories from which you can quickly deploy virtual machine instances that meet your
requirements.
More information about creating images from Virtual Machines can be found in this article (Linux) and this
article (Windows).
Fault Domains
Fault Domains represent a physical unit of failure, very closely related to the physical infrastructure contained in
data centers, and while a physical blade or rack can be considered a Fault Domain, there is no direct one-to-one
mapping between the two.
When you deploy multiple Virtual Machines as part of one SAP system in Microsoft Azure Virtual Machine
Services, you can influence the Azure Fabric Controller to deploy your application into different Fault Domains,
thereby meeting the requirements of the Microsoft Azure SLA. However, the distribution of Fault Domains over
an Azure Scale Unit (collection of hundreds of Compute nodes or Storage nodes and networking) or the
assignment of VMs to a specific Fault Domain is something over which you do not have direct control. In order
to direct the Azure fabric controller to deploy a set of VMs over different Fault Domains, you need to assign an
Azure Availability Set to the VMs at deployment time. For more information on Azure Availability Sets, see
chapter Azure Availability Sets in this document.
Upgrade Domains
Upgrade Domains represent a logical unit that helps determine how the VMs of an SAP system, which consists
of SAP instances running in multiple VMs, are updated. When an upgrade occurs, Microsoft Azure goes through
the process of updating these Upgrade Domains one by one. By spreading VMs at deployment time over
different Upgrade Domains, you can partly protect your SAP system from potential downtime. In order to force
Azure to deploy the VMs of an SAP system spread over different Upgrade Domains, you need to set a specific
attribute at deployment time of each VM. Similar to Fault Domains, an Azure Scale Unit is divided into multiple
Upgrade Domains. In order to direct the Azure fabric controller to deploy a set of VMs over different Upgrade
Domains, you need to assign an Azure Availability Set to the VMs at deployment time. For more information on
Azure Availability Sets, see chapter Azure Availability Sets below.
Azure Availability Sets
Azure Virtual Machines within one Azure Availability Set are distributed by the Azure Fabric Controller over
different Fault and Upgrade Domains. The purpose of the distribution over different Fault and Upgrade
Domains is to prevent all VMs of an SAP system from being shut down in the case of infrastructure
maintenance or a failure within one Fault Domain. By default, VMs are not part of an Availability Set. The
participation of a VM in an Availability Set is defined at deployment time, or later by reconfiguring and
redeploying the VM.
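A minimal sketch of the idea, assuming a hypothetical round-robin placement over two Fault Domains and five Upgrade Domains (the real placement is decided by the Azure Fabric Controller, not by the customer):

```python
# Sketch: round-robin placement of the VMs of one availability set over
# fault and upgrade domains. The domain counts here are illustrative only;
# actual placement is performed by the Azure Fabric Controller.

def distribute(vm_names, fault_domains=2, upgrade_domains=5):
    """Assign each VM a (fault domain, upgrade domain) pair round-robin."""
    return {name: (i % fault_domains, i % upgrade_domains)
            for i, name in enumerate(vm_names)}
```

With such a placement, two VMs deployed back to back never share a fault domain, so a failure or maintenance event in one fault domain cannot take down both of them at once.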
To understand the concept of Azure Availability Sets and the way Availability Sets relate to Fault and Upgrade
Domains, read this article.
To define availability sets for ARM via a JSON template, see the REST API specs and search for "availability".
Storage: Microsoft Azure Storage and Data Disks
Microsoft Azure Virtual Machines utilize different storage types. When implementing SAP on Azure Virtual
Machine Services, it is important to understand the differences between these two main types of storage:
Non-Persistent, volatile storage.
Persistent storage.
The non-persistent storage is directly attached to the running Virtual Machines and resides on the compute
nodes themselves: the local instance storage (temporary storage). The size depends on the size of the Virtual
Machine chosen when the deployment started. This storage type is volatile, and the disk is therefore initialized
when a Virtual Machine instance is restarted. Typically, the pagefile for the operating system is located on this
temporary disk.
Windows
On Windows VMs the temp drive is mounted as drive D:\ in a deployed VM.
Linux
On Linux VMs, it's mounted as /mnt/resource or /mnt. See more details here:
How to Attach a Data Disk to a Linux Virtual Machine
https://docs.microsoft.com/azure/storage/storage-about-disks-and-vhds-linux#temporary-disk
The actual drive is volatile because it is stored on the host server itself. If the VM is moved in a
redeployment (for example, due to maintenance on the host, or a shutdown and restart), the content of the drive is
lost. Therefore, it is not an option to store any important data on this drive. The type of media used for this type
of storage differs between VM series, with very different performance characteristics, which as of June
2015 look like this:
A5-A7: Very limited performance. Not recommended for anything beyond page file
A8-A11: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
D-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
DS-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
G-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
GS-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput.
The statements above apply to the VM types that are certified with SAP. The VM series with excellent IOPS
and throughput qualify for use by some DBMS features. For more information, see the DBMS Deployment
Guide.
Microsoft Azure Storage provides persistent storage with the typical levels of protection and redundancy seen on
SAN storage. Disks based on Azure Storage are virtual hard disks (VHDs) located in the Azure Storage services.
The local OS disk (Windows C:\, Linux /dev/sda1) is stored in Azure Storage, and additional volumes/disks
mounted to the VM are stored there, too.
It is possible to upload an existing VHD from on-premises, or to create empty ones from within Azure and attach
those to deployed VMs.
After creating or uploading a VHD into Azure Storage, it is possible to mount and attach it to an existing
Virtual Machine and to copy existing (unmounted) VHDs.
As those VHDs are persisted, data and changes within them are safe when rebooting and recreating a Virtual
Machine instance. Even if an instance is deleted, these VHDs stay safe and can be redeployed, or, in the case of
non-OS disks, can be mounted to other VMs.
Within Azure Storage, different redundancy levels can be configured:
The minimum level that can be selected is local redundancy, which is equivalent to three replicas of the data
within the same data center of an Azure Region (see chapter Azure Regions).
Zone-redundant storage spreads the three replicas over different data centers within the same Azure
Region.
The default redundancy level is geographic redundancy, which asynchronously replicates the content into
another three replicas of the data in another Azure Region hosted in the same geopolitical region.
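The replica counts behind these three options can be summarized in a small lookup; the figures follow the description above (three local copies, plus three more in a second region for geographic redundancy), with the labels here being descriptive only.

```python
# Sketch: replica counts per Azure Storage redundancy option, following the
# description in the text. Labels are descriptive, not API identifiers.
REDUNDANCY = {
    "locally_redundant": {"replicas": 3, "scope": "one data center"},
    "zone_redundant":    {"replicas": 3, "scope": "several data centers, same region"},
    "geo_redundant":     {"replicas": 6, "scope": "primary region plus second region"},
}

def total_replicas(option):
    """Total copies of the data kept for a given redundancy option."""
    return REDUNDANCY[option]["replicas"]
```

So geographic redundancy keeps six copies in total: the three local replicas plus three asynchronously replicated copies in the second Azure Region.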
Also see the table at the top of this article regarding the different redundancy options:
https://azure.microsoft.com/pricing/details/storage/
More information about Azure Storage can be found here:
https://azure.microsoft.com/documentation/services/storage/
https://azure.microsoft.com/services/site-recovery
https://docs.microsoft.com/rest/api/storageservices/Understanding-Block-Blobs--Append-Blobs--and-Page-Blobs
https://blogs.msdn.com/b/azuresecurity/archive/2015/11/17/azure-disk-encryption-for-linux-and-windows-virtual-machines-public-preview.aspx
Azure Standard Storage
Azure Standard Storage was the type of storage available when Azure IaaS was released. There were IOPS
quotas enforced per single disk. The latency experienced was not in the same class as that of the SAN/NAS devices
typically deployed for high-end SAP systems hosted on-premises. Nevertheless, Azure Standard Storage has proved
sufficient for the many hundreds of SAP systems that have meanwhile been deployed in Azure.
Disks that are stored on Azure Standard Storage accounts are charged based on the actual data stored,
the volume of storage transactions, outbound data transfers, and the redundancy option chosen. Many disks can be
created at the maximum size of 1TB, but as long as they remain empty, there is no charge. If you fill such a
VHD with 100GB of data, you are charged for storing 100GB and not for the nominal size the VHD was created
with.
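The billing behavior just described can be sketched as follows; the per-GB price used here is a made-up placeholder, not an actual Azure price.

```python
# Sketch: Standard Storage charges for data actually stored, not for the
# nominal VHD size. price_per_gb is a made-up placeholder value.

def monthly_standard_storage_cost(used_gb, nominal_vhd_size_gb, price_per_gb=0.05):
    """Charge is driven by used capacity; the nominal size only caps it."""
    if used_gb > nominal_vhd_size_gb:
        raise ValueError("cannot store more data than the VHD size")
    return used_gb * price_per_gb
```

A 1TB VHD holding 100GB of data therefore costs the same to store as a 100GB VHD holding 100GB.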
Azure Premium Storage
In April 2015, Microsoft introduced Azure Premium Storage. Premium Storage was introduced with the goal of
providing:
Better I/O latency.
Better throughput.
Less variability in I/O latency.
For that purpose, many changes were introduced of which the two most significant are:
Usage of SSD disks in the Azure Storage nodes
A new read cache that is backed by the local SSD of an Azure compute node
In contrast to Standard Storage, where capabilities did not change depending on the size of the disk (or VHD),
Premium Storage currently has three different disk categories, which are shown in this article:
https://azure.microsoft.com/pricing/details/storage/unmanaged-disks/
You see that the IOPS per disk and the throughput per disk depend on the size category of the disk.
The cost basis in the case of Premium Storage is not the actual data volume stored on such disks, but the size
category of the disk, independent of the amount of data that is stored within the disk.
You can also create disks on Premium Storage that do not directly map to the size categories shown. This may
be the case especially when copying disks from Standard Storage to Premium Storage. In such cases, a
mapping to the next larger Premium Storage disk option is performed.
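The mapping to the next larger size category can be sketched as follows. The category names and sizes used here (P10 = 128 GB, P20 = 512 GB, P30 = 1023 GB) are illustrative assumptions based on the three categories mentioned above; check the pricing article for the current values.

```python
# Sketch: map a requested disk size to the next larger Premium Storage
# category. The P10/P20/P30 sizes are illustrative assumptions.
PREMIUM_CATEGORIES = [("P10", 128), ("P20", 512), ("P30", 1023)]

def premium_category(requested_gb):
    """Return the smallest Premium Storage category that fits the disk."""
    for name, size_gb in PREMIUM_CATEGORIES:
        if requested_gb <= size_gb:
            return name
    raise ValueError("disk larger than the biggest Premium category")

# A 200 GB disk copied from Standard Storage is billed as a P20:
print(premium_category(200))  # P20
```

Because billing follows the size category rather than the stored data, a disk just above a category boundary is billed at the next larger category.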
Be aware that only certain VM series can benefit from Azure Premium Storage. As of Dec 2015, these are the
DS- and GS-series. The DS-series is basically the same as the D-series with the exception that the DS-series can
mount Premium Storage based disks in addition to disks that are hosted on Azure Standard Storage. The same
is valid for the G-series compared to the GS-series.
If you check out the part about the DS-series VMs in this article (Linux) and this article (Windows), you
realize that there are data throughput limitations for Premium Storage disks at the granularity of the VM level.
Different DS-series or GS-series VMs also have different limits with regard to the number of data disks that
can be mounted. These limits are documented in the articles mentioned above as well. In essence this means
that if you, for example, mount 32 x P30 disks to a single DS14 VM, you can NOT get 32 x the maximum
throughput of a P30 disk. Instead, the maximum throughput at the VM level, as documented in the articles,
limits data throughput.
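The effect can be illustrated with a small calculation. The figures used (200 MB/sec per P30 disk, 512 MB/sec at the VM level) are illustrative assumptions; take the real per-disk and per-VM limits from the articles referenced above.

```python
def effective_throughput_mbps(n_disks, per_disk_mbps, vm_limit_mbps):
    """Aggregate disk throughput is capped by the VM-level limit."""
    return min(n_disks * per_disk_mbps, vm_limit_mbps)

# 32 x P30 disks nominally deliver 32 x 200 = 6400 MB/sec, but a VM
# with an (assumed) 512 MB/sec storage throughput limit caps the result:
print(effective_throughput_mbps(32, 200, 512))  # 512
```

In other words, beyond a certain number of disks, adding more Premium Storage disks increases capacity but no longer increases throughput for that VM.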
More information on Premium Storage can be found here:
http://azure.microsoft.com/blog/2015/04/16/azure-premium-storage-now-generally-available-2
Managed Disks
Managed Disks are a new resource type in Azure Resource Manager that can be used instead of VHDs that are
stored in Azure Storage Accounts. Managed Disks automatically align with the Availability Set of the virtual
machine they are attached to and therefore increase the availability of your virtual machine and the services
that are running on the virtual machine. For more information, read the overview article.
We recommend that you use Managed Disks, because they simplify the deployment and management of your
virtual machines. SAP currently only supports Premium Managed Disks. For more information, read SAP Note
1928533.
Azure Storage Accounts
When deploying services or VMs in Azure, deployment of VHDs and VM Images can be organized in units called
Azure Storage Accounts. When planning an Azure deployment, you need to carefully consider the restrictions of
Azure. On the one hand, there is a limited number of Storage Accounts per Azure subscription. Although each
Azure Storage Account can hold a large number of VHD files, there is a fixed limit on the total IOPS per Storage
Account. When deploying hundreds of SAP VMs with DBMS systems creating significant IO calls, it is
recommended to distribute high IOPS DBMS VMs between multiple Azure Storage Accounts. Care must be
taken not to exceed the current limit of Azure Storage Accounts per subscription. Because storage is a vital part
of the database deployment for an SAP system, this concept is discussed in more detail in the already
referenced DBMS Deployment Guide.
More information about Azure Storage Accounts can be found in this article. Reading this article, you realize
that there are differences in the limitations between Azure Standard Storage Accounts and Premium Storage
Accounts. A major difference is the volume of data that can be stored within such a Storage Account: in
Standard Storage the volume is an order of magnitude larger than with Premium Storage. On the other hand,
the Standard Storage Account is severely limited in IOPS (see the column Total Request Rate), whereas the
Azure Premium Storage Account has no such limitation. We will discuss details and consequences of these
differences when discussing the deployments of SAP systems, especially the DBMS servers.
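A simple way to honor the per-account IOPS limit when distributing DBMS VMs is a greedy bin-packing pass. The 20000 IOPS account limit below is an illustrative assumption, not a guaranteed quota; take the current value from the Storage Account article referenced above.

```python
ACCOUNT_IOPS_LIMIT = 20000  # illustrative assumption, not a current quota

def place_vms(vm_iops, limit=ACCOUNT_IOPS_LIMIT):
    """Greedy first-fit-decreasing placement of DBMS VMs into
    storage accounts so no account exceeds its IOPS limit."""
    accounts = []  # each entry: {"used": summed IOPS, "vms": [names]}
    for name, iops in sorted(vm_iops.items(), key=lambda kv: -kv[1]):
        for acc in accounts:
            if acc["used"] + iops <= limit:
                acc["used"] += iops
                acc["vms"].append(name)
                break
        else:
            accounts.append({"used": iops, "vms": [name]})
    return [acc["vms"] for acc in accounts]

# Three high-IOPS DBMS VMs need two accounts under a 20000 IOPS limit:
print(place_vms({"db1": 12000, "db2": 9000, "db3": 8000}))
```

Remember that the number of Storage Accounts per subscription is itself limited, so the resulting account count needs to be checked against that quota as well.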
Within a Storage Account, you can create different containers for the purpose of organizing and categorizing
different VHDs. These containers are usually used, for example, to separate VHDs of different VMs. There are
no performance implications in using just one container or multiple containers underneath a
single Azure Storage Account.
Within Azure, a VHD name follows a naming convention that needs to provide a unique name for
the VHD within Azure:
As mentioned, the string above needs to uniquely identify the VHD that is stored on Azure Storage.
Microsoft Azure Networking
Microsoft Azure provides a network infrastructure that allows the mapping of all scenarios we want to
realize with SAP software. The capabilities are:
Access from the outside, directly to the VMs via Windows Terminal Services or ssh/VNC
Access to services and specific ports used by applications within the VMs
Internal Communication and Name Resolution between a group of VMs deployed as Azure VMs
Cross-premises connectivity between a customer's on-premises network and the Azure network
Cross Azure Region or data center connectivity between Azure sites
More information can be found here: https://azure.microsoft.com/documentation/services/virtual-network/
There are many different possibilities to configure name and IP resolution in Azure. In this document, Cloud-Only
scenarios rely on the default of using Azure DNS (in contrast to defining your own DNS service). There is
also a new Azure DNS service, which can be used instead of setting up your own DNS server. More information
can be found in this article and on this page.
For cross-premises scenarios, we are relying on the fact that the on-premises AD/OpenLDAP/DNS has been
extended via VPN or private connection to Azure. For certain scenarios as documented here, it might be
necessary to have an AD/OpenLDAP replica installed in Azure.
Because networking and name resolution is a vital part of the database deployment for an SAP system, this
concept is discussed in more detail in the DBMS Deployment Guide.
Azure Virtual Networks
By building up an Azure Virtual Network, you can define the address range of the private IP addresses allocated
by Azure DHCP functionality. In cross-premises scenarios, the IP address range defined is still allocated using
DHCP by Azure. However, Domain Name resolution is done on-premises (assuming that the VMs are a part of
an on-premises domain) and hence can resolve addresses beyond different Azure Cloud Services.
Every Virtual Machine in Azure needs to be connected to a Virtual Network.
More details can be found in this article and on this page.
NOTE
By default, once a VM is deployed you cannot change the Virtual Network configuration. The TCP/IP settings must be left
to the Azure DHCP server. Default behavior is Dynamic IP assignment.
The MAC address of the virtual network card may change, for example after a resize. In this case, the Windows
or Linux guest OS picks up the new network card and automatically uses DHCP to assign the IP and DNS
addresses.
Static IP Assignment
It is possible to assign fixed or reserved IP addresses to VMs within an Azure Virtual Network. Running the VMs
in an Azure Virtual Network opens up the possibility to leverage this functionality if needed or required for
some scenarios. The IP assignment remains valid throughout the existence of the VM, independent of whether
the VM is running or shut down. As a result, you need to take the overall number of VMs (running and stopped
VMs) into account when defining the range of IP addresses for the Virtual Network. The IP address remains
assigned either until the VM and its Network Interface are deleted or until the IP address is de-assigned again.
For more information, read this article.
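When defining the address range, keep in mind that Azure reserves a handful of addresses in each subnet for its own use; a commonly cited value is five per subnet, which is used as an assumption in this sketch of the headroom calculation.

```python
import ipaddress

def usable_vm_addresses(subnet_cidr, reserved=5):
    """Addresses left for VMs in a subnet. The assumption that Azure
    reserves 5 addresses per subnet is kept as a parameter in case
    that value differs or changes."""
    net = ipaddress.ip_network(subnet_cidr)
    return net.num_addresses - reserved

# Plan the range for running AND stopped VMs:
print(usable_vm_addresses("10.1.0.0/24"))  # 251
```

Comparing this number against the total VM count (running plus stopped) shows whether the planned subnet is large enough.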
Mu l t i pl e N ICs per VM
You can define multiple virtual network interface cards (vNIC) for an Azure Virtual Machine. With the ability to
have multiple vNICs you can start to set up network traffic separation where, for example, client traffic is routed
through one vNIC and backend traffic is routed through a second vNIC. Depending on the type of VM, there are
different limits with regard to the number of vNICs. Exact details, functionality, and restrictions can be found
in these articles:
Create a Windows VM with multiple NICs
Create a Linux VM with multiple NICs
Deploy multi NIC VMs using a template
Deploy multi NIC VMs using PowerShell
Deploy multi NIC VMs using the Azure CLI
Site-to-Site Connectivity
Cross-premises means Azure VMs and on-premises systems are linked with a transparent and permanent VPN
connection. It is expected to become the most common SAP deployment pattern in Azure. The assumption is
that operational
procedures and processes with SAP instances in Azure should work transparently. This means you should be
able to print out of these systems as well as use the SAP Transport Management System (TMS) to transport
changes from a development system in Azure to a test system, which is deployed on-premises. More
documentation around site-to-site connectivity can be found in this article.
VPN Tunnel Device
In order to create a site-to-site connection (on-premises data center to Azure data center), you need to either
obtain and configure a VPN device, or use Routing and Remote Access Service (RRAS) which was introduced as
a software component with Windows Server 2012.
Create a virtual network with a site-to-site VPN connection using PowerShell
About VPN devices for Site-to-Site VPN Gateway connections
VPN Gateway FAQ
The figure above shows two Azure subscriptions that have IP address subranges reserved for usage in Virtual
Networks in Azure. The connectivity from the on-premises network to Azure is established via VPN.
Point-to-Site VPN
Point-to-site VPN requires every client machine to connect with its own VPN into Azure. For the SAP scenarios
we are looking at, point-to-site connectivity is not practical. Therefore, no further references are given to point-
to-site VPN connectivity.
More information can be found here
Configure a Point-to-Site connection to a VNet using the Azure portal
Configure a Point-to-Site connection to a VNet using PowerShell
Multi-Site VPN
Azure nowadays also offers the possibility to create Multi-Site VPN connectivity for one Azure subscription.
Previously, a single subscription was limited to one site-to-site VPN connection. This limitation went away with
Multi-Site VPN connections for a single subscription. This makes it possible to leverage more than one Azure
Region for a specific subscription through cross-premises configurations.
For more documentation, please see this article
VNet to VNet Connection
Using Multi-Site VPN, you need to configure a separate Azure Virtual Network in each of the regions. However,
very often you have the requirement that the software components in the different regions should
communicate with each other. Ideally, this communication should not be routed from one Azure Region to
on-premises and from there to the other Azure Region. As a shortcut, Azure offers the possibility to configure a
connection from one Azure Virtual Network in one region to another Azure Virtual Network hosted in another
region. This functionality is called VNet-to-VNet connection. More details on this functionality can be found
here: https://azure.microsoft.com/documentation/articles/vpn-gateway-vnet-vnet-rm-ps/.
Private Connection to Azure ExpressRoute
Microsoft Azure ExpressRoute allows the creation of private connections between Azure data centers and either
the customer's on-premises infrastructure or in a co-location environment. ExpressRoute is offered by various
MPLS (packet switched) VPN providers or other Network Service Providers. ExpressRoute connections do not
go over the public Internet. ExpressRoute connections offer higher security, more reliability through multiple
parallel circuits, faster speeds, and lower latencies than typical connections over the Internet.
Find more details on Azure ExpressRoute and offerings here:
https://azure.microsoft.com/documentation/services/expressroute/
https://azure.microsoft.com/pricing/details/expressroute/
https://azure.microsoft.com/documentation/articles/expressroute-faqs/
ExpressRoute enables multiple Azure subscriptions to connect through one ExpressRoute circuit, as documented here:
https://azure.microsoft.com/documentation/articles/expressroute-howto-linkvnet-arm/
https://azure.microsoft.com/documentation/articles/expressroute-howto-circuit-arm/
Forced tunneling in case of cross-premises
For VMs joining on-premises domains through site-to-site, point-to-site, or ExpressRoute, you need to make
sure that the Internet proxy settings are deployed for all the users in those VMs as well. By default,
software running in those VMs or users using a browser to access the internet would not go through the
company proxy, but would connect straight through Azure to the internet. But even the proxy setting is not a
100% solution to direct the traffic through the company proxy, since it is the responsibility of software and
services to check for the proxy. If software running in the VM is not doing that, or an administrator manipulates
the settings, traffic to the Internet can be detoured again directly through Azure to the Internet.
In order to avoid this, you can configure Forced Tunneling with site-to-site connectivity between on-premises
and Azure. The detailed description of the Forced Tunneling feature is published here
https://azure.microsoft.com/documentation/articles/vpn-gateway-forced-tunneling-rm/
Forced Tunneling with ExpressRoute is enabled by customers advertising a default route via the ExpressRoute
BGP peering sessions.
Summary of Azure Networking
This chapter contained many important points about Azure Networking. Here is a summary of the main points:
Azure Virtual Networks allow you to set up the network according to your own needs
Azure Virtual Networks can be leveraged to assign IP address ranges to VMs or assign fixed IP addresses to
VMs
To set up a Site-To-Site or Point-To-Site connection you need to create an Azure Virtual Network first
Once a virtual machine has been deployed, it is no longer possible to change the Virtual Network assigned
to the VM
Quotas in Azure Virtual Machine Services
We need to be clear about the fact that the storage and network infrastructure is shared between VMs running
a variety of services in the Azure infrastructure. And just as in customers' own data centers,
over-provisioning of some of the infrastructure resources does take place to a degree. The Microsoft Azure Platform
uses disk, CPU, network, and other quotas to limit the resource consumption and to preserve consistent and
deterministic performance. The different VM types (A5, A6, etc.) have different quotas for the number of disks,
CPU, RAM, and Network.
NOTE
CPU and memory resources of the VM types supported by SAP are pre-allocated on the host nodes. This means that
once the VM is deployed, the resources on the host are available as defined by the VM type.
When planning and sizing SAP on Azure solutions the quotas for each virtual machine size must be considered.
The VM quotas are described here (Linux) and here (Windows).
The quotas described represent the theoretical maximum values. The limit of IOPS per disk may be achieved
with small I/Os (8 KB) but possibly may not be achieved with large I/Os (1 MB). The IOPS limit is enforced at the
granularity of a single disk.
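The reason large I/Os may not reach the IOPS quota is that a disk also has a throughput limit, and with large blocks that limit binds first. The per-disk quotas used below (500 IOPS, 60 MB/sec) are illustrative assumptions only.

```python
def iops_reachable(iops_quota, io_size_kb, throughput_limit_mbps):
    """IOPS actually achievable when a separate throughput limit applies:
    large I/Os hit the throughput limit before the IOPS quota."""
    throughput_bound = throughput_limit_mbps * 1024 / io_size_kb  # in IOPS
    return min(iops_quota, throughput_bound)

# Assumed per-disk quotas: 500 IOPS and 60 MB/sec throughput.
# With 8 KB I/Os, the IOPS quota is the binding limit:
print(iops_reachable(500, 8, 60))     # 500
# With 1 MB I/Os, throughput caps the disk at ~60 IOPS:
print(iops_reachable(500, 1024, 60))  # 60.0
```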
As a rough decision tree to decide whether an SAP system fits into Azure Virtual Machine Services and its
capabilities or whether an existing system needs to be configured differently in order to deploy the system on
Azure, the decision tree below can be used:
Step 1: The most important information to start with is the SAPS requirement for a given SAP system. The SAPS
requirements need to be separated out into the DBMS part and the SAP application part, even if the SAP system
is already deployed on-premises in a 2-tier configuration. For existing systems, the SAPS related to the
hardware in use often can be determined or estimated based on existing SAP benchmarks. The results can be
found here: http://global.sap.com/campaigns/benchmark/index.epx. For newly deployed SAP systems, you
should have gone through a sizing exercise, which should determine the SAPS requirements of the system. See
also this blog and attached document for SAP sizing on Azure:
http://blogs.msdn.com/b/saponsqlserver/archive/2015/12/01/new-white-paper-on-sizing-sap-solutions-on-azure-public-cloud.aspx
Step 2: For existing systems, the I/O volume and I/O operations per second on the DBMS server should be
measured. For newly planned systems, the sizing exercise for the new system also should give rough ideas of
the I/O requirements on the DBMS side. If unsure, you eventually need to conduct a Proof of Concept.
Step 3: Compare the SAPS requirement for the DBMS server with the SAPS the different VM types of Azure can
provide. The information on SAPS of the different Azure VM types is documented in SAP Note 1928533. The
focus should be on the DBMS VM first since the database layer is the layer in an SAP NetWeaver system that
does not scale out in the majority of deployments. In contrast, the SAP application layer can be scaled out. If
none of the SAP supported Azure VM types can deliver the required SAPS, the workload of the planned SAP
system can't be run on Azure. You either need to deploy the system on-premises or you need to change the
workload volume for the system.
Step 4: As documented here (Linux) and here (Windows), Azure enforces an IOPS quota per disk, independent
of whether you use Standard Storage or Premium Storage. Depending on the VM type, the number of data disks
that can be mounted varies. As a result, you can calculate a maximum IOPS number that can be achieved with
each of the different VM types. Depending on the database file layout, you can stripe disks to become one
volume in the guest OS. However, if the current IOPS volume of a deployed SAP system exceeds the calculated
limits of the largest VM type of Azure and if there is no chance to compensate with more memory, the workload
of the SAP system can be impacted severely. In such cases, you can hit a point where you should not deploy the
system on Azure.
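The check in step 4 can be sketched as follows. The figures in the example (a VM type that can mount 32 data disks with 5000 IOPS quota per disk) are illustrative assumptions; the real values come from the articles referenced above.

```python
def fits_on_azure(measured_iops, max_data_disks, iops_per_disk):
    """Compare a system's measured IOPS against the calculated ceiling
    of a VM type (number of mountable data disks x per-disk quota)."""
    return measured_iops <= max_data_disks * iops_per_disk

# Example with assumed values: 32 data disks at 5000 IOPS each give a
# ceiling of 160000 IOPS, so a system measured at 120000 IOPS fits:
print(fits_on_azure(120000, 32, 5000))  # True
```

If the check fails for the largest SAP-supported VM type and the gap cannot be compensated with more memory, that is the point at which the system should not be deployed on Azure.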
Step 5: Especially for SAP systems which are deployed on-premises in 2-Tier configurations, the chances are
that the system might need to be configured on Azure in a 3-Tier configuration. In this step, you need to check
whether there is a component in the SAP application layer which can't be scaled out and which would not fit
into the CPU and memory resources the different Azure VM types offer. If there indeed is such a component, the
SAP system and its workload can't be deployed into Azure. But if you can scale out the SAP application
components into multiple Azure VMs, the system can be deployed into Azure.
Step 6: If the DBMS and SAP application layer components can be run in Azure VMs, the configuration needs to
be defined with regard to:
Number of Azure VMs
VM types for the individual components
Number of VHDs in DBMS VM to provide enough IOPS
Administration and configuration tasks for the Virtual Machine instance are possible from within the Azure
portal.
Besides restarting and shutting down a Virtual Machine, you can also attach, detach, and create data disks for
the Virtual Machine instance, capture the instance for image preparation, and configure the size of the Virtual
Machine instance.
The Azure portal provides basic functionality to deploy and configure VMs and many other Azure services.
However, not all available functionality is covered by the Azure portal. In the Azure portal, it's not possible to
perform tasks like:
Uploading VHDs to Azure
Copying VMs
Management via Microsoft Azure PowerShell cmdlets
Windows PowerShell is a powerful and extensible framework that has been widely adopted by customers
deploying larger numbers of systems in Azure. After the installation of PowerShell cmdlets on a desktop, laptop
or dedicated management station, the PowerShell cmdlets can be run remotely.
The process to enable a local desktop/laptop for the usage of Azure PowerShell cmdlets and how to configure
those for the usage with the Azure subscription(s) is described in this article.
More detailed steps on how to install, update, and configure the Azure PowerShell cmdlets can also be found in
this chapter of the Deployment Guide.
Customer experience so far has been that PowerShell (PS) is certainly the more powerful tool to deploy VMs
and to create custom steps in the deployment of VMs. All of the customers running SAP instances in Azure are
using PS cmdlets to supplement management tasks they do in the Azure portal or are even using PS cmdlets
exclusively to manage their deployments in Azure. Since the Azure-specific cmdlets share the same naming
convention as the more than 2000 Windows-related cmdlets, it is an easy task for Windows administrators to
leverage those cmdlets.
See example here:
http://blogs.technet.com/b/keithmayer/archive/2015/07/07/18-steps-for-end-to-end-iaas-provisioning-in-the-cloud-with-azure-resource-manager-arm-powershell-and-desired-state-configuration-dsc.aspx
Deployment of the Azure Monitoring Extension for SAP (see chapter Azure Monitoring Solution for SAP in this
document) is only possible via PowerShell or CLI. Therefore it is mandatory to set up and configure PowerShell
or CLI when deploying or administering an SAP NetWeaver system in Azure.
As Azure provides more functionality, new PS cmdlets are going to be added, which requires an update of the
cmdlets. Therefore, it makes sense to check the Azure Download site at least once a month
(https://azure.microsoft.com/downloads/) for a new version of the cmdlets. The new version is installed on top of
the older version.
For a general list of Azure-related PowerShell commands check here:
https://docs.microsoft.com/powershell/azure/overview.
Management via Microsoft Azure CLI commands
For customers who use Linux and want to manage Azure resources, PowerShell might not be an option.
Microsoft offers Azure CLI as an alternative. The Azure CLI provides a set of open source, cross-platform
commands for working with the Azure Platform. The Azure CLI provides much of the same functionality found
in the Azure portal.
For information about installation, configuration and how to use CLI commands to accomplish Azure tasks see
Install the Azure CLI
Deploy and manage virtual machines by using Azure Resource Manager templates and the Azure CLI
Use the Azure CLI for Mac, Linux, and Windows with Azure Resource Manager
Also read chapter Azure CLI for Linux VMs in the Deployment Guide on how to use Azure CLI to deploy the
Azure Monitoring Extension for SAP.
Windows
The Windows settings (like Windows SID and hostname) must be abstracted/generalized on the
on-premises VM via the sysprep command. See more details here:
https://docs.microsoft.com/azure/virtual-machines/windows/upload-generalized-managed
Linux
Follow the steps described in these articles for SUSE, Red Hat, or Oracle Linux, to prepare a VHD to be
uploaded to Azure.
If you have already installed SAP content in your on-premises VM (especially for 2-Tier systems), you can adapt
the SAP system settings after the deployment of the Azure VM through the instance rename procedure
supported by the SAP Software Provisioning Manager (SAP Note 1619720). See chapters Preparation for
deploying a VM with a customer-specific image for SAP and Uploading a VHD from on-premises to Azure of
this document for on-premises preparation steps and upload of a generalized VM to Azure. Read chapter
Scenario 2: Deploying a VM with a custom image for SAP in the Deployment Guide for detailed steps of
deploying such an image in Azure.
Deploying a VM out of the Azure Marketplace
You would like to use a Microsoft or third-party provided VM image from the Azure Marketplace to deploy your
VM. After you have deployed your VM in Azure, you follow the same guidelines and tools to install the SAP
software and/or DBMS inside your VM as you would in an on-premises environment. For a more detailed deployment
description, please see chapter Scenario 1: Deploying a VM out of the Azure Marketplace for SAP in the
Deployment Guide.
Preparing VMs with SAP for Azure
Before uploading VMs into Azure you need to make sure the VMs and VHDs fulfill certain requirements. There
are small differences depending on the deployment method that is used.
Preparation for moving a VM from on-premises to Azure with a non-generalized disk
A common deployment method is to move an existing VM, which runs an SAP system, from on-premises to
Azure. That VM and the SAP system in the VM should just run in Azure using the same hostname and, very likely,
the same SAP SID. In this case, the guest OS of the VM should not be generalized for multiple deployments. If the
on-premises network got extended into Azure (see chapter cross-premises - Deployment of single or multiple
SAP VMs into Azure with the requirement of being fully integrated into the on-premises network in this
document), then even the same domain accounts can be used within the VM as those were used before on-
premises.
Requirements when preparing your own Azure VM Disk are:
Originally, the VHD containing the operating system could have a maximum size of 127 GB only. This
limitation was eliminated at the end of March 2015. Now the VHD containing the operating system can be up
to 1 TB in size, like any other Azure Storage hosted VHD.
It needs to be in the fixed VHD format. Dynamic VHDs or VHDs in VHDX format are not yet supported on
Azure. Dynamic VHDs will be converted to static VHDs when you upload the VHD with PowerShell cmdlets
or CLI.
VHDs which are mounted to the VM and should be mounted again in Azure to the VM need to be in the fixed
VHD format as well. Please read this article (Linux) and this article (Windows) for size limits of data disks.
Dynamic VHDs will be converted to static VHDs when you upload the VHD with PowerShell cmdlets or CLI.
Add another local account with administrator privileges which can be used by Microsoft support or which
can be assigned as context for services and applications to run in until the VM is deployed and more
appropriate users can be used.
For the case of using a Cloud-Only deployment scenario (see chapter Cloud-Only - Virtual Machine
deployments into Azure without dependencies on the on-premises customer network of this document) in
combination with this deployment method, domain accounts might not work once the Azure Disk is
deployed in Azure. This is especially true for accounts which are used to run services like the DBMS or SAP
applications. Therefore you need to replace such domain accounts with VM local accounts and delete the on-
premises domain accounts in the VM. Keeping on-premises domain users in the VM image is not an issue
when the VM is deployed in the cross-premises scenario as described in chapter Cross-Premises -
Deployment of single or multiple SAP VMs into Azure with the requirement of being fully integrated into the
on-premises network in this document.
If domain accounts were used as DBMS logins or users when running the system on-premises and those
VMs are supposed to be deployed in Cloud-Only scenarios, the domain users need to be deleted. You need
to make sure that the local administrator plus another VM local user is added as a login/user into the DBMS
as administrators.
Add other local accounts as those might be needed for the specific deployment scenario.
Windows
In this scenario no generalization (sysprep) of the VM is required to upload and deploy the VM on Azure.
Make sure that drive D:\ is not used. Set disk automount for attached disks as described in chapter Setting
automount for attached disks in this document.
Linux
In this scenario no generalization (waagent -deprovision) of the VM is required to upload and deploy the
VM on Azure. Make sure that /mnt/resource is not used and that ALL disks are mounted via uuid. For the
OS disk, make sure that the bootloader entry also reflects the uuid-based mount.
Windows
Make sure that drive D:\ is not used. Set disk automount for attached disks as described in chapter Setting
automount for attached disks in this document.
Linux
Make sure that /mnt/resource is not used and that ALL disks are mounted via uuid. For the OS disk, make
sure the bootloader entry also reflects the uuid-based mount.
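The requirement to mount ALL disks via UUID translates into /etc/fstab entries of a particular shape, which can be sketched with a small helper. The UUID and mount point in the example are made-up values; on a real system they come from blkid and your own layout.

```python
# Sketch: build an /etc/fstab entry that mounts a filesystem by UUID
# instead of by device name. UUID and mount point are made-up examples.
def fstab_line(uuid, mountpoint, fstype="xfs", options="defaults"):
    """One fstab entry mounting a filesystem by UUID."""
    return f"UUID={uuid} {mountpoint} {fstype} {options} 0 2"

print(fstab_line("0a1b2c3d-0000-0000-0000-000000000000", "/hana/data"))
```

Mounting by UUID matters because device names such as /dev/sdc are not guaranteed to be stable across reboots or redeployments in Azure.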
SAP GUI (for administrative and setup purposes) can be pre-installed in such a template.
Other software necessary to run the VMs successfully in cross-premises scenarios can be installed as long as
this software can work with the rename of the VM.
If the VM is prepared sufficiently to be generic and eventually independent of accounts/users not available in
the targeted Azure deployment scenario, the last preparation step of generalizing such an image is conducted.
Generalizing a VM
Windows
The last step is to log in to a VM with an Administrator account. Open a Windows command window as
administrator. Go to %windir%\system32\sysprep and execute sysprep.exe. A small window appears. It is
important to check the Generalize option (the default is unchecked) and change the Shutdown Option from
its default of Reboot to Shutdown. This procedure assumes that the sysprep process is executed on-premises
in the Guest OS of a VM. If you want to perform the procedure with a VM already running in Azure, follow
the steps described in this article.
Linux
How to capture a Linux virtual machine to use as a Resource Manager template
In this case, we want to upload a VHD, either with or without an OS in it, and mount it to a VM as a data disk or
use it as the OS disk. This is a multi-step process.
PowerShell
Log in to your subscription with Login-AzureRmAccount
Set the subscription of your context with Set-AzureRmContext and parameter SubscriptionId or
SubscriptionName - see https://docs.microsoft.com/powershell/module/azurerm.profile/set-azurermcontext
Upload the VHD with Add-AzureRmVhd to an Azure Storage Account - see
https://docs.microsoft.com/powershell/module/azurerm.compute/add-azurermvhd
(Optional) Create a Managed Disk from the VHD with New-AzureRmDisk - see
https://docs.microsoft.com/powershell/module/azurerm.compute/new-azurermdisk
Set the OS disk of a new VM config to the VHD or Managed Disk with Set-AzureRmVMOSDisk - see
https://docs.microsoft.com/powershell/module/azurerm.compute/set-azurermvmosdisk
Create a new VM from the VM config with New-AzureRmVM - see
https://docs.microsoft.com/powershell/module/azurerm.compute/new-azurermvm
Add a data disk to a new VM with Add-AzureRmVMDataDisk - see
https://docs.microsoft.com/powershell/module/azurerm.compute/add-azurermvmdatadisk
Azure CLI 2.0
Log in to your subscription with az login
Select your subscription with az account set --subscription <subscription name or id>
Upload the VHD with az storage blob upload - see Using the Azure CLI with Azure Storage
(Optional) Create a Managed Disk from the VHD with az disk create - see
https://docs.microsoft.com/cli/azure/disk#az_disk_create
Create a new VM specifying the uploaded VHD or Managed Disk as OS disk with az vm create and
parameter --attach-os-disk
Add a data disk to a new VM with az vm disk attach and parameter --new
Template
Upload the VHD with PowerShell or Azure CLI
(Optional) Create a Managed Disk from the VHD with PowerShell, Azure CLI or the Azure portal
Deploy the VM with a JSON template referencing the VHD as shown in this example JSON template or using
Managed Disks as shown in this example JSON template.
Deployment of a VM Image
To upload an existing VM or VHD from the on-premises network in order to use it as an Azure VM image, such a VM or VHD needs to meet the requirements listed in chapter Preparation for deploying a VM with a customer-specific image for SAP of this document.
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep Technical
Reference for Windows or How to capture a Linux virtual machine to use as a Resource Manager template
for Linux
Log in to your subscription with Login-AzureRmAccount
Set the subscription of your context with Set-AzureRmContext and parameter SubscriptionId or
SubscriptionName - see https://docs.microsoft.com/powershell/module/azurerm.profile/set-azurermcontext
Upload the VHD with Add-AzureRmVhd to an Azure Storage Account - see
https://docs.microsoft.com/powershell/module/azurerm.compute/add-azurermvhd
(Optional) Create a Managed Disk Image from the VHD with New-AzureRmImage - see
https://docs.microsoft.com/powershell/module/azurerm.compute/new-azurermimage
Set the OS disk of a new VM config to the
VHD with Set-AzureRmVMOSDisk -SourceImageUri -CreateOption fromImage - see
https://docs.microsoft.com/powershell/module/azurerm.compute/set-azurermvmosdisk
Managed Disk Image with Set-AzureRmVMSourceImage - see
https://docs.microsoft.com/powershell/module/azurerm.compute/set-azurermvmsourceimage
Create a new VM from the VM config with New-AzureRmVM - see
https://docs.microsoft.com/powershell/module/azurerm.compute/new-azurermvm
Azure CLI 2.0
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep Technical
Reference for Windows or How to capture a Linux virtual machine to use as a Resource Manager template
for Linux
Log in to your subscription with az login
Select your subscription with az account set --subscription <subscription name or id>
Upload the VHD with az storage blob upload - see Using the Azure CLI with Azure Storage
(Optional) Create a Managed Disk Image from the VHD with az image create - see
https://docs.microsoft.com/cli/azure/image#az_image_create
Create a new VM specifying the uploaded VHD or Managed Disk Image as OS disk with az vm create and
parameter --image
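Collapsed into one hedged sketch, the image-based CLI flow above could look like this; all names and paths are placeholders:

```shell
# Sketch only: deploy from a generalized VHD via a Managed Disk Image.
az storage blob upload --account-name <storage account> --container-name vhds \
    --file ./generalized.vhd --name generalized.vhd --type page

# (Optional) Create a Managed Disk Image from the generalized VHD
az image create --resource-group <resource group> --name <image name> \
    --os-type Linux --source https://<storage account>.blob.core.windows.net/vhds/generalized.vhd

# Create a VM from that image
az vm create --resource-group <resource group> --name <vm name> \
    --image <image name> --admin-username <username> --admin-password <password>
```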
Template
Use sysprep on Windows or waagent -deprovision on Linux to generalize your VM - see Sysprep Technical
Reference for Windows or How to capture a Linux virtual machine to use as a Resource Manager template
for Linux
Upload the VHD with PowerShell or Azure CLI
(Optional) Create a Managed Disk Image from the VHD with PowerShell, Azure CLI or the Azure portal
Deploy the VM with a JSON template referencing the image VHD as shown in this example JSON template
or using the Managed Disk Image as shown in this example JSON template.
Downloading VHDs or Managed Disks to on-premises
Azure Infrastructure as a Service is not a one-way street of only being able to upload VHDs and SAP systems.
You can move SAP systems from Azure back into the on-premises world as well.
During the download, the VHDs or Managed Disks can't be active. Even when downloading disks that are mounted to VMs, the VM needs to be shut down and deallocated. If you only want to download the database content, to be used to set up a new system on-premises, and if it is acceptable that the system in Azure remains operational during the download and the setup of the new system, you can avoid a long downtime by performing a compressed database backup to a disk and downloading just that disk instead of the whole OS base VM.
PowerShell
Downloading a Managed Disk
You first need to get access to the underlying blob of the Managed Disk. Then you can copy the
underlying blob to a new storage account and download the blob from this storage account.
Downloading a VHD
Once the SAP system is stopped and the VM is shut down, you can use the PowerShell cmdlet Save-AzureRmVhd on the on-premises target to download the VHD disks back to the on-premises world. In order to do that, you need the URL of the VHD, which you can find in the Storage section of the Azure portal (navigate to the Storage Account and the storage container where the VHD was created), and you need to know where the VHD should be copied to.
Then you can use the command by defining the parameter SourceUri as the URL of the VHD to download and LocalFilePath as the physical location of the VHD (including its name). The command could look like:
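A minimal sketch of such a call, with placeholder values for the resource group, the source URL, and the target path:

```powershell
Save-AzureRmVhd -ResourceGroupName <resource group name> `
    -SourceUri https://<storage account>.blob.core.windows.net/vhds/<disk name>.vhd `
    -LocalFilePath C:\temp\<disk name>.vhd
```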
Azure CLI 2.0
Downloading a VHD
Once the SAP system is stopped and the VM is shut down, you can use the Azure CLI command az storage blob download on the on-premises target to download the VHD disks back to the on-premises world. In order to do that, you need the name and the container of the VHD, which you can find in the Storage section of the Azure portal (navigate to the Storage Account and the storage container where the VHD was created), and you need to know where the VHD should be copied to.
Then you can use the command by defining the parameters blob and container of the VHD to download and the destination as the physical target location of the VHD (including its name). The command could look like:
az storage blob download --name <name of the VHD to download> --container-name <container of the VHD
to download> --account-name <storage account name of the VHD to download> --account-key <storage
account key> --file <destination of the VHD to download>
Data disks can also be Managed Disks. In this case, the Managed Disk is used to create a new Managed Disk
before being attached to the virtual machine. The name of the Managed Disk must be unique within a resource
group.
PowerShell
You can use Azure PowerShell cmdlets to copy a VHD as shown in this article. To create a new Managed Disk,
use New-AzureRmDiskConfig and New-AzureRmDisk as shown in the following example.
$config = New-AzureRmDiskConfig -CreateOption Copy -SourceUri "/subscriptions/<subscription
id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>" -Location <location>
New-AzureRmDisk -ResourceGroupName <resource group name> -DiskName <disk name> -Disk $config
CLI 2.0
You can use Azure CLI to copy a VHD as shown in this article. To create a new Managed Disk, use az disk create
as shown in the following example.
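A hedged sketch of such a copy with az disk create, using placeholder resource IDs:

```shell
# Create a new Managed Disk as a copy of an existing Managed Disk (placeholder IDs).
az disk create --resource-group <resource group name> --name <new disk name> \
    --source "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>"
```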
Azure Storage tools
http://storageexplorer.com/
Professional editions of Azure Storage Explorers can be found here:
http://www.cerebrata.com/
http://clumsyleaf.com/products/cloudxplorer
The copy of a VHD within a storage account is a process that takes only a few seconds (similar to SAN hardware creating snapshots with lazy copy and copy on write). After you have a copy of the VHD file, you can attach it to a virtual machine or use it as an image to attach copies of the VHD to virtual machines.
PowerShell
# attach a vhd to a vm
$vm = Get-AzureRmVM -ResourceGroupName <resource group name> -Name <vm name>
$vm = Add-AzureRmVMDataDisk -VM $vm -Name newdatadisk -VhdUri <path to vhd> -Caching <caching option> -
DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption attach
$vm | Update-AzureRmVM
CLI 2.0
# attach a vhd to a vm
az vm unmanaged-disk attach --resource-group <resource group name> --vm-name <vm name> --vhd-uri <path to
vhd>
You can also copy VHDs between subscriptions. For more information read this article.
The basic flow of the PS cmdlet logic looks like this:
Create a storage account context for the source storage account with New-AzureStorageContext - see
https://msdn.microsoft.com/library/dn806380.aspx
Create a storage account context for the target storage account with New-AzureStorageContext - see
https://msdn.microsoft.com/library/dn806380.aspx
Start the copy with
Start-AzureStorageBlobCopy -SrcBlob <source blob name> -SrcContainer <source container name> -SrcContext
<variable containing context of source storage account> -DestBlob <target blob name> -DestContainer <target
container name> -DestContext <variable containing context of target storage account>
Check the status of the copy with
Get-AzureStorageBlobCopyState -Blob <target blob name> -Container <target container name> -Context
<variable containing context of target storage account>
The corresponding Azure CLI 2.0 command is:
az storage blob copy start --source-blob <source blob name> --source-container <source container name> --
source-account-name <source storage account name> --source-account-key <source storage account key> --
destination-container <target container name> --destination-blob <target blob name> --account-name <target
storage account name> --account-key <target storage account key>
Windows
With many customers, we have seen configurations where, for example, SAP and DBMS binaries were not installed
on the c:\ drive where the OS was installed. There were various reasons for this, but the root cause usually
was that drives were small and OS upgrades needed additional space 10-15 years ago. Both conditions rarely
apply these days. Today the c:\ drive can be mapped on large volume disks or VMs. In order to keep
deployments simple in their structure, it is recommended to follow this deployment pattern for SAP NetWeaver
systems in Azure:
The Windows operating system pagefile should be on the D: drive (non-persistent disk)
Linux
Place the Linux swap file under /mnt/resource on Linux as described in this article. The swap file can be
configured in the configuration file of the Linux Agent /etc/waagent.conf. Add or change the following
settings:
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=30720
To activate the changes, you need to restart the Linux Agent, for example with sudo service waagent restart (the exact service name can differ by distribution).
Please read SAP Note 1597355 for more details on the recommended swap file size.
The number of disks used for the DBMS data files and the type of Azure Storage these disks are hosted on
should be determined by the IOPS requirements and the latency required. Exact quotas are described in this
article (Linux) and this article (Windows).
Experience from SAP deployments over the last two years has taught us some lessons, which can be summarized as:
IOPS traffic to different data files is not always the same, since existing customer systems might have
differently sized data files representing their SAP database(s). As a result, it turned out to be better to use
a RAID configuration over multiple disks and place the data files on LUNs carved out of those. There were
situations, especially with Azure Standard Storage, where the IOPS rate against the DBMS transaction log hit
the quota of a single disk. In such scenarios, the use of Premium Storage is recommended, or alternatively
aggregating multiple Standard Storage disks with a software RAID.
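As a sketch of the last option, aggregating several Standard Storage data disks with a software RAID on Linux could look like the following; the device names and mount point are assumptions, and the articles referenced in this document give full guidance:

```shell
# Sketch: stripe three attached data disks into one volume (assumed device names; run as root).
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde
mkfs.xfs /dev/md0                       # format the striped device
mkdir -p /datadrive
mount /dev/md0 /datadrive               # place DBMS data files below this mount
```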
Windows
Performance best practices for SQL Server in Azure Virtual Machines
Linux
Configure Software RAID on Linux
Configure LVM on a Linux VM in Azure
Azure Storage secrets and Linux I/O optimizations
Premium Storage shows significantly better performance, especially for critical transaction log writes. For
SAP scenarios that are expected to deliver production-like performance, it is highly recommended to use
VM series that can leverage Azure Premium Storage.
Keep in mind that the disk which contains the OS and, as we recommend, the binaries of SAP and the database
(base VM) as well, is no longer limited to 127 GB. It can now be up to 1 TB in size. This should be enough
space to keep all the necessary files, including, for example, SAP batch job logs.
For more suggestions and details, specifically for DBMS VMs, please consult the DBMS Deployment Guide.
Disk Handling
In most scenarios, you need to create additional disks in order to deploy the SAP database into the VM. We
talked about the considerations on the number of disks in chapter VM/disk structure for SAP deployments of this
document. The Azure portal allows you to attach and detach disks once a base VM is deployed. The disks can be
attached/detached while the VM is up and running as well as when it is stopped. When attaching a disk, the
Azure portal offers to attach an empty disk or an existing disk which, at this point in time, is not attached to
another VM.
Note: Disks can only be attached to one VM at any given time.
During the deployment of a new virtual machine, you can decide whether you want to use Managed Disks or
place your disks on Azure Storage Accounts. If you want to use Premium Storage, we recommend using
Managed Disks.
Next, you need to decide whether you want to create a new and empty disk or whether you want to select an
existing disk that was uploaded earlier and should be attached to the VM now.
IMPORTANT: You DO NOT want to use Host Caching with Azure Standard Storage. You should leave the Host
Cache preference at the default of NONE. With Azure Premium Storage, you should enable Read Caching if the
I/O characteristic is mostly read, as is typical for I/O traffic against database data files. In the case of
database transaction log files, no caching is recommended.
Windows
How to attach a data disk in the Azure portal
If disks are attached, you need to log in to the VM to open the Windows Disk Manager. If automount is not
enabled as recommended in chapter Setting automount for attached disks, the newly attached volume
needs to be taken online and initialized.
Linux
If disks are attached, you need to log in to the VM and initialize the disks as described in this article
If the new disk is an empty disk, you need to format the disk as well. For formatting, especially for DBMS data
and log files, the same recommendations apply as for bare-metal deployments of the DBMS.
As already mentioned in chapter The Microsoft Azure Virtual Machine Concept, an Azure Storage Account does
not provide infinite resources in terms of I/O volume, IOPS, and data volume. Usually, DBMS VMs are most
affected by this. It might be best to use a separate Storage Account for each VM if you have only a few high I/O
volume VMs to deploy, in order to stay within the limits of the Azure Storage Account volume. Otherwise, you
need to see how you can balance these VMs between different Storage Accounts without hitting the limit of
each single Storage Account. More details are discussed in the DBMS Deployment Guide. You should also keep
these limitations in mind for pure SAP application server VMs or other VMs which might eventually require
additional VHDs. These restrictions do not apply if you use Managed Disks. If you plan to use Premium Storage,
we recommend using Managed Disks.
Another topic which is relevant for Storage Accounts is whether the VHDs in a Storage Account are getting Geo-
replicated. Geo-replication is enabled or disabled on the Storage Account level and not on the VM level. If geo-
replication is enabled, the VHDs within the Storage Account would be replicated into another Azure data center
within the same region. Before deciding on this, you should think about the following restriction:
Azure Geo-replication works locally on each VHD in a VM and does not replicate the IOs in chronological order
across multiple VHDs in a VM. Therefore, the VHD that represents the base VM as well as any additional VHDs
attached to the VM are replicated independent of each other. This means there is no synchronization between
the changes in the different VHDs. The fact that the IOs are replicated independently of the order in which they
are written means that geo-replication is not of value for database servers that have their databases distributed
over multiple VHDs. In addition to the DBMS, there also might be other applications where processes write or
manipulate data in different VHDs and where it is important to keep the order of changes. If that is a
requirement, geo-replication in Azure should not be enabled. Depending on whether you need or want geo-
replication for one set of VMs but not for another, you can categorize VMs and their related VHDs into
different Storage Accounts that have geo-replication enabled or disabled.
Setting automount for attached disks
Windows
For VMs which are created from your own images or disks, it is necessary to check and possibly set the
automount parameter. Setting this parameter allows the VM to automatically remount the attached drives
after a restart or redeployment in Azure. The parameter is set for the images provided by Microsoft in the
Azure Marketplace.
In order to set the automount, please check the documentation of the command-line executable diskpart.exe
here:
DiskPart Command-Line Options
Automount
The Windows command-line window should be opened as administrator.
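From that elevated command prompt, enabling automount with diskpart looks like this:

```
C:\> diskpart
DISKPART> automount enable
```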
If disks are attached, you need to log in to the VM to open the Windows Disk Manager. If automount is not
enabled as recommended in chapter Setting automount for attached disks, the newly attached volume
needs to be taken online and initialized.
Linux
You need to initialize a newly attached empty disk as described in this article. You also need to add new
disks to the /etc/fstab.
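A hedged sketch of initializing such a disk and adding it to /etc/fstab; the device name and mount point are assumptions, and mounting by UUID keeps the entry stable if device names change:

```shell
# Format the newly attached disk and mount it persistently (assumed names; run as root).
mkfs.xfs /dev/sdc                              # format the empty disk
mkdir -p /datadrive
UUID=$(blkid -s UUID -o value /dev/sdc)        # stable identifier for fstab
echo "UUID=$UUID /datadrive xfs defaults,nofail 0 2" >> /etc/fstab
mount /datadrive
```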
Final Deployment
For the final deployment and exact steps, especially with regards to the deployment of SAP Extended
Monitoring, please refer to the Deployment Guide.
Windows
By default, the Windows Firewall within an Azure deployed VM is turned on. You now need to open the SAP
port, otherwise the SAP GUI cannot connect. To do this:
Open Control Panel\System and Security\Windows Firewall and go to Advanced Settings.
Now right-click on Inbound Rules and choose New Rule.
In the following wizard, choose to create a new Port rule.
In the next step of the wizard, leave the setting at TCP and type in the port number you want to open.
Since our SAP instance ID is 00, we took 3200. If your instance has a different instance number, the port
defined earlier based on the instance number should be opened.
In the next part of the wizard, leave the item Allow Connection checked.
In the next step of the wizard, define whether the rule applies to the Domain, Private, and Public
network. Adjust this to your needs. However, if you connect with SAP GUI from the outside through the
public network, the rule needs to apply to the public network.
In the last step of the wizard, name the rule and save it by pressing Finish.
The rule becomes effective immediately.
Linux
The Linux images in the Azure Marketplace do not enable the iptables firewall by default, so the connection
to your SAP system should work. If you enabled iptables or another firewall, refer to the documentation of
iptables or of the firewall in use to allow inbound TCP traffic to port 32xx (where xx is the system number
of your SAP system).
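The port number follows the pattern 32xx: the literal 32 followed by the two-digit system number. A small sketch that derives the port, with the matching iptables rule shown commented out since it needs root and is only an illustration:

```shell
# Derive the SAP dispatcher port from the two-digit system number.
nn=00                 # system number of this example SAP system
port="32${nn}"
echo "$port"          # prints 3200

# Matching iptables rule (requires root; shown for illustration only):
# iptables -A INPUT -p tcp --dport "$port" -j ACCEPT
```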
Security recommendations
The SAP GUI does not connect immediately to any of the SAP instances (port 32xx) that are running, but first
connects via the port opened to the SAP message server process (port 36xx). In the past, the very same port was
used by the message server for the internal communication with the application instances. To prevent on-
premises application servers from inadvertently communicating with a message server in Azure, the internal
communication ports can be changed. It is highly recommended to change the internal communication
between the SAP message server and its application instances to a different port number on systems that have
been cloned from on-premises systems, such as a clone of development for project testing. This can be done
with the default profile parameter:
rdisp/msserv_internal
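In the default profile, the parameter could, for example, be set as follows; the port value 3900 is an assumption, pick a free port that fits your landscape:

```
rdisp/msserv_internal = 3900
```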
In this scenario (see chapter Cloud-Only of this document), we implement a typical training/demo system
where the complete training/demo scenario is contained in a single VM. We assume that the deployment is
done through VM image templates. We also assume that multiple of these demo/training VMs need to be
deployed with the VMs having the same name.
The assumption is that you created a VM Image as described in some sections of chapter Preparing VMs with
SAP for Azure in this document.
The sequence of events to implement the scenario looks like this:
PowerShell
Create a new storage account if you don't want to use Managed Disks
Create a new virtual network for every training/demo landscape to enable the usage of the same hostname
and IP addresses. The virtual network is protected by a Network Security Group that only allows traffic to
port 3389 to enable Remote Desktop access and port 22 for SSH.
Create a new public IP address that can be used to access the virtual machine from the internet
Create a virtual machine. For the Cloud-Only scenario every VM will have the same name. The SAP SID of
the SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group, the
name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs with the same
name. The default 'Administrator' account of Windows or 'root' for Linux are not valid. Therefore, a new
administrator user name needs to be defined together with a password. The size of the VM also needs to be
defined.
#####
# Create a new virtual machine with an official image from the Azure Marketplace
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzureRmVMConfig -VMName SAPERPDemo -VMSize Standard_D11
# select image
$vmconfig = Set-AzureRmVMSourceImage -VM $vmconfig -PublisherName "MicrosoftWindowsServer" -Offer
"WindowsServer" -Skus "2012-R2-Datacenter" -Version "latest"
$vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential
$cred -ProvisionVMAgent -EnableAutoUpdate
# $vmconfig = Set-AzureRmVMSourceImage -VM $vmconfig -PublisherName "SUSE" -Offer "SLES-SAP" -Skus "12-SP1"
-Version "latest"
# $vmconfig = Set-AzureRmVMSourceImage -VM $vmconfig -PublisherName "RedHat" -Offer "RHEL" -Skus "7.2" -
Version "latest"
# $vmconfig = Set-AzureRmVMSourceImage -VM $vmconfig -PublisherName "Oracle" -Offer "Oracle-Linux" -Skus
"7.2" -Version "latest"
# $vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential
$cred
#####
# Create a new virtual machine with a VHD that contains the private image that you want to use
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzureRmVMConfig -VMName SAPERPDemo -VMSize Standard_D11
$diskName="osfromimage"
# $account must hold the storage account object, for example from Get-AzureRmStorageAccount
$osDiskUri=$account.PrimaryEndpoints.Blob.ToString() + "vhds/" + $diskName + ".vhd"
$vmconfig = Set-AzureRmVMOSDisk -VM $vmconfig -Name $diskName -VhdUri $osDiskUri -CreateOption fromImage -
SourceImageUri <path to VHD that contains the OS image> -Windows
$vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential
$cred
#$vmconfig = Set-AzureRmVMOSDisk -VM $vmconfig -Name $diskName -VhdUri $osDiskUri -CreateOption fromImage -
SourceImageUri <path to VHD that contains the OS image> -Linux
#$vmconfig = Set-AzureRmVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential $cred
#####
# Create a new virtual machine with a Managed Disk Image
#####
$cred=Get-Credential -Message "Type the name and password of the local administrator account."
$vmconfig = New-AzureRmVMConfig -VMName SAPERPDemo -VMSize Standard_D11
CLI
The following example code can be used on Linux. For Windows, please either use PowerShell as described
above or adapt the example to use %rgName% instead of $rgName and set the environment variable using the
Windows command set.
Create a new resource group for every training/demo landscape
rgName=SAPERPDemo1
rgNameLower=saperpdemo1
az group create --name $rgName --location "North Europe"
az storage account create --resource-group $rgName --location "North Europe" --kind Storage --sku
Standard_LRS --name $rgNameLower
Create a new virtual network for every training/demo landscape to enable the usage of the same hostname
and IP addresses. The virtual network is protected by a Network Security Group that only allows traffic to
port 3389 to enable Remote Desktop access and port 22 for SSH.
az network nsg create --resource-group $rgName --location "North Europe" --name SAPERPDemoNSG
az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name SAPERPDemoNSGRDP --
protocol \* --source-address-prefix \* --source-port-range \* --destination-address-prefix \* --
destination-port-range 3389 --access Allow --priority 100 --direction Inbound
az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name SAPERPDemoNSGSSH --
protocol \* --source-address-prefix \* --source-port-range \* --destination-address-prefix \* --
destination-port-range 22 --access Allow --priority 101 --direction Inbound
az network vnet create --resource-group $rgName --name SAPERPDemoVNet --location "North Europe" --address-
prefixes 10.0.1.0/24
az network vnet subnet create --resource-group $rgName --vnet-name SAPERPDemoVNet --name Subnet1 --address-
prefix 10.0.1.0/24 --network-security-group SAPERPDemoNSG
Create a new public IP address that can be used to access the virtual machine from the internet
az network public-ip create --resource-group $rgName --name SAPERPDemoPIP --location "North Europe" --dns-
name $rgNameLower --allocation-method Dynamic
Create a virtual machine. For the Cloud-Only scenario every VM will have the same name. The SAP SID of
the SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group, the
name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs with the same
name. The default 'Administrator' account of Windows or 'root' for Linux are not valid. Therefore, a new
administrator user name needs to be defined together with a password. The size of the VM also needs to be
defined.
#####
# Create virtual machines using storage accounts
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-username <username> --admin-
password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-
container-name vhds --os-disk-name os
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password <password> --size
Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-
name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password <password> --size Standard_D11 --
use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --
authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-password <password> --size
Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-
name os --authentication-type password
#####
# Create virtual machines using Managed Disks
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-username <username> --admin-
password <password> --size Standard_DS11_v2 --os-disk-name os
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password <password> --size
Standard_DS11_v2 --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password <password> --size
Standard_DS11_v2 --os-disk-name os --authentication-type password
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-password <password> --size
Standard_DS11_v2 --os-disk-name os --authentication-type password
#####
# Create a new virtual machine with a VHD that contains the private image that you want to use
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
os-type Windows --admin-username <username> --admin-password <password> --size Standard_D11 --use-
unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --image <path
to image vhd>
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
os-type Linux --admin-username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-
disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --image <path to image
vhd> --authentication-type password
#####
# Create a new virtual machine with a Managed Disk Image
#####
az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
admin-username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os --image
<managed disk image id>
#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --
admin-username <username> --admin-password <password> --size Standard_DS11_v2 --os-disk-name os --image
<managed disk image id> --authentication-type password
Optionally add additional disks and restore necessary content. Be aware that all blob names (URLs to the
blobs) must be unique within Azure.
Template
You can use the sample templates from the azure-quickstart-templates repository on GitHub.
Simple Linux VM
Simple Windows VM
VM from image
Implement a set of VMs which need to communicate within Azure
This Cloud-Only scenario is a typical scenario for training and demo purposes where the software representing
the demo/training scenario is spread over multiple VMs. The different components installed in the different
VMs need to communicate with each other. Again, in this scenario no on-premises network communication or
cross-premises scenario is needed.
This scenario is an extension of the installation described in chapter Single VM with SAP NetWeaver
demo/training scenario of this document. In this case more virtual machines will be added to an existing
resource group. In the following example the training landscape consists of an SAP ASCS/SCS VM, a VM
running a DBMS and an SAP Application Server instance VM.
Before you build this scenario, you need to think about basic settings, as already exercised in the previous scenario.
Resource Group and Virtual Machine naming
All resource group names must be unique. Develop your own naming scheme for your resources, such as
<rg-name>-suffix.
The virtual machine name has to be unique within the resource group.
Set up Network for communication between the different VMs
To prevent naming collisions with clones of the same training/demo landscapes, you need to create an Azure
Virtual Network for every landscape. DNS name resolution will be provided by Azure or you can configure your
own DNS server outside Azure (not to be further discussed here). In this scenario we do not configure our own
DNS. For all virtual machines inside one Azure Virtual Network, communication via hostnames will be enabled.
The reasons to separate training or demo landscapes by virtual networks and not only resource groups could
be:
The SAP landscape as set up needs its own AD/OpenLDAP and a Domain Server needs to be part of each of
the landscapes.
The SAP landscape as set up has components that need to work with fixed IP addresses.
More details about Azure Virtual Networks and how to define them can be found in this article.
The default ports of the SAP services are:
Dispatcher: port name sapdp<nn>, example 3201, port range 3200-3299; SAP Dispatcher, used by SAP GUI for
Windows and Java.
Message server: port name sapms<SID>, example 3600, port range free (sapms<anySID>); SID = SAP System ID.
Gateway: port name sapgw<nn>, example 3301, port range free; SAP gateway, used for CPIC and RFC
communication.
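The default dispatcher, gateway, and message server ports follow a simple pattern based on the two-digit instance number <nn>. A minimal sketch of that arithmetic (the helper function is illustrative, not part of any SAP tool; the message server default of 36<nn> is configurable):

```python
def sap_ports(instance_number):
    """Derive default SAP service ports for a two-digit instance number <nn>."""
    if not 0 <= instance_number <= 99:
        raise ValueError("instance number must be between 00 and 99")
    return {
        "dispatcher": 3200 + instance_number,      # sapdp<nn>, range 3200-3299
        "gateway": 3300 + instance_number,         # sapgw<nn>
        "message_server": 3600 + instance_number,  # sapms<SID>, e.g. 3600 for instance 00
    }

print(sap_ports(1))  # {'dispatcher': 3201, 'gateway': 3301, 'message_server': 3601}
```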
Setting up your on-premises TCP/IP based network printers in an Azure VM is overall the same as in your
corporate network, assuming you do have a VPN Site-To-Site tunnel or ExpressRoute connection established.
Windows
To do this:
Some network printers come with a configuration wizard which makes it easy to set up your printer in
an Azure VM. If no wizard software has been distributed with the printer the manual way to set up the
printer is to create a new TCP/IP printer port.
Open Control Panel -> Devices and Printers -> Add a printer
Choose Add a printer using a TCP/IP address or hostname
Type in the IP address of the printer
Printer Port standard 9100
If necessary install the appropriate printer driver manually.
Linux
As with Windows, follow the standard procedure to install a network printer: consult the public Linux guides for
SUSE, Red Hat, or Oracle Linux on how to add a printer.
Host-based printer over SMB (shared printer) in Cross-Premises scenario
Host-based printers are not network-compatible by design. But a host-based printer can be shared among
computers on a network as long as the printer is connected to a powered-on computer. Connect your corporate
network either via Site-to-Site VPN or ExpressRoute and share your local printer. The SMB protocol uses NetBIOS
instead of DNS as its name service. The NetBIOS host name can be different from the DNS host name, though the
standard case is that they are identical. The DNS domain has no meaning in the NetBIOS name space. Accordingly,
the fully qualified DNS host name, consisting of the DNS host name and DNS domain, must not be used in the
NetBIOS name space.
The printer share is identified by a unique name in the network:
Host name of the SMB host (always needed).
Name of the share (always needed).
Name of the domain if printer share is not in the same domain as SAP system.
Additionally, a user name and a password may be required to access the printer share.
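Put together, the share is addressed by a UNC name, optionally with a DOMAIN\user credential. A tiny illustrative sketch (host, share, and user names are made up):

```python
def printer_share_path(host, share):
    """Build the UNC name of an SMB printer share; use the NetBIOS host name, not the FQDN."""
    return f"\\\\{host}\\{share}"

def share_credential(user, domain=None):
    """Credential in DOMAIN\\user form when the share is in a different domain than the SAP system."""
    return f"{domain}\\{user}" if domain else user

print(printer_share_path("printsrv", "HP-Color01"))  # \\printsrv\HP-Color01
print(share_credential("printuser", "CORP"))         # CORP\printuser
```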
How to:
Windows
Share your local printer. In the Azure VM, open the Windows Explorer and type in the share name of the
printer. A printer installation wizard will guide you through the installation process.
Linux
Here are some examples of documentation about configuring network printers in Linux or including a
chapter regarding printing in Linux. It will work the same way in an Azure Linux VM as long as the VM is
part of a VPN:
SLES https://en.opensuse.org/SDB:Printing_via_SMB_(Samba)_Share_or_Windows_Share
RHEL or Oracle Linux https://access.redhat.com/documentation/en-
US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/sec-Printer_Configuration.html#s1-
printing-smb-printer
USB Printer (printer forwarding)
In Azure, the ability of Remote Desktop Services to give users access to their local printer devices in a remote
session is not available.
Windows
More details on printing with Windows can be found here:
http://technet.microsoft.com/library/jj590748.aspx.
Integration of SAP Azure Systems into Correction and Transport System (TMS) in Cross-Premises
The SAP Change and Transport System (TMS) needs to be configured to export and import transport requests
across systems in the landscape. We assume that the development instances of an SAP system (DEV) are
located in Azure whereas the quality assurance (QA) and productive systems (PRD) are on-premises.
Furthermore, we assume that there is a central transport directory.
Configuring the Transport Domain
Configure your Transport Domain on the system you designated as the Transport Domain Controller as
described in Configuring the Transport Domain Controller. A system user TMSADM will be created and the
required RFC destination will be generated. You may check these RFC connections using the transaction SM59.
Hostname resolution must be enabled across your transport domain.
How to:
In our scenario we decided the on-premises QAS system will be the CTS domain controller. Call transaction
STMS. The TMS dialog box appears. A Configure Transport Domain dialog box is displayed. (This dialog box
only appears if you have not yet configured a transport domain.)
Make sure that the automatically created user TMSADM is authorized (SM59 -> ABAP Connection ->
TMSADM@E61.DOMAIN_E61 -> Details -> Utilities(M) -> Authorization Test). The initial screen of
transaction STMS should show that this SAP System is now functioning as the controller of the transport
domain as shown here:
Supportability
Azure Monitoring Solution for SAP
To enable the monitoring of mission-critical SAP systems on Azure, the SAP monitoring tools SAPOSCOL or
SAP Host Agent get data from the Azure Virtual Machine Service host via an Azure Monitoring Extension for
SAP. Since the demands by SAP were very specific to SAP applications, Microsoft decided not to implement the
required functionality generically into Azure, but to let customers deploy the necessary monitoring components
and configurations to their Virtual Machines running in Azure. However, deployment and lifecycle management
of the monitoring components is mostly automated by Azure.
Solution design
The solution developed to enable SAP monitoring is based on the architecture of the Azure VM Agent and
Extension framework, which allows the installation of software applications available in the Azure VM
Extension gallery within a VM. The principal idea behind this concept is to allow, in cases like the Azure
Monitoring Extension for SAP, the deployment of special functionality into a VM and the configuration of such
software at deployment time.
The 'Azure VM Agent' that enables handling of specific Azure VM Extensions within the VM is injected into
Windows VMs by default on VM creation in the Azure portal. In case of SUSE, Red Hat or Oracle Linux, the VM
agent is already part of the Azure Marketplace image. In case one would upload a Linux VM from on-premises
to Azure the VM agent has to be installed manually.
The basic building blocks of the monitoring solution in Azure for SAP look like this:
As shown in the block diagram above, one part of the monitoring solution for SAP is hosted in the Azure VM
Image and Azure Extension Gallery which is a globally replicated repository that is managed by Azure
Operations. It is the responsibility of the joint SAP/MS team working on the Azure implementation of SAP to
work with Azure Operations to publish new versions of the Azure Monitoring Extension for SAP.
When you deploy a new Windows VM, the Azure VM Agent is automatically added into the VM. The function
of this agent is to coordinate the loading and configuration of the Azure Extensions for monitoring of SAP
NetWeaver Systems. For Linux VMs the Azure VM Agent is already part of the Azure Marketplace OS image.
However, there is a step that still needs to be executed by the customer. This is the enablement and
configuration of the performance collection. The process related to the configuration is automated by a
PowerShell script or CLI command. The PowerShell script can be downloaded in the Microsoft Azure Script
Center as described in the Deployment Guide.
The overall Architecture of the Azure monitoring solution for SAP looks like:
For the exact how-to and for detailed steps of using these PowerShell cmdlets or CLI command
during deployments, follow the instructions given in the Deployment Guide.
Integration of Azure located SAP instance into SAProuter
SAP instances running in Azure need to be accessible from SAProuter as well.
A SAProuter enables TCP/IP communication between participating systems if there is no direct IP connection.
This provides the advantage that no end-to-end connection between the communication partners is necessary at
the network level. The SAProuter listens on port 3299 by default. To connect SAP instances through a
SAProuter, you need to provide the SAProuter string and host name with any attempt to connect.
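The SAProuter string chains one /H/<host>/S/<port> entry per router hop, followed by the target host. A sketch of how such a string is assembled (host names are placeholders; the function is illustrative, not an SAP tool):

```python
DEFAULT_SAPROUTER_PORT = 3299  # SAProuter listens on this port by default

def saprouter_string(hops, target_host):
    """Build a SAProuter route string from (host, port) hops plus the target SAP host."""
    route = "".join(f"/H/{host}/S/{port}" for host, port in hops)
    return f"{route}/H/{target_host}"

print(saprouter_string([("saprouter.example.com", DEFAULT_SAPROUTER_PORT)], "10.1.0.5"))
# /H/saprouter.example.com/S/3299/H/10.1.0.5
```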
A special deployment scenario by some customers is the direct exposure of the SAP Enterprise Portal to the
Internet while the virtual machine host is connected to the company network via site-to-site VPN tunnel or
ExpressRoute. For such a scenario, you have to make sure that specific ports are open and not blocked by
firewall or network security group. The same mechanics would need to be applied when you want to connect to
an SAP Java instance from on-premises in a Cloud-Only scenario.
The initial portal URI is http(s)://<Portalserver>:5XX00/irj, where the port is formed as 50000 plus
(system number × 100). The default portal URI of SAP system 00 is
<dns name>.<azure region>.cloudapp.azure.com:PublicPort/irj. For more details, have a look at
http://help.sap.com/saphelp_nw70ehp1/helpdata/de/a2/f9d7fed2adc340ab462ae159d19509/frameset.htm.
If you want to customize the URL and/or ports of your SAP Enterprise Portal, please check this documentation:
Change Portal URL
Change Default port numbers, Portal port numbers
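The port arithmetic above (50000 plus 100 times the system number) can be sketched as follows; the helper function and server name are illustrative only:

```python
def portal_url(server, system_number, https=False):
    """Default SAP Enterprise Portal URL: port 50000 + 100 * system number, path /irj."""
    port = 50000 + 100 * system_number
    scheme = "https" if https else "http"
    return f"{scheme}://{server}:{port}/irj"

print(portal_url("portalserver", 0))  # http://portalserver:50000/irj
print(portal_url("portalserver", 1))  # http://portalserver:50100/irj
```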
High Availability (HA) and Disaster Recovery (DR) for SAP NetWeaver
running on Azure Virtual Machines
Definition of terminologies
The term high availability (HA) is generally related to a set of technologies that minimizes IT disruptions by
providing business continuity of IT services through redundant, fault-tolerant or failover protected components
inside the same data center. In our case, within one Azure Region.
Disaster recovery (DR) also targets minimizing the disruption of IT services and their recovery, but across
different data centers that are usually located hundreds of kilometers apart. In our case, usually between
different Azure Regions within the same geopolitical region, or as established by you as a customer.
Overview of High Availability
We can separate the discussion about SAP high availability in Azure into two parts:
Azure infrastructure high availability, for example HA of compute (VMs), network, storage etc. and its
benefits for increasing SAP application availability.
SAP application high availability, for example HA of SAP software components:
SAP application servers
SAP ASCS/SCS instance
DB server
and how it can be combined with Azure infrastructure HA.
SAP High Availability in Azure has some differences compared to SAP High Availability in an on-premises
physical or virtual environment. The following paper from SAP describes standard SAP High Availability
configurations in virtualized environments on Windows: http://scn.sap.com/docs/DOC-44415. There is no
sapinst-integrated SAP-HA configuration for Linux like it exists for Windows. Regarding SAP HA on-premises
for Linux find more information here: http://scn.sap.com/docs/DOC-8541.
Azure Infrastructure High Availability
There is currently a single-VM SLA of 99.9%. To get an idea of what the availability of a single VM might look
like, you can simply build the product of the different available Azure SLAs:
https://azure.microsoft.com/support/legal/sla/.
The basis for the calculation is 30 days per month, or 43200 minutes. Therefore, 0.05% downtime corresponds
to 21.6 minutes. The availability of the different services multiplies in the following way:
(Availability Service #1/100) * (Availability Service #2/100) * (Availability Service #3/100) * ...
For example:
(99.95/100) * (99.9/100) * (99.9/100) = 0.9975, or an overall availability of 99.75%.
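The SLA multiplication above can be sketched as a small calculation (the numbers are the sample SLAs from the text):

```python
def composite_availability(slas_percent):
    """Multiply individual service SLAs (in percent) into an overall availability."""
    overall = 1.0
    for sla in slas_percent:
        overall *= sla / 100.0
    return overall * 100.0

def monthly_downtime_minutes(availability_percent, minutes_per_month=43200):
    """Downtime budget for a 30-day month (43,200 minutes)."""
    return (100.0 - availability_percent) / 100.0 * minutes_per_month

print(round(composite_availability([99.95, 99.9, 99.9]), 2))  # 99.75
print(round(monthly_downtime_minutes(99.95), 1))              # 21.6
```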
Virtual Machine (VM) High Availability
There are two types of Azure platform events that can affect the availability of your virtual machines: planned
maintenance and unplanned maintenance.
Planned maintenance events are periodic updates made by Microsoft to the underlying Azure platform to
improve overall reliability, performance, and security of the platform infrastructure that your virtual
machines run on.
Unplanned maintenance events occur when the hardware or physical infrastructure underlying your virtual
machine has faulted in some way. This may include local network failures, local disk failures, or other rack
level failures. When such a failure is detected, the Azure platform will automatically migrate your virtual
machine from the unhealthy physical server hosting your virtual machine to a healthy physical server. Such
events are rare, but may also cause your virtual machine to reboot.
More details can be found in this documentation: http://azure.microsoft.com/documentation/articles/virtual-
machines-manage-availability
Azure Storage Redundancy
The data in your Microsoft Azure Storage Account is always replicated to ensure durability and high availability,
meeting the Azure Storage SLA even in the face of transient hardware failures.
Since Azure Storage keeps three images of the data by default, RAID 5 or RAID 1 across multiple Azure disks is
not necessary.
More details can be found in this article: http://azure.microsoft.com/documentation/articles/storage-
redundancy/
Utilizing Azure Infrastructure VM Restart to Achieve Higher Availability of SAP Applications
If you decide not to use functionalities like Windows Server Failover Clustering (WSFC) or Pacemaker on Linux
(currently only supported for SLES 12 and higher), Azure VM Restart is utilized to protect an SAP System
against planned and unplanned downtime of the Azure physical server infrastructure and overall underlying
Azure platform.
NOTE
It is important to mention that Azure VM Restart primarily protects VMs and NOT applications. VM Restart does not
offer high availability for SAP applications, but it does offer a certain level of infrastructure availability and therefore
indirectly higher availability of SAP systems. There is also no SLA for the time it will take to restart a VM after a planned
or unplanned host outage. Therefore, this method of high availability is not suitable for critical components of an SAP
system like (A)SCS or DBMS.
Another important infrastructure element for high availability is storage. For example, the Azure Storage SLA is
99.9% availability. If you deploy all VMs with their disks in a single Azure Storage Account, potential Azure
Storage unavailability will cause unavailability of all VMs that are placed in that Azure Storage Account, and
also of all SAP components running inside those VMs.
Instead of putting all VMs into one single Azure Storage Account, you can also use dedicated storage accounts
for each VM, and in this way increase overall VM and SAP application availability by using multiple independent
Azure Storage Accounts.
Azure Managed Disks are automatically placed in the Fault Domain of the virtual machine they are attached to.
If you place two virtual machines in an availability set and use Managed Disks, the platform will take care of
distributing the Managed Disks into different Fault Domains as well. If you plan to use Premium Storage, we
highly recommend using Managed Disks as well.
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and storage accounts
could look like this:
A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and Managed Disks could
look like this:
For critical SAP components we achieved the following so far:
High Availability of SAP Application Servers (AS)
SAP application server instances are redundant components. Each SAP AS instance is deployed in its own
VM, which runs in a different Azure Fault and Upgrade Domain (see chapters Fault Domains and
Upgrade Domains). This is ensured by using Azure Availability Sets (see chapter Azure Availability Sets).
Potential planned or unplanned unavailability of an Azure Fault or Upgrade Domain will cause
unavailability of a restricted number of VMs with their SAP AS instances.
Each SAP AS instance is placed in its own Azure Storage Account; potential unavailability of one Azure
Storage Account will cause unavailability of only one VM with its SAP AS instance. However, be aware
that there is a limit on the number of Azure Storage Accounts within one Azure subscription. To ensure
automatic start of the (A)SCS instance after the VM reboot, make sure to set the Autostart parameter in
the (A)SCS instance start profile, as described in chapter Using Autostart for SAP instances. Please also
read chapter High Availability for SAP Application Servers for more details.
Even if you use Managed Disks, those disks are also stored in an Azure Storage Account and can be
unavailable in the event of a storage outage.
Higher Availability of SAP (A)SCS instance
Here we utilize Azure VM Restart to protect the VM with the installed SAP (A)SCS instance. In the case of
planned or unplanned downtime of Azure servers, VMs are restarted on another available server. As
mentioned earlier, Azure VM Restart primarily protects VMs and NOT applications, in this case the
(A)SCS instance. Through the VM Restart, we indirectly reach higher availability of the SAP (A)SCS
instance. To ensure automatic start of the (A)SCS instance after the VM reboot, make sure to set the
Autostart parameter in the (A)SCS instance start profile, as described in chapter Using Autostart for SAP
instances. This means the (A)SCS instance as a Single Point of Failure (SPOF) running in a single VM will
be the determinative factor for the availability of the whole SAP landscape.
Higher Availability of DBMS Server
Here, similar to the SAP (A)SCS instance use case, we utilize Azure VM Restart to protect the VM with
installed DBMS software, and we achieve higher availability of DBMS software through VM Restart.
DBMS running in a single VM is also a SPOF, and it is the determinative factor for the availability of the
whole SAP landscape.
SAP Application High Availability on Azure IaaS
To achieve full SAP system high availability, we need to protect all critical SAP system components, for example
redundant SAP application servers, and unique components (for example Single Point of Failure) like SAP
(A)SCS instance and DBMS.
High Availability for SAP Application Servers
For the SAP application servers/dialog instances, it's not necessary to think about a specific high-availability
solution. High availability is simply achieved by redundancy, that is, by having enough of them in different
virtual machines. They should all be placed in the same Azure Availability Set to avoid the VMs being
updated at the same time during planned maintenance downtime. The basic functionality which builds on
different Upgrade and Fault Domains within an Azure Scale Unit was already introduced in chapter Upgrade
Domains. Azure Availability Sets were presented in chapter Azure Availability Sets of this document.
There is not an infinite number of Fault and Upgrade Domains that can be used by an Azure Availability Set
within an Azure Scale Unit. This means that if you put a number of VMs into one Availability Set, sooner or
later more than one VM ends up in the same Fault or Upgrade Domain.
Deploying a few SAP application server instances in their dedicated VMs, and assuming five Upgrade
Domains, the following picture emerges. The actual maximum number of Fault and Update Domains within
an Availability Set might change in the future:
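The pigeonhole effect described above can be illustrated with a simple round-robin model; this is a simplification for illustration, not the actual Azure placement algorithm:

```python
def assign_domains(vm_count, update_domains=5, fault_domains=3):
    """Round-robin VMs across update and fault domains (simplified placement model)."""
    return [(vm % update_domains, vm % fault_domains) for vm in range(vm_count)]

placement = assign_domains(7)
for vm, (ud, fd) in enumerate(placement):
    print(f"VM{vm}: update domain {ud}, fault domain {fd}")

# With 7 VMs and only 5 update domains, at least two VMs must share one.
update_domains_used = [ud for ud, _ in placement]
assert len(set(update_domains_used)) < len(update_domains_used)
```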
The architecture for SAP HA on Linux on Azure is basically the same as for Windows as described above. As of
January 2016, there is no supported SAP (A)SCS HA solution on Linux on Azure yet.
As a consequence, as of January 2016, an SAP-Linux-Azure system cannot achieve the same availability as an
SAP-Windows-Azure system, because of the missing HA for the (A)SCS instance and the single-instance SAP
ASE database.
Using Autostart for SAP instances
SAP offers functionality to start SAP instances immediately after the start of the OS within the VM. The
exact steps are documented in SAP Knowledge Base Article 1909114. However, SAP no longer recommends
using the setting, because there is no control over the order of instance restarts if more than one VM is
affected or multiple instances run per VM. Assuming a typical Azure scenario of one SAP application
server instance in a VM, and the case of a single VM eventually getting restarted, Autostart is not really
critical and can be enabled by adding this parameter to the start profile of the SAP ABAP and/or Java
instance:
Autostart = 1
NOTE
The Autostart parameter can have some downsides as well. The parameter triggers the start of an SAP ABAP or
Java instance when the related Windows/Linux service of the instance is started, which certainly is the case when
the operating system boots up. However, restarts of SAP services are also common for SAP Software Lifecycle
Management functionality like SUM or other updates or upgrades. These functionalities do not expect an instance
to be restarted automatically at all. Therefore, the Autostart parameter should be disabled before running such
tasks. The Autostart parameter also should not be used for SAP instances that are clustered, like ASCS/SCS/CI.
NOTE
As of Dec 2015 using VM Backup does NOT keep the unique VM ID which is used for SAP licensing. This means that a
restore from a VM backup requires installation of a new SAP license key as the restored VM is considered to be a new
VM and not a replacement of the former one which was saved.
Windows
Theoretically, VMs that run databases can be backed up in a consistent manner as well if the DBMS system supports the
Windows VSS (Volume Shadow Copy Service
https://msdn.microsoft.com/library/windows/desktop/bb968832(v=vs.85).aspx) as, for example, SQL Server does.
However, be aware that based on Azure VM backups point-in-time restores of databases are not possible. Therefore, the
recommendation is to perform backups of databases with DBMS functionality instead of relying on Azure VM Backup.
To get familiar with Azure Virtual Machine Backup please start here: https://docs.microsoft.com/azure/backup/backup-
azure-vms.
Other possibilities are to use a combination of Microsoft Data Protection Manager installed in an Azure VM and Azure
Backup to backup/restore databases. More information can be found here:
https://docs.microsoft.com/azure/backup/backup-azure-dpm-introduction.
Linux
There is no equivalent to Windows VSS in Linux. Therefore only file-consistent backups are possible but not application-
consistent backups. The SAP DBMS backup should be done using DBMS functionality. The file system which includes the
SAP-related data can be saved, for example, using tar as described here:
http://help.sap.com/saphelp_nw70ehp2/helpdata/en/d3/c0da3ccbb04d35b186041ba6ac301f/content.htm
Summary
The key points of High Availability for SAP systems in Azure are:
At this point in time, the SAP single point of failure cannot be secured exactly the same way as it can be done
in on-premises deployments, because shared-disk clusters can't yet be built in Azure without the use of
third-party software.
For the DBMS layer you need to use DBMS functionality that does not rely on shared disk cluster technology.
Details are documented in the DBMS Guide.
To minimize the impact of problems within Fault Domains in the Azure infrastructure or host maintenance,
you should use Azure Availability Sets:
It is recommended to have one Availability Set for the SAP application layer.
It is recommended to have a separate Availability Set for the SAP DBMS layer.
It is NOT recommended to apply the same Availability set for VMs of different SAP systems.
It is recommended to use Premium Managed Disks.
For Backup purposes of the SAP DBMS layer, please check the DBMS Guide.
Backing up SAP Dialog instances makes little sense since it is usually faster to redeploy simple dialog
instances.
Backing up the VM which contains the global directory of the SAP system, and with it all the profiles of the
different instances, does make sense and should be performed with Windows Backup or, for example, tar on
Linux. Since there are differences between Windows Server 2008 (R2) and Windows Server 2012 (R2) that
make it easier to back up using the more recent Windows Server releases, we recommend running Windows
Server 2012 (R2) as the Windows guest operating system.
High availability for SAP NetWeaver on Azure VMs
8/21/2017 49 min to read Edit Online
Azure Virtual Machines is the solution for organizations that need compute, storage, and network resources, in
minimal time, and without lengthy procurement cycles. You can use Azure Virtual Machines to deploy classic
applications like SAP NetWeaver-based ABAP, Java, and an ABAP+Java stack. Extend reliability and availability
without additional on-premises resources. Azure Virtual Machines supports cross-premises connectivity, so you
can integrate Azure Virtual Machines into your organization's on-premises domains, private clouds, and SAP
system landscape.
In this article, we cover the steps that you can take to deploy high-availability SAP systems in Azure by using the
Azure Resource Manager deployment model. We walk you through these major tasks:
Find the right SAP Notes and installation guides, listed in the Resources section. This article complements SAP
installation documentation and SAP Notes, which are the primary resources that can help you install and deploy
SAP software on specific platforms.
Learn the differences between the Azure Resource Manager deployment model and the Azure classic
deployment model.
Learn about Windows Server Failover Clustering quorum modes, so you can select the model that is right for
your Azure deployment.
Learn about Windows Server Failover Clustering shared storage in Azure services.
Learn how to help protect single-point-of-failure components like Advanced Business Application
Programming (ABAP) SAP Central Services (ASCS)/SAP Central Services (SCS) and database management
systems (DBMS), and redundant components like SAP Application Server, in Azure.
Follow a step-by-step example of an installation and configuration of a high-availability SAP system in a
Windows Server Failover Clustering cluster in Azure by using Azure Resource Manager.
Learn about additional steps required to use Windows Server Failover Clustering in Azure, but which are not
needed in an on-premises deployment.
To simplify deployment and configuration, in this article, we use the SAP three-tier high-availability Resource
Manager templates. The templates automate deployment of the entire infrastructure that you need for a high-
availability SAP system. The infrastructure also supports SAP Application Performance Standard (SAPS) sizing of
your SAP system.
Prerequisites
Before you start, make sure that you meet the prerequisites that are described in the following sections. Also, be
sure to check all resources listed in the Resources section.
In this article, we use Azure Resource Manager templates for three-tier SAP NetWeaver. For a helpful overview of
templates, see SAP Azure Resource Manager templates.
Resources
These articles cover SAP deployments in Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines DBMS deployment for SAP NetWeaver
Azure Virtual Machines high availability for SAP NetWeaver (this guide)
NOTE
Whenever possible, we give you a link to the referring SAP installation guide (see the SAP installation guides). For
prerequisites and information about the installation process, it's a good idea to read the SAP NetWeaver installation guides
carefully. This article covers only specific tasks for SAP NetWeaver-based systems that you can use with Azure Virtual
Machines.
2243692 Use of Azure Premium SSD Storage for SAP DBMS Instance
Learn more about the limitations of Azure subscriptions, including general default limitations and maximum
limitations.
NOTE
You don't need shared disks for high availability with some DBMS applications, like with SQL Server. SQL Server Always On
replicates DBMS data and log files from the local disk of one cluster node to the local disk of another cluster node. In that
case, the Windows cluster configuration doesn't need a shared disk.
Figure 2: Windows Server Failover Clustering configuration in Azure without a shared disk
Shared disk in Azure with SIOS DataKeeper
You need cluster shared storage for a high-availability SAP ASCS/SCS instance. As of September 2016, Azure
doesn't offer shared storage that you can use to create a shared storage cluster. You can use the third-party
software SIOS DataKeeper Cluster Edition to create mirrored storage that simulates cluster shared storage. The
SIOS solution provides real-time synchronous data replication. This is how you can create a shared disk resource
for a cluster:
1. Attach an additional Azure virtual hard disk (VHD) to each of the virtual machines (VMs) in a Windows cluster
configuration.
2. Run SIOS DataKeeper Cluster Edition on both virtual machine nodes.
3. Configure SIOS DataKeeper Cluster Edition so that it mirrors the content of the additional VHD attached volume
from the source virtual machine to the additional VHD attached volume of the target virtual machine. SIOS
DataKeeper abstracts the source and target local volumes, and then presents them to Windows Server Failover
Clustering as one shared disk.
Get more information about SIOS DataKeeper.
Figure 3: Windows Server Failover Clustering configuration in Azure with SIOS DataKeeper
NOTE
You don't need shared disks for high availability with some DBMS products, like SQL Server. SQL Server Always On replicates
DBMS data and log files from the local disk of one cluster node to the local disk of another cluster node. In this case, the
Windows cluster configuration doesn't need a shared disk.
Figure 6: Windows Server Failover Clustering for an SAP ASCS/SCS configuration in Azure with SIOS DataKeeper
High-availability DBMS instance
The DBMS also is a single point of failure in an SAP system. You need to protect it by using a high-availability
solution. Figure 7 shows a SQL Server Always On high-availability solution in Azure, with Windows Server Failover
Clustering and the Azure internal load balancer. SQL Server Always On replicates DBMS data and log files by using
its own DBMS replication. In this case, you don't need cluster shared disks, which simplifies the entire setup.
NOTE
All IP addresses of the network cards and Azure internal load balancers are dynamic by default. Change them to static IP
addresses. We describe how to do this later in the article.
Deploy virtual machines with corporate network connectivity (cross-premises) to use in production
For production SAP systems, deploy Azure virtual machines with corporate network connectivity (cross-premises)
by using Azure Site-to-Site VPN or Azure ExpressRoute.
NOTE
You can use your Azure Virtual Network instance. The virtual network and subnet have already been created and prepared.
1. In the Azure portal, on the Parameters blade, in the NEWOREXISTINGSUBNET box, select existing.
2. In the SUBNETID box, add the full string of your prepared Azure network SubnetID where you plan to deploy
your Azure virtual machines. A SubnetID looks like this:
/subscriptions/<SubscriptionId>/resourceGroups/<VPNName>/providers/Microsoft.Network/virtualNetworks/azureVnet/subnets/<SubnetName>
3. To get a list of all Azure network subnets, you can use PowerShell, for example via the
Get-AzureRmVirtualNetwork cmdlet.
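The SubnetID is simply the full Azure resource ID of the subnet. An illustrative helper that assembles it from its parts (the parameter values are placeholders from the path above, not real resources):

```python
def subnet_id(subscription_id, resource_group, vnet_name, subnet_name):
    """Assemble the full Azure resource ID of a virtual network subnet."""
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Network/virtualNetworks/{vnet_name}"
        f"/subnets/{subnet_name}"
    )

print(subnet_id("<SubscriptionId>", "<VPNName>", "azureVnet", "<SubnetName>"))
```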
NOTE
You also need to deploy at least one dedicated virtual machine for Active Directory and DNS in the same Azure Virtual
Network instance. The template doesn't create these virtual machines.
IMPORTANT
Don't make any changes to the network settings inside the guest operating system. This includes IP addresses, DNS servers,
and subnet. Configure all your network settings in Azure. The Dynamic Host Configuration Protocol (DHCP) service
propagates your settings.
DNS IP addresses
To set the required DNS IP addresses, follow these steps.
1. In the Azure portal, on the DNS servers blade, make sure that your virtual network DNS servers option is set
to Custom DNS.
2. Select your settings based on the type of network you have. For more information, see the following
resources:
Corporate network connectivity (cross-premises): Add the IP addresses of the on-premises DNS servers.
You can extend on-premises DNS servers to the virtual machines that are running in Azure. In that
scenario, you can add the IP addresses of the Azure virtual machines on which you run the DNS service.
Cloud-only deployment: Deploy an additional virtual machine in the same Virtual Network instance that
serves as a DNS server. Add the IP addresses of the Azure virtual machines that you've set up to run DNS
service.
NOTE
If you change the IP addresses of the DNS servers, you need to restart the Azure virtual machines to apply the
change and propagate the new DNS servers.
In our example, the DNS service is installed and configured on these Windows virtual machines:
VIRTUAL MACHINE ROLE | VIRTUAL MACHINE HOST NAME | NETWORK CARD NAME | STATIC IP ADDRESS
Host names and static IP addresses for the SAP ASCS/SCS clustered instance and DBMS clustered instance
For on-premises deployment, you need these reserved host names and IP addresses:
VIRTUAL HOST NAME ROLE | VIRTUAL HOST NAME | VIRTUAL STATIC IP ADDRESS
When you create the cluster, create the virtual host names pr1-ascs-vir and pr1-dbms-vir and the associated IP
addresses that manage the cluster itself. For information about how to do this, see Collect cluster nodes in a cluster
configuration.
You can manually create the other two virtual host names, pr1-ascs-sap and pr1-dbms-sap, and the associated IP
addresses, on the DNS server. The clustered SAP ASCS/SCS instance and the clustered DBMS instance use these
resources. For information about how to do this, see Create a virtual host name for a clustered SAP ASCS/SCS
instance.
Set static IP addresses for the SAP virtual machines
After you deploy the virtual machines to use in your cluster, you need to set static IP addresses for all virtual
machines. Do this in the Azure Virtual Network configuration, and not in the guest operating system.
1. In the Azure portal, select Resource Group > Network Card > Settings > IP Address.
2. On the IP addresses blade, under Assignment, select Static. In the IP address box, enter the IP address
that you want to use.
NOTE
If you change the IP address of the network card, you need to restart the Azure virtual machines to apply the
change.
Figure 13: Set static IP addresses for the network card of each virtual machine
Repeat this step for all network interfaces, that is, for all virtual machines, including virtual machines that
you want to use for your Active Directory/DNS service.
In our example, we have these virtual machines and static IP addresses:
Figure 14: Set static IP addresses for the internal load balancer for the SAP ASCS/SCS instance
In our example, we have two Azure internal load balancers that have these static IP addresses:
AZURE INTERNAL LOAD BALANCER ROLE | AZURE INTERNAL LOAD BALANCER NAME | STATIC IP ADDRESS
Default ASCS/SCS load balancing rules for the Azure internal load balancer
The SAP Azure Resource Manager template creates the ports you need for:
An ABAP ASCS instance, with the default instance number 00
A Java SCS instance, with the default instance number 01
When you install your SAP ASCS/SCS instance, you must use the default instance number 00 for your ABAP ASCS
instance and the default instance number 01 for your Java SCS instance.
Next, create required internal load balancing endpoints for the SAP NetWeaver ports.
To create required internal load balancing endpoints, first, create these load balancing endpoints for the SAP
NetWeaver ABAP ASCS ports:
SERVICE/LOAD BALANCING RULE NAME | DEFAULT PORT NUMBERS | CONCRETE PORTS FOR (ASCS INSTANCE WITH INSTANCE NUMBER 00) (ERS WITH INSTANCE NUMBER 10)
Figure 15: Default ASCS/SCS load balancing rules for the Azure internal load balancer
Set the IP address of the load balancer pr1-lb-dbms to the IP address of the virtual host name of the DBMS
instance.
Change the ASCS/SCS default load balancing rules for the Azure internal load balancer
If you want to use different numbers for the SAP ASCS or SCS instances, you must change the names and values of
their ports from default values.
1. In the Azure portal, select <SID>-lb-ascs load balancer > Load Balancing Rules.
2. For all load balancing rules that belong to the SAP ASCS or SCS instance, change these values:
Name
Port
Back-end port
For example, if you want to change the default ASCS instance number from 00 to 31, you need to make the
changes for all ports listed in Table 1.
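As a cross-check when renumbering, the concrete port values can be derived from the standard SAP port patterns (32<nr>, 36<nr>, 39<nr>, 81<nr>, 5<nr>13, 5<nr>14, 5<nr>16; the same patterns appear in the load-balancing rules later in this article). A small sketch for instance number 31:

```shell
# Derive the SAP ASCS/SCS port numbers for instance number 31
# from the standard SAP port patterns.
nr=31
for pattern in "32%s" "36%s" "39%s" "81%s" "5%s13" "5%s14" "5%s16"; do
  printf "$pattern\n" "$nr"
done
```

This prints 3231, 3631, 3931, 8131, 53113, 53114, and 53116 — the values to use for the Name, Port, and Back-end port fields of each rule.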
Here's an example of an update for port lbrule3200.
Figure 16: Change the ASCS/SCS default load balancing rules for the Azure internal load balancer
Add Windows virtual machines to the domain
After you assign a static IP address to the virtual machines, add the virtual machines to the domain.
Path: HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: 120000
Path: HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value: 120000
Figure 25: Cluster core service is up and running with the correct IP address
7. Add the second cluster node.
Now that the core cluster service is up and running, you can add the second cluster node.
IMPORTANT
Be sure that the Add all eligible storage to the cluster check box is NOT selected.
1. Select a file share witness instead of a quorum disk. SIOS DataKeeper supports this option.
In the examples in this article, the file share witness is on the Active Directory/DNS server that is running in
Azure. The file share witness is called domcontr-0. Even if you have configured a VPN connection
to Azure (via Site-to-Site VPN or Azure ExpressRoute), an Active Directory/DNS service that runs only
on-premises isn't suitable to run a file share witness.
NOTE
If your Active Directory/DNS service runs only on-premises, don't configure your file share witness on the Active
Directory/DNS Windows operating system that is running on-premises. Network latency between cluster nodes
running in Azure and Active Directory/DNS on-premises might be too large and cause connectivity issues. Be sure to
configure the file share witness on an Azure virtual machine that is running close to the cluster node.
The quorum drive needs at least 1,024 MB of free space. We recommend 2,048 MB of free space for the
quorum drive.
2. Add the cluster name object.
Figure 30: Assign the permissions on the share for the cluster name object
Be sure that the permissions include the authority to change data in the share for the cluster name object (in
our example, pr1-ascs-vir$).
3. To add the cluster name object to the list, select Add. Change the filter to check for computer objects, in
addition to those shown in Figure 31.
Figure 33: Set the security attributes for the cluster name object on the file share quorum
Set the file share witness quorum in Failover Cluster Manager
Figure 37: Define the file share location for the witness share
5. Select the changes you want, and then select Next. You need to successfully reconfigure the cluster
configuration as shown in Figure 38.
Figure 40: Installation progress bar when you install the .NET Framework 3.5 by using the Add Roles and
Features Wizard
Use the command-line tool dism.exe. For this type of installation, you need to access the SxS directory on
the Windows installation media. At an elevated command prompt, type the following (replace <drive> with the drive letter of the installation media):
Dism /online /enable-feature /featurename:NetFx3 /All /LimitAccess /Source:<drive>:\sources\sxs
Figure 44: Enter the domain user name and password for the SIOS DataKeeper installation
5. Install the license key for your SIOS DataKeeper instance as shown in Figure 45.
Figure 45: Enter your SIOS DataKeeper license key
6. When prompted, restart the virtual machine.
Set up SIOS DataKeeper
After you install SIOS DataKeeper on both nodes, you need to start the configuration. The goal of the configuration
is to have synchronous data replication between the additional VHDs attached to each of the virtual machines.
1. Start the DataKeeper Management and Configuration tool, and then select Connect Server. (In Figure 46,
this option is circled in red.)
Figure 50: Define the base data for the node, which should be the current source node
5. Define the name, TCP/IP address, and disk volume of the target node.
Figure 51: Define the base data for the node, which should be the current target node
6. Define the compression algorithms. In our example, we recommend that you compress the replication
stream. Especially in resynchronization situations, the compression of the replication stream dramatically
reduces resynchronization time. Note that compression uses the CPU and RAM resources of a virtual
machine. As the compression rate increases, so does the volume of CPU resources used. You also can adjust
this setting later.
7. Another setting you need to check is whether the replication occurs asynchronously or synchronously.
When you protect SAP ASCS/SCS configurations, you must use synchronous replication.
Figure 54: DataKeeper synchronous mirroring for the SAP ASCS/SCS shared disk is active
Failover Cluster Manager now shows the disk as a DataKeeper disk, as shown in Figure 55.
Figure 55: Failover Cluster Manager shows the disk that DataKeeper replicated
IMPORTANT
Be sure not to place your page file on DataKeeper mirrored volumes; DataKeeper doesn't support page files on the volumes it mirrors. You
can leave your page file on the temporary drive D of an Azure virtual machine, which is the default. If it's not already there,
move the Windows page file to drive D of your Azure virtual machine.
IMPORTANT
The IP address that you assign to the virtual host name of the ASCS/SCS instance must be the same as the IP
address that you assigned to Azure Load Balancer (<SID>-lb-ascs).
The IP address of the virtual SAP ASCS/SCS host name (pr1-ascs-sap) is the same as the IP address of
Azure Load Balancer (pr1-lb-ascs).
Figure 56: Define the DNS entry for the SAP ASCS/SCS cluster virtual name and TCP/IP address
2. To define the IP address assigned to the virtual host name, select DNS Manager > Domain.
Figure 57: New virtual name and TCP/IP address for SAP ASCS/SCS cluster configuration
Install the SAP first cluster node
1. Execute the first cluster node option on cluster node A. For example, on the pr1-ascs-0 host.
2. To keep the default ports for the Azure internal load balancer, select:
ABAP system: ASCS instance number 00
Java system: SCS instance number 01
ABAP+Java system: ASCS instance number 00 and SCS instance number 01
To use instance numbers other than 00 for the ABAP ASCS instance and 01 for the Java SCS instance, first
you need to change the Azure internal load balancer default load balancing rules, described in Change the
ASCS/SCS default load balancing rules for the Azure internal load balancer.
NOTE
The SAP installation documentation describes how to install the first ASCS/SCS cluster node.
The next few tasks aren't described in the standard SAP installation documentation.
1. Add the following profile parameter to the SAP ASCS/SCS instance profile, for example to the SAP SCS instance profile at <ShareDisk>:\usr\sap\PR1\SYS\profile\PR1_SCS01_pr1-ascs-sap:
enque/encni/set_so_keepalive = true
2. Define a probe port. The default probe port number is 0. In our example, we use probe port 62000.
$SAPSID = "PR1"     # SAP <SID>
$ProbePort = 62000  # Probe port of the Azure internal load balancer

Clear-Host
$SAPClusterRoleName = "SAP $SAPSID"
$SAPIPresourceName = "SAP $SAPSID IP"
$SAPIPResourceClusterParameters = Get-ClusterResource $SAPIPresourceName | Get-ClusterParameter
$IPAddress = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Address" }).Value
$NetworkName = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "Network" }).Value
$SubnetMask = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "SubnetMask" }).Value
$OverrideAddressMatch = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "OverrideAddressMatch" }).Value
$EnableDhcp = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "EnableDhcp" }).Value
$OldProbePort = ($SAPIPResourceClusterParameters | Where-Object {$_.Name -eq "ProbePort" }).Value

Write-Host "Current configuration parameters for SAP IP cluster resource '$SAPIPresourceName' are:" -ForegroundColor Cyan
Get-ClusterResource -Name $SAPIPresourceName | Get-ClusterParameter
Write-Host
Write-Host "Current probe port property of the SAP cluster resource '$SAPIPresourceName' is '$OldProbePort'." -ForegroundColor Cyan
Write-Host
Write-Host "Setting the new probe port property of the SAP cluster resource '$SAPIPresourceName' to '$ProbePort' ..." -ForegroundColor Cyan
Get-ClusterResource $SAPIPresourceName | Set-ClusterParameter -Multiple @{"Address"=$IPAddress;"ProbePort"=$ProbePort;"SubnetMask"=$SubnetMask;"Network"=$NetworkName;"OverrideAddressMatch"=$OverrideAddressMatch;"EnableDhcp"=$EnableDhcp}
Write-Host

$ActivateChanges = Read-Host "Do you want to restart the SAP cluster role '$SAPClusterRoleName' to activate the changes (yes/no)?"
Write-Host
if ($ActivateChanges -eq "yes") {
    Write-Host "Taking SAP cluster IP resource '$SAPIPresourceName' offline ..." -ForegroundColor Cyan
    Stop-ClusterResource -Name $SAPIPresourceName
    sleep 5
    # Bring the SAP cluster role online again to activate the new probe port
    Start-ClusterGroup -Name $SAPClusterRoleName
}
After you bring the SAP <SID> cluster role online, verify that ProbePort is set to the new value.
$SAPSID = "PR1" # SAP <SID>
Get-ClusterResource "SAP $SAPSID IP" | Get-ClusterParameter ProbePort
Figure 59: Probe the cluster port after you set the new value
Open the Windows firewall probe port
You need to open a Windows firewall probe port on both cluster nodes. Use the following script to open a
Windows firewall probe port. Update the PowerShell variables for your environment.
$ProbePort = 62000 # Probe port of the Azure internal load balancer
New-NetFirewallRule -Name AzureProbePort -DisplayName "Rule for Azure Probe Port" -Direction Inbound -Action Allow -Protocol TCP -LocalPort $ProbePort
The ProbePort is set to 62000. Now you can access the file share \\ascsha-clsap\sapmnt from other hosts, such
as from ascsha-dbas.
Install the database instance
To install the database instance, follow the process described in the SAP installation documentation.
Install the second cluster node
To install the second cluster node, follow the steps in the SAP installation guide.
Change the start type of the SAP ERS Windows service instance
Change the start type of the SAP ERS Windows service to Automatic (Delayed Start) on both cluster nodes.
Figure 60: Change the service type for the SAP ERS instance to delayed automatic
Install the SAP Primary Application Server
Install the Primary Application Server (PAS) instance <SID>-di-0 on the virtual machine that you've designated to
host the PAS. There are no dependencies on Azure or DataKeeper-specific settings.
Install the SAP Additional Application Server
Install an SAP Additional Application Server (AAS) on all the virtual machines that you've designated to host an
SAP Application Server instance. For example, on <SID>-di-1 to <SID>-di-<n>.
NOTE
This finishes the installation of a high-availability SAP NetWeaver system. Next, proceed with failover testing.
Figure 62: In SIOS DataKeeper, replicate the local volume from cluster node A to cluster node B
Failover from node A to node B
1. Choose one of these options to initiate a failover of the SAP <SID> cluster group from cluster node A to
cluster node B:
Use Failover Cluster Manager
Use Failover Cluster PowerShell
$SAPSID = "PR1" # SAP <SID>
2. Restart cluster node A within the Windows guest operating system (this initiates an automatic failover of the
SAP <SID> cluster group from node A to node B).
3. Restart cluster node A from the Azure portal (this initiates an automatic failover of the SAP <SID> cluster group
from node A to node B).
4. Restart cluster node A by using Azure PowerShell (this initiates an automatic failover of the SAP <SID>
cluster group from node A to node B).
After failover, the SAP <SID> cluster group is running on cluster node B. For example, it's running on pr1-
ascs-1.
Figure 63: In Failover Cluster Manager, the SAP <SID> cluster group is running on cluster node B
The shared disk is now mounted on cluster node B. SIOS DataKeeper is replicating data from source volume
drive S on cluster node B to target volume drive S on cluster node A. For example, it's replicating from pr1-
ascs-1 [10.0.0.41] to pr1-ascs-0 [10.0.0.40].
Figure 64: SIOS DataKeeper replicates the local volume from cluster node B to cluster node A
High availability for SAP NetWeaver on Azure VMs
on SUSE Linux Enterprise Server for SAP applications
7/31/2017 32 min to read Edit Online
This article describes how to deploy and configure the virtual machines, install the cluster framework, and
install a highly available SAP NetWeaver 7.50 system. In the example configurations and installation commands,
ASCS instance number 00, ERS instance number 02, and SAP system ID NWS are used. The names of the
resources (for example, virtual machines and virtual networks) in the example assume that you have used the
converged template with SAP system ID NWS to create the resources.
Read the following SAP Notes and papers first
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2205917 has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications.
SAP Note 1944799 has SAP HANA guidelines for SUSE Linux Enterprise Server for SAP Applications.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring Extension
for SAP.
SAP Community WIKI has all required SAP Notes for Linux.
Azure Virtual Machines planning and implementation for SAP on Linux
Azure Virtual Machines deployment for SAP on Linux (this article)
Azure Virtual Machines DBMS deployment for SAP on Linux
SAP HANA SR Performance Optimized Scenario
The guide contains all required information to set up SAP HANA System Replication on-premises. Use this guide
as a baseline.
Highly Available NFS Storage with DRBD and Pacemaker
The guide contains all required information to set up a highly available NFS server. Use this guide as a baseline.
Overview
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS server is configured in a separate
cluster and can be used by multiple SAP systems.
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use
virtual host names and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. The
following list shows the configuration of the load balancer.
NFS Server
Frontend configuration
IP address 10.0.0.4
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the NFS cluster
Probe Port
Port 61000
Load balancing rules
2049 TCP
2049 UDP
(A)SCS
Frontend configuration
IP address 10.0.0.10
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
Probe Port
Port 620<nr>
Load balancing rules
32<nr> TCP
36<nr> TCP
39<nr> TCP
81<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
ERS
Frontend configuration
IP address 10.0.0.11
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS
cluster
Probe Port
Port 621<nr>
Load balancing rules
33<nr> TCP
5<nr>13 TCP
5<nr>14 TCP
5<nr>16 TCP
SAP HANA
Frontend configuration
IP address 10.0.0.12
Backend configuration
Connected to primary network interfaces of all virtual machines that should be part of the HANA cluster
Probe Port
Port 625<nr>
Load balancing rules
3<nr>15 TCP
3<nr>17 TCP
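To make the <nr> placeholders above concrete, here is a sketch that expands them for the example instance numbers used in this article (ASCS 00, ERS 02, HANA 03 — the HANA instance number is taken from the HDB03 examples later in this article):

```shell
# Expand the <nr> placeholders of the load-balancer rules above
# for the example instance numbers: ASCS=00, ERS=02, HANA=03.
expand() { echo "$2" | sed "s/<nr>/$1/g"; }
expand 00 "ASCS probe 620<nr>; rules 32<nr> 36<nr> 39<nr> 81<nr> 5<nr>13 5<nr>14 5<nr>16"
expand 02 "ERS probe 621<nr>; rules 33<nr> 5<nr>13 5<nr>14 5<nr>16"
expand 03 "HANA probe 625<nr>; rules 3<nr>15 3<nr>17"
```

For example, HANA port 3<nr>15 expands to 30315 for instance 03, which matches the port used in the hdbuserstore commands later in this article.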
# insert the public key you copied in the last step into the authorized keys file on the second server
sudo vi /root/.ssh/authorized_keys
# insert the public key you copied in the last step into the authorized keys file on the first server
sudo vi /root/.ssh/authorized_keys
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
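For example, for the two NFS cluster nodes — the host names and IP addresses here are taken from the DRBD resource configuration shown later in this section; adjust them to your environment:

```text
# IP addresses of the NFS cluster nodes
10.0.0.5 prod-nfs-0
10.0.0.6 prod-nfs-1
```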
sudo ha-cluster-init
sudo ha-cluster-join
10. [A] Configure corosync to use another transport, and add a nodelist. The cluster will not work otherwise.
sudo vi /etc/corosync/corosync.conf
sudo vi /etc/drbd.d/NWS_nfs.res
Insert the configuration for the new drbd device and exit
resource NWS_nfs {
protocol C;
disk {
on-io-error pass_on;
}
on prod-nfs-0 {
address 10.0.0.5:7790;
device /dev/drbd0;
disk /dev/vg_NFS/NWS;
meta-disk internal;
}
on prod-nfs-1 {
address 10.0.0.6:7790;
device /dev/drbd0;
disk /dev/vg_NFS/NWS;
meta-disk internal;
}
}
17. [1] Wait until the new drbd devices are synchronized
crm(live)configure# commit
crm(live)configure# exit
crm(live)configure# commit
crm(live)configure# exit
crm(live)configure# commit
crm(live)configure# exit
crm(live)configure# commit
crm(live)configure# exit
crm(live)configure# commit
crm(live)configure# exit
6. [1] Create a virtual IP resource and health-probe for the internal load balancer
crm(live)configure# commit
crm(live)configure# exit
# replace the bold string with your subscription id, resource group, tenant id, service principal id and
password
crm(live)configure# commit
crm(live)configure# exit
# insert the public key you copied in the last step into the authorized keys file on the second server
sudo vi /root/.ssh/authorized_keys
# insert the public key you copied in the last step into the authorized keys file on the first server
sudo vi /root/.ssh/authorized_keys
If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the SUSE
download page
# example for patch for SLES 12 SP1
sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1
# example for patch for SLES 12 SP2
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
sudo ha-cluster-init
sudo ha-cluster-join
11. [A] Configure corosync to use another transport, and add a nodelist. The cluster will not work otherwise.
sudo vi /etc/corosync/corosync.conf
[...]
interface {
[...]
}
transport: udpu
}
nodelist {
node {
# IP address of nws-cl-0
ring0_addr: 10.0.0.14
}
node {
# IP address of nws-cl-1
ring0_addr: 10.0.0.13
}
}
logging {
[...]
sudo vi /etc/drbd.d/NWS_ascs.res
Insert the configuration for the new drbd device and exit
resource NWS_ascs {
protocol C;
disk {
on-io-error pass_on;
}
on nws-cl-0 {
address 10.0.0.14:7791;
device /dev/drbd0;
disk /dev/vg_NWS/NWS_ASCS;
meta-disk internal;
}
on nws-cl-1 {
address 10.0.0.13:7791;
device /dev/drbd0;
disk /dev/vg_NWS/NWS_ASCS;
meta-disk internal;
}
}
sudo vi /etc/drbd.d/NWS_ers.res
Insert the configuration for the new drbd device and exit
resource NWS_ers {
protocol C;
disk {
on-io-error pass_on;
}
on nws-cl-0 {
address 10.0.0.14:7792;
device /dev/drbd1;
disk /dev/vg_NWS/NWS_ERS;
meta-disk internal;
}
on nws-cl-1 {
address 10.0.0.13:7792;
device /dev/drbd1;
disk /dev/vg_NWS/NWS_ERS;
meta-disk internal;
}
}
19. [1] Wait until the new drbd devices are synchronized
crm(live)configure# commit
crm(live)configure# exit
sudo vi /etc/auto.master
sudo vi /etc/auto.direct
sudo vi /etc/waagent.conf
crm(live)configure# commit
crm(live)configure# exit
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r
3. [1] Create a virtual IP resource and health-probe for the internal load balancer
crm(live)configure# commit
# WARNING: Resources nc_NWS_ASCS,nc_NWS_ERS,nc_NWS_nfs violate uniqueness for parameter "binfile":
"/usr/bin/nc"
# Do you still want to commit (y/n)? y
crm(live)configure# exit
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r
NOTE
Please use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will
fail.
sudo vi /sapmnt/NWS/profile/NWS_ASCS00_nws-ascs
ERS profile
sudo vi /sapmnt/NWS/profile/NWS_ERS02_nws-ers
8. [1] Add the ASCS and ERS SAP services to the sapservices file.
Add the ASCS service entry to the second node and copy the ERS service entry to the first node.
crm(live)configure# commit
crm(live)configure# exit
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r
# replace the bold string with your subscription id, resource group, tenant id, service principal id and
password
crm(live)configure# commit
crm(live)configure# exit
Install database
In this example, SAP HANA system replication is installed and configured. SAP HANA runs in the same
cluster as the SAP NetWeaver ASCS/SCS and ERS. You can also install SAP HANA in a dedicated cluster. For more information, see High
Availability of SAP HANA on Azure Virtual Machines (VMs).
Prepare for SAP HANA installation
We generally recommend using LVM for volumes that store data and log files. For testing purposes, you can also
choose to store the data and log file directly on a plain disk.
1. [A] LVM
The example below assumes that the virtual machines have four data disks attached that should be used to
create two volumes.
Create physical volumes for all disks that you want to use.
Create one volume group for the data files, one volume group for the log files, and one for the shared directory
of SAP HANA.
Create the mount directories and copy the UUIDs of all logical volumes.
sudo vi /etc/auto.direct
PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbsql -u system -i 03 'CREATE USER hdbhasync PASSWORD "passwd"'
hdbsql -u system -i 03 'GRANT DATA ADMIN TO hdbhasync'
hdbsql -u system -i 03 'ALTER USER hdbhasync DISABLE PASSWORD LIFETIME'
PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbuserstore SET hdbhaloc localhost:30315 hdbhasync <passwd>
PATH="$PATH:/usr/sap/HDB/HDB03/exe"
hdbsql -u system -i 03 "BACKUP DATA USING FILE ('initialbackup')"
6. [1] Switch to the SAP HANA administrator user (hdbadm in this example) and create the primary site.
su - hdbadm
hdbnsutil -sr_enable -name=SITE1
7. [2] Switch to the SAP HANA administrator user (hdbadm in this example) and create the secondary site.
su - hdbadm
sapcontrol -nr 03 -function StopWait 600 10
hdbnsutil -sr_register --remoteHost=nws-cl-0 --remoteInstance=03 --replicationMode=sync --name=SITE2
# replace the bold string with your instance number and HANA system id
crm(live)configure# commit
crm(live)configure# exit
# replace the bold string with your instance number, HANA system id and the frontend IP address of the
Azure load balancer.
crm(live)configure# commit
crm(live)configure# exit
Make sure that the cluster status is ok and that all resources are started. It is not important on which node
the resources are running.
sudo crm_mon -r
sudo vi /etc/hosts
Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
# IP address of the load balancer frontend configuration for NFS
10.0.0.4 nws-nfs
# IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
10.0.0.10 nws-ascs
# IP address of the load balancer frontend configuration for SAP NetWeaver ERS
10.0.0.11 nws-ers
# IP address of the load balancer frontend configuration for database
10.0.0.12 nws-db
# IP address of the application server
10.0.0.8 nws-di-0
3. Configure autofs
sudo vi /etc/auto.master
sudo vi /etc/auto.direct
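As a sketch of what the two autofs files might contain — the NFS export path NWS/sapmnt and the mount options are assumptions; use the export and options that your NFS cluster actually provides:

```text
# /etc/auto.master — enable a direct map
+auto.master
/- /etc/auto.direct

# /etc/auto.direct — mount sapmnt from the highly available NFS server nws-nfs
/sapmnt/NWS -nfsvers=4,nosymlink,sync nws-nfs:/NWS/sapmnt
```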
su - nwsadm
hdbuserstore SET DEFAULT nws-db:30315 SAPABAP1 <password of ABAP schema>
Next steps
Azure Virtual Machines planning and implementation for SAP
Azure Virtual Machines deployment for SAP
Azure Virtual Machines DBMS deployment for SAP
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large
instances), see SAP HANA (large instances) high availability and disaster recovery on Azure.
To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see High
Availability of SAP HANA on Azure Virtual Machines (VMs)
Create an SAP NetWeaver multi-SID configuration
8/21/2017 7 min to read Edit Online
In September 2016, Microsoft released a feature that lets you manage multiple virtual IP addresses by using an
Azure internal load balancer. This functionality already existed in the Azure external load balancer.
If you have an SAP deployment, you can use an internal load balancer to create a Windows cluster configuration
for SAP ASCS/SCS, as documented in the guide for high-availability SAP NetWeaver on Windows VMs.
This article focuses on how to move from a single ASCS/SCS installation to an SAP multi-SID configuration by
installing additional SAP ASCS/SCS clustered instances into an existing Windows Server Failover Clustering
(WSFC) cluster. When this process is completed, you will have configured an SAP multi-SID cluster.
NOTE
This feature is available only in the Azure Resource Manager deployment model.
Prerequisites
You have already configured a WSFC cluster that is used for one SAP ASCS/SCS instance, as discussed in the guide
for high-availability SAP NetWeaver on Windows VMs and as shown in this diagram.
Target architecture
The goal is to install multiple SAP ABAP ASCS or SAP Java SCS clustered instances in the same WSFC cluster, as
illustrated here:
NOTE
There is a limit to the number of private front-end IPs for each Azure internal load balancer.
The maximum number of SAP ASCS/SCS instances in one WSFC cluster is equal to the maximum number of private front-end
IPs for each Azure internal load balancer.
For more information about load-balancer limits, see "Private front end IP per load balancer" in Networking limits:
Azure Resource Manager.
The complete landscape with two high-availability SAP systems would look like this:
IMPORTANT
The setup must meet the following conditions:
The SAP ASCS/SCS instances must share the same WSFC cluster.
Each DBMS SID must have its own dedicated WSFC cluster.
SAP application servers that belong to one SAP system SID must have their own dedicated VMs.
NOTE
For SAP ASCS/SCS cluster instances, each IP address requires a unique probe port. For example, if one IP address on an Azure
internal load balancer uses probe port 62300, no other IP address on that load balancer can use probe port 62300.
For our purposes, because probe port 62300 is already reserved, we are using probe port 62350.
You can install additional SAP ASCS/SCS instances in the existing WSFC cluster with two nodes:
Create a virtual host name for the clustered SAP ASCS/SCS instance on the DNS server
You can create a DNS entry for the virtual host name of the ASCS/SCS instance by using the following parameters:
pr5-sap-cl 10.0.0.50
The new host name and IP address are displayed in the DNS Manager, as shown in the following screenshot:
The procedure for creating a DNS entry is also described in detail in the main guide for high-availability SAP
NetWeaver on Windows VMs.
NOTE
The new IP address that you assign to the virtual host name of the additional ASCS/SCS instance must be the same as the
new IP address that you assigned to the SAP Azure load balancer.
In our scenario, the IP address is 10.0.0.50.
$count = $ILB.FrontendIpConfigurations.Count + 1
$FrontEndConfigurationName ="lbFrontendASCS$count"
$LBProbeName = "lbProbeASCS$count"
Write-Host "Creating load balancing rules for the ports: '$Ports' ... " -ForegroundColor Green
$ILB | Set-AzureRmLoadBalancer
Write-Host "Successfully added new IP '$ILBIP' to the internal load balancer '$ILBName'!" -ForegroundColor Green
After the script has run, the results are displayed in the Azure portal, as shown in the following screenshot:
Add disks to cluster machines, and configure the SIOS cluster share disk
You must add a new cluster share disk for each additional SAP ASCS/SCS instance. For Windows Server 2012 R2,
the WSFC cluster share disk currently in use is the SIOS DataKeeper software solution.
Do the following:
1. Add an additional disk or disks of the same size (which you need to stripe) to each of the cluster nodes, and
format them.
2. Configure storage replication with SIOS DataKeeper.
This procedure assumes that you have already installed SIOS DataKeeper on the WSFC cluster machines. You now configure replication between the machines. The process is described in detail in
the main guide for high-availability SAP NetWeaver on Windows VMs.
Next steps
Networking limits: Azure Resource Manager
Multiple VIPs for Azure Load Balancer
Guide for high-availability SAP NetWeaver on Windows VMs
Azure Virtual Machines deployment for SAP
NetWeaver
8/21/2017
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using the Resource Manager deployment model, which Microsoft recommends for new deployments
instead of the classic deployment model.
Azure Virtual Machines is the solution for organizations that need compute and storage resources, in minimal
time, and without lengthy procurement cycles. You can use Azure Virtual Machines to deploy classical
applications, like SAP NetWeaver-based applications, in Azure. Extend an application's reliability and availability
without additional on-premises resources. Azure Virtual Machines supports cross-premises connectivity, so
you can integrate Azure Virtual Machines into your organization's on-premises domains, private clouds, and
SAP system landscape.
In this article, we cover the steps to deploy SAP applications on virtual machines (VMs) in Azure, including
alternate deployment options and troubleshooting. This article builds on the information in Azure Virtual
Machines planning and implementation for SAP NetWeaver. It also complements SAP installation
documentation and SAP Notes, which are the primary resources for installing and deploying SAP software.
Prerequisites
Setting up an Azure virtual machine for SAP software deployment involves multiple steps and resources.
Before you start, make sure that you meet the prerequisites for installing SAP software on virtual machines in
Azure.
Local computer
To manage Windows or Linux VMs, you can use a PowerShell script and the Azure portal. For both tools, you
need a local computer running Windows 7 or a later version of Windows. If you want to manage only Linux
VMs and you want to use a Linux computer for this task, you can use Azure CLI.
Internet connection
To download and run the tools and scripts that are required for SAP software deployment, you must be
connected to the Internet. The Azure VM that is running the Azure Enhanced Monitoring Extension for SAP also
needs access to the Internet. If the Azure VM is part of an Azure virtual network or on-premises domain, make
sure that the relevant proxy settings are set, as described in Configure the proxy.
Microsoft Azure subscription
You need an active Azure account.
Topology and networking
You need to define the topology and architecture of the SAP deployment in Azure:
Azure storage accounts to be used
Virtual network where you want to deploy the SAP system
Resource group to which you want to deploy the SAP system
Azure region where you want to deploy the SAP system
SAP configuration (two-tier or three-tier)
VM sizes and the number of additional data disks to be mounted to the VMs
SAP Correction and Transport System (CTS) configuration
Create and configure Azure storage accounts (if required) or Azure virtual networks before you begin the SAP
software deployment process. For information about how to create and configure these resources, see Azure
Virtual Machines planning and implementation for SAP NetWeaver.
SAP sizing
Know the following information, for SAP sizing:
Projected SAP workload, for example, by using the SAP Quick Sizer tool, and the SAP Application
Performance Standard (SAPS) number
Required CPU resource and memory consumption of the SAP system
Required input/output (I/O) operations per second
Required network bandwidth for communication between VMs in Azure
Required network bandwidth between on-premises assets and the Azure-deployed SAP system
Resource groups
In Azure Resource Manager, you can use resource groups to manage all the application resources in your
Azure subscription. For more information, see Azure Resource Manager overview.
Resources
SAP resources
When you are setting up your SAP software deployment, you need the following SAP resources:
SAP Note 1928533, which has:
List of Azure VM sizes that are supported for the deployment of SAP software
Important capacity information for Azure VM sizes
Supported SAP software, and operating system (OS) and database combinations
Required SAP kernel version for Windows and Linux on Microsoft Azure
SAP Note 2015553 lists prerequisites for SAP-supported SAP software deployments in Azure.
SAP Note 2178632 has detailed information about all monitoring metrics reported for SAP in Azure.
SAP Note 1409604 has the required SAP Host Agent version for Windows in Azure.
SAP Note 2191498 has the required SAP Host Agent version for Linux in Azure.
SAP Note 2243692 has information about SAP licensing on Linux in Azure.
SAP Note 1984787 has general information about SUSE Linux Enterprise Server 12.
SAP Note 2002167 has general information about Red Hat Enterprise Linux 7.x.
SAP Note 2069760 has general information about Oracle Linux 7.x.
SAP Note 1999351 has additional troubleshooting information for the Azure Enhanced Monitoring
Extension for SAP.
SAP Note 1597355 has general information about swap-space for Linux.
SAP on Azure SCN page has news and a collection of useful resources.
SAP Community WIKI has all required SAP Notes for Linux.
SAP-specific PowerShell cmdlets that are part of Azure PowerShell.
SAP-specific Azure CLI commands that are part of Azure CLI.
Windows resources
These Microsoft articles cover SAP deployments in Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver (this article)
Azure Virtual Machines DBMS deployment for SAP NetWeaver
Windows
To prepare a Windows image that you can use to deploy multiple virtual machines, the Windows settings
(like Windows SID and hostname) must be abstracted or generalized on the on-premises VM. You can use
sysprep to do this.
Linux
To prepare a Linux image that you can use to deploy multiple virtual machines, some Linux settings must
be abstracted or generalized on the on-premises VM. You can use waagent -deprovision to do this. For
more information, see Capture a Linux virtual machine running on Azure and the Azure Linux agent user
guide.
You can prepare and create a custom image, and then use it to create multiple new VMs. This is described in
Azure Virtual Machines planning and implementation for SAP NetWeaver. Set up your database content either
by using SAP Software Provisioning Manager to install a new SAP system (restores a database backup from a
disk that's attached to the virtual machine) or by directly restoring a database backup from Azure storage, if
your DBMS supports it. For more information, see Azure Virtual Machines DBMS deployment for SAP
NetWeaver. If you have already installed an SAP system on your on-premises VM (especially for two-tier
systems), you can adapt the SAP system settings after the deployment of the Azure VM by using the System
Rename procedure supported by SAP Software Provisioning Manager (SAP Note 1619720). Otherwise, you
can install the SAP software after you deploy the Azure VM.
The following flowchart shows the SAP-specific sequence of steps for deploying a VM from a custom image:
Windows
Azure Virtual Machine Agent overview
Linux
Azure Linux Agent User Guide
The following flowchart shows the sequence of steps for moving an on-premises VM by using a non-
generalized Azure VHD:
If the disk is already uploaded and defined in Azure (see Azure Virtual Machines planning and implementation
for SAP NetWeaver), do the tasks described in the next few sections.
Create a virtual machine
To create a deployment by using a private OS disk through the Azure portal, use the SAP template published in
the azure-quickstart-templates GitHub repository. You also can manually create a virtual machine, by using
PowerShell.
Two-tier configuration (only one virtual machine) template (sap-2-tier-user-disk)
To create a two-tier system by using only one virtual machine, use this template.
Two-tier configuration (only one virtual machine) template - Managed Disk (sap-2-tier-user-disk-md)
To create a two-tier system by using only one virtual machine and a Managed Disk, use this template.
In the Azure portal, enter the following parameters for the template:
1. Basics:
Subscription: The subscription to use to deploy the template.
Resource group: The resource group to use to deploy the template. You can create a new resource
group or select an existing resource group in the subscription.
Location: Where to deploy the template. If you selected an existing resource group, the location of
that resource group is used.
2. Settings:
SAP System ID: The SAP System ID.
OS type: The operating system type you want to deploy (Windows or Linux).
SAP system size: The size of the SAP system.
The number of SAPS the new system provides. If you are not sure how many SAPS the system
requires, ask your SAP Technology Partner or System Integrator.
Storage type (two-tier template only): The type of storage to use.
For larger systems, we highly recommend using Azure Premium Storage. For more information
about storage types, see the following resources:
Use of Azure Premium SSD Storage for SAP DBMS Instance
Microsoft Azure Storage in Azure Virtual Machine DBMS deployment for SAP NetWeaver
Premium Storage: High-performance storage for Azure Virtual Machine workloads
Introduction to Microsoft Azure Storage
OS disk VHD URI (unmanaged disk template only): The URI of the private OS disk, for example,
https://<accountname>.blob.core.windows.net/vhds/osdisk.vhd.
OS disk Managed Disk Id (managed disk template only): The Id of the Managed Disk OS disk,
/subscriptions/92d102f7-81a5-4df7-9877-
54987ba97dd9/resourceGroups/group/providers/Microsoft.Compute/disks/WIN
New or existing subnet: Determines whether a new virtual network and subnet are created, or an
existing subnet is used. If you already have a virtual network that is connected to your on-premises
network, select Existing.
Subnet ID: The ID of the subnet to which the virtual machines will connect. Select the subnet
of your VPN or Azure ExpressRoute virtual network to use to connect the virtual machine to your
on-premises network. The ID usually looks like this:
/subscriptions/<subscription id>/resourceGroups/<resource group
name>/providers/Microsoft.Network/virtualNetworks/<virtual network name>/subnets/<subnet
name>
3. Terms and conditions:
Review and accept the legal terms.
4. Select Purchase.
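The Subnet ID entered in the Settings step follows the standard Azure resource ID pattern. As a minimal sketch (the helper name subnet_id is made up for illustration), the ID can be assembled from its parts:

```python
def subnet_id(subscription, resource_group, vnet, subnet):
    # Assemble an Azure subnet resource ID in the documented form:
    # /subscriptions/<subscription id>/resourceGroups/<resource group name>/
    #   providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>
    return ("/subscriptions/{}/resourceGroups/{}/providers/"
            "Microsoft.Network/virtualNetworks/{}/subnets/{}"
            ).format(subscription, resource_group, vnet, subnet)

# Hypothetical names, for illustration only:
print(subnet_id("92d102f7-81a5-4df7-9877-54987ba97dd9",
                "group", "vnet-sap", "subnet-sap"))
```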
Install the VM Agent
To use the templates described in the preceding section, the VM Agent must be installed on the OS disk, or the
deployment will fail. Download and install the VM Agent in the VM, as described in Download, install, and
enable the Azure VM Agent.
If you don't use the templates described in the preceding section, you can also install the VM Agent afterwards.
Join a domain (Windows only)
If your Azure deployment is connected to an on-premises Active Directory or DNS instance via an Azure site-
to-site VPN connection or ExpressRoute (this is called cross-premises in Azure Virtual Machines planning and
implementation for SAP NetWeaver), it is expected that the VM joins an on-premises domain. For more
information about considerations for this task, see Join a VM to an on-premises domain (Windows only).
Configure proxy settings
Depending on how your on-premises network is configured, you might need to set up the proxy on your VM. If
your VM is connected to your on-premises network via VPN or ExpressRoute, the VM might not be able to
access the Internet, and won't be able to download the required extensions or collect monitoring data. For
more information, see Configure the proxy.
Configure monitoring
To be sure SAP supports your environment, set up the Azure Monitoring Extension for SAP as described in
Configure the Azure Enhanced Monitoring Extension for SAP. Check the prerequisites for SAP monitoring, and
required minimum versions of SAP Kernel and SAP Host Agent, in the resources listed in SAP resources.
Monitoring check
Check whether monitoring is working, as described in Checks and troubleshooting for setting up end-to-end
monitoring.
6. Select Install, and then accept the Microsoft Software License Terms.
7. PowerShell is installed. Select Finish to close the installation wizard.
Check frequently for updates to the PowerShell cmdlets, which usually are updated monthly. The easiest way to
check for updates is to do the preceding installation steps, up to the installation page shown in step 5. The
release date and release number of the cmdlets are included on the page shown in step 5. Unless stated
otherwise in SAP Note 1928533 or SAP Note 2015553, we recommend that you work with the latest version of
Azure PowerShell cmdlets.
To check the version of the Azure PowerShell cmdlets that are installed on your computer, run this PowerShell
command:
(Get-Module AzureRm.Compute).Version
To check the version of the Azure CLI that is installed on your computer, run this command:
azure --version
If the agent is already installed, to update the Azure Linux Agent, do the steps described in Update the Azure
Linux Agent on a VM to the latest version from GitHub.
Configure the proxy
The steps you take to configure the proxy in Windows are different from the way you configure the proxy in
Linux.
Windows
Proxy settings must be set up correctly for the Local System account to access the Internet. If your proxy
settings are not set by Group Policy, you can configure the settings for the Local System account.
1. Go to Start, enter gpedit.msc, and then press Enter.
2. Select Computer Configuration > Administrative Templates > Windows Components > Internet
Explorer. Make sure that the setting Make proxy settings per-machine (rather than per-user) is
disabled or not configured.
3. In Control Panel, go to Network and Sharing Center > Internet Options.
4. On the Connections tab, select the LAN settings button.
5. Clear the Automatically detect settings check box.
6. Select the Use a proxy server for your LAN check box, and then enter the proxy address and port.
7. Select the Advanced button.
8. In the Exceptions box, enter the IP address 168.63.129.16. Select OK.
Linux
Configure the correct proxy in the configuration file of the Microsoft Azure Guest Agent, which is located at /etc/waagent.conf.
Set the following parameters:
1. HTTP proxy host. For example, set it to proxy.corp.local.
HttpProxy.Host=<proxy host>
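As a rough sketch of what this amounts to (illustrative only; in practice you edit /etc/waagent.conf with a text editor and restart the agent), the file is a simple list of key=value parameters, and setting the proxy host means replacing or appending the HttpProxy.Host line:

```python
def set_waagent_param(conf_text, key, value):
    """Set key=value in waagent.conf-style text, replacing an existing
    line for the key or appending a new line if the key is absent."""
    out, replaced = [], False
    for line in conf_text.splitlines():
        if line.split("=", 1)[0].strip() == key:
            out.append("{}={}".format(key, value))
            replaced = True
        else:
            out.append(line)
    if not replaced:
        out.append("{}={}".format(key, value))
    return "\n".join(out)

# Example: point the agent at proxy.corp.local as in the text above.
conf = "ResourceDisk.Format=y\nHttpProxy.Host=None"
print(set_waagent_param(conf, "HttpProxy.Host", "proxy.corp.local"))
```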
The proxy settings in /etc/waagent.conf also apply to the required VM extensions. If you want to use the Azure
repositories, make sure that the traffic to these repositories is not going through your on-premises intranet. If
you created user-defined routes to enable forced tunneling, make sure that you add a route that routes traffic
to the repositories directly to the Internet, and not through your site-to-site VPN connection.
SLES
You also need to add routes for the IP addresses listed in /etc/regionserverclnt.cfg. The following figure
shows an example:
RHEL
You also need to add routes for the IP addresses of the hosts listed in /etc/yum.repos.d/rhui-load-balancers. For an example, see the preceding figure.
Oracle Linux
There are no repositories for Oracle Linux on Azure. You need to configure your own repositories for
Oracle Linux or use the public repositories.
For more information about user-defined routes, see User-defined routes and IP forwarding.
Configure the Azure Enhanced Monitoring Extension for SAP
When you've prepared the VM as described in Deployment scenarios of VMs for SAP on Azure, the Azure VM
Agent is installed on the virtual machine. The next step is to deploy the Azure Enhanced Monitoring Extension
for SAP, which is available in the Azure Extension Repository in the global Azure datacenters. For more
information, see Azure Virtual Machines planning and implementation for SAP NetWeaver.
You can use PowerShell or Azure CLI to install and configure the Azure Enhanced Monitoring Extension for
SAP. To install the extension on a Windows or Linux VM by using a Windows machine, see Azure PowerShell.
To install the extension on a Linux VM by using a Linux desktop, see Azure CLI.
Azure PowerShell for Linux and Windows VMs
To install the Azure Enhanced Monitoring Extension for SAP by using PowerShell:
1. Make sure that you have installed the latest version of the Azure PowerShell cmdlet. For more information,
see Deploying Azure PowerShell cmdlets.
2. Run the following PowerShell cmdlet. For a list of available environments, run
the cmdlet Get-AzureRmEnvironment. If you want to use global Azure, your environment is AzureCloud.
For Azure in China, select AzureChinaCloud.
After you enter your account data and identify the Azure virtual machine, the script deploys the required
extensions and enables the required features. This can take several minutes. For more information about
Set-AzureRmVMAEMExtension , see Set-AzureRmVMAEMExtension.
The Set-AzureRmVMAEMExtension configuration does all the steps to configure host monitoring for SAP.
The script output includes the following information:
Confirmation that monitoring for the OS disk and all additional data disks has been configured.
The next two messages confirm the configuration of Storage Metrics for a specific storage account.
One line of output gives the status of the actual update of the monitoring configuration.
Another line of output confirms that the configuration has been deployed or updated.
The last line of output is informational. It shows your options for testing the monitoring configuration.
To check that all steps of Azure Enhanced Monitoring have been executed successfully, and that the Azure
Infrastructure provides the necessary data, proceed with the readiness check for the Azure Enhanced
Monitoring Extension for SAP, as described in Readiness check for Azure Enhanced Monitoring for SAP.
Wait 15-30 minutes for Azure Diagnostics to collect the relevant data.
Azure CLI for Linux VMs
To install the Azure Enhanced Monitoring Extension for SAP by using Azure CLI:
1. Install Azure CLI 1.0, as described in Install the Azure CLI 1.0.
2. Sign in with your Azure account:
azure login
5. Verify that the Azure Enhanced Monitoring Extension is active on the Azure Linux VM. Check whether the
file /var/lib/AzureEnhancedMonitor/PerfCounters exists. If it exists, at a command prompt, run this
command to display information collected by the Azure Enhanced Monitor:
cat /var/lib/AzureEnhancedMonitor/PerfCounters
2;cpu;Current Hw Frequency;;0;2194.659;MHz;60;1444036656;saplnxmon;
2;cpu;Max Hw Frequency;;0;2194.659;MHz;0;1444036656;saplnxmon;
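Each line of the PerfCounters file is a semicolon-separated record. The field names used in the following sketch are inferred from the sample lines above and are assumptions, not officially documented names:

```python
def parse_perf_counter(line):
    # Split one semicolon-separated PerfCounters record. The field
    # names below are guesses based on the sample output above.
    fields = line.rstrip(";").split(";")
    keys = ["type", "category", "name", "instance", "flags",
            "value", "unit", "refresh_interval", "timestamp", "machine"]
    return dict(zip(keys, fields))

rec = parse_perf_counter(
    "2;cpu;Current Hw Frequency;;0;2194.659;MHz;60;1444036656;saplnxmon;")
print(rec["name"], rec["value"], rec["unit"])
```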
NOTE
Azperflib.exe runs in a loop and updates the collected counters every 60 seconds. To end the loop, close the
Command Prompt window.
If the Azure Enhanced Monitoring Extension is not installed, or the AzureEnhancedMonitoring service is not
running, the extension has not been configured correctly. For detailed information about how to deploy the
extension, see Troubleshooting the Azure monitoring infrastructure for SAP.
Check the output of azperflib.exe
Azperflib.exe output shows all populated Azure performance counters for SAP. At the bottom of the list of
collected counters, a summary and health indicator show the status of Azure monitoring.
Check the result returned for the Counters total output (which can be reported as empty) and for Health status, shown in the preceding figure.
Interpret the resulting values as follows:
AZPERFLIB.EXE RESULT VALUE: AZURE MONITORING HEALTH STATUS
API Calls - not available: Counters that are not available might be either not applicable to the virtual machine configuration, or errors. See Health status.
Counters total - empty: The following two Azure storage counters can be empty: Storage Read Op Latency Server msec and Storage Read Op Latency E2E msec. All other counters must have values.
If the Health status value is not OK, follow the instructions in Health check for Azure monitoring infrastructure
configuration.
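The interpretation above can be sketched as a simple validation (illustrative only): every counter must have a value, except the two storage latency counters that are allowed to be empty.

```python
# Counters that are allowed to have no value, per the guidance above.
ALLOWED_EMPTY = {
    "Storage Read Op Latency Server msec",
    "Storage Read Op Latency E2E msec",
}

def missing_counters(counters):
    """Return names of counters that are empty but should have values."""
    return [name for name, value in counters.items()
            if value in (None, "") and name not in ALLOWED_EMPTY]

sample = {
    "Current Hw Frequency": "2194.659",
    "Storage Read Op Latency E2E msec": "",   # empty, but allowed
    "Max Hw Frequency": "",                   # empty, a problem
}
print(missing_counters(sample))
```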
Run the readiness check on a Linux VM
1. Connect to the Azure Virtual Machine by using SSH.
2. Check the output of the Azure Enhanced Monitoring Extension.
a. Run more /var/lib/AzureEnhancedMonitor/PerfCounters
Expected result: Returns list of performance counters. The file should not be empty.
b. Run cat /var/lib/AzureEnhancedMonitor/PerfCounters | grep Error
Expected result: Returns one line where the error is none, for example,
3;config;Error;;0;0;none;0;1456416792;tst-servercs;
c. Run more /var/lib/AzureEnhancedMonitor/LatestErrorRecord
Expected result: Displays one entry similar to: python /usr/sbin/waagent -daemon
3. Make sure that the Azure Enhanced Monitoring Extension is installed and running.
a. Run sudo sh -c 'ls -al /var/lib/waagent/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux-*/'
Expected result: Lists the content of the Azure Enhanced Monitoring Extension directory.
b. Run ps -ax | grep AzureEnhanced
4. Install SAP Host Agent as described in SAP Note 1031096, and check the output of saposcol .
a. Run /usr/sap/hostctrl/exe/saposcol -d
3. Enter your account data and identify the Azure virtual machine.
4. The script tests the configuration of the virtual machine you select.
Make sure that every health check result is OK. If some checks do not display OK, run the update cmdlet as
described in Configure the Azure Enhanced Monitoring Extension for SAP. Wait 15 minutes, and repeat the
checks described in Readiness check for Azure Enhanced Monitoring for SAP and Health check for Azure
Monitoring Infrastructure Configuration. If the checks still indicate a problem with some or all counters, see
Troubleshooting the Azure monitoring infrastructure for SAP.
Troubleshooting the Azure monitoring infrastructure for SAP
Azure performance counters do not show up at all
The AzureEnhancedMonitoring Windows service collects performance metrics in Azure. If the service has not
been installed correctly or if it is not running in your VM, no performance metrics can be collected.
The installation directory of the Azure Enhanced Monitoring Extension is empty
Issue
The installation directory
C:\Packages\Plugins\Microsoft.AzureCAT.AzureEnhancedMonitoring.AzureCATExtensionHandler\
<version>\drop is empty.
Solution
The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might need to
restart the machine or rerun the Set-AzureRmVMAEMExtension configuration script.
Service for Azure Enhanced Monitoring does not exist
Issue
The AzureEnhancedMonitoring Windows service does not exist.
Solution
If the service does not exist, the Azure Enhanced Monitoring Extension for SAP has not been installed correctly.
Redeploy the extension by using the steps described for your deployment scenario in Deployment scenarios of
VMs for SAP in Azure.
After you deploy the extension, after one hour, check again whether the Azure performance counters are
provided in the Azure VM.
Service for Azure Enhanced Monitoring exists, but fails to start
Issue
The AzureEnhancedMonitoring Windows service exists and is enabled, but fails to start. For more information,
check the application event log.
Solution
The configuration is incorrect. Restart the monitoring extension for the VM, as described in Configure the Azure
Enhanced Monitoring Extension for SAP.
Some Azure performance counters are missing
The AzureEnhancedMonitoring Windows service collects performance metrics in Azure. The service gets data
from several sources. Some configuration data is collected locally, and some performance metrics are read
from Azure Diagnostics. Storage counters are used from your logging on the storage subscription level.
If troubleshooting by using SAP Note 1999351 doesn't resolve the issue, rerun the Set-AzureRmVMAEMExtension
configuration script. You might have to wait an hour because storage analytics or diagnostics counters might
not be created immediately after they are enabled. If the problem persists, open an SAP customer support
message on the component BC-OP-NT-AZR for Windows or BC-OP-LNX-AZR for a Linux virtual machine.
The directory /var/lib/waagent/ does not have a subdirectory for the Azure Enhanced Monitoring extension.
Solution
The extension is not installed. Determine whether this is a proxy issue (as described earlier). You might need to
restart the machine and/or rerun the Set-AzureRmVMAEMExtension configuration script.
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic.
This article covers using the Resource Manager deployment model, which Microsoft recommends for new deployments
instead of the classic deployment model.
This guide is part of the documentation on implementing and deploying the SAP software on Microsoft
Azure. Before reading this guide, read the Planning and Implementation Guide. This document covers the
deployment of various Relational Database Management Systems (RDBMS) and related products in
combination with SAP on Microsoft Azure Virtual Machines (VMs) using the Azure Infrastructure as a Service
(IaaS) capabilities.
The paper complements the SAP Installation Documentation and SAP Notes, which represent the primary
resources for installations and deployments of SAP software on given platforms.
General considerations
In this chapter, considerations of running SAP-related DBMS systems in Azure VMs are introduced. There are few references to specific DBMS systems in this chapter. Instead, the specific DBMS systems are covered later in this paper, following this chapter.
Definitions upfront
Throughout the document, we use the following terms:
IaaS: Infrastructure as a Service.
PaaS: Platform as a Service.
SaaS: Software as a Service.
SAP Component: an individual SAP application such as ECC, BW, Solution Manager, or EP. SAP
components can be based on traditional ABAP or Java technologies or a non-NetWeaver based
application such as Business Objects.
SAP Environment: one or more SAP components logically grouped to perform a business function such as
Development, QAS, Training, DR, or Production.
SAP Landscape: This refers to the entire set of SAP assets in a customer's IT landscape. The SAP landscape includes all production and non-production environments.
SAP System: The combination of DBMS layer and application layer of, for example, an SAP ERP
development system, SAP BW test system, SAP CRM production system, etc. In Azure deployments, it is
not supported to divide these two layers between on-premises and Azure. This means an SAP system is
either deployed on-premises or it is deployed in Azure. However, you can deploy the different systems of
an SAP landscape in Azure or on-premises. For example, you could deploy the SAP CRM development and
test systems in Azure but the SAP CRM production system on-premises.
Cloud-Only deployment: A deployment where the Azure subscription is not connected via a site-to-site or
ExpressRoute connection to the on-premises network infrastructure. In common Azure documentation
these kinds of deployments are also described as Cloud-Only deployments. Virtual Machines deployed
with this method are accessed through the Internet and public Internet endpoints assigned to the VMs in
Azure. The on-premises Active Directory (AD) and DNS are not extended to Azure in these types of
deployments. Hence the VMs are not part of the on-premises Active Directory. Note: Cloud-Only
deployments in this document are defined as complete SAP landscapes, which are running exclusively in
Azure without extension of Active Directory or name resolution from on-premises into public cloud.
Cloud-Only configurations are not supported for production SAP systems or configurations where SAP
STMS or other on-premises resources need to be used between SAP systems hosted on Azure and
resources residing on-premises.
Cross-Premises: Describes a scenario where VMs are deployed to an Azure subscription that has site-to-
site, multi-site, or ExpressRoute connectivity between the on-premises datacenter(s) and Azure. In
common Azure documentation, these kinds of deployments are also described as Cross-Premises
scenarios. The reason for the connection is to extend on-premises domains, on-premises Active Directory,
and on-premises DNS into Azure. The on-premises landscape is extended to the Azure assets of the
subscription. Having this extension, the VMs can be part of the on-premises domain. Domain users of the
on-premises domain can access the servers and can run services on those VMs (like DBMS services).
Communication and name resolution between VMs deployed on-premises and VMs deployed in Azure is
possible. We expect this to be the most common scenario for deploying SAP assets on Azure. For more
information, see this article and this article.
NOTE
Cross-Premises deployments of SAP systems where Azure Virtual Machines running SAP systems are members of an
on-premises domain are supported for production SAP systems. Cross-Premises configurations are supported for
deploying parts of or complete SAP landscapes into Azure. Even running the complete SAP landscape in Azure requires
that those VMs be part of the on-premises domain and Active Directory. In former versions of the documentation, we talked
about Hybrid-IT scenarios, where the term Hybrid is rooted in the fact that there is a cross-premises connectivity
between on-premises and Azure. In this case Hybrid also means that the VMs in Azure are part of the on-premises
Active Directory.
Some Microsoft documentation describes Cross-Premises scenarios a bit differently, especially for DBMS HA
configurations. In the case of the SAP-related documents, the Cross-Premises scenario just boils down to
having a site-to-site or private (ExpressRoute) connectivity and to the fact that the SAP landscape is
distributed between on-premises and Azure.
Resources
The following guides are available for the topic of SAP deployments on Azure:
Azure Virtual Machines planning and implementation for SAP NetWeaver
Azure Virtual Machines deployment for SAP NetWeaver
Azure Virtual Machines DBMS deployment for SAP NetWeaver (this document)
The following SAP Notes are related to the topic of SAP on Azure:
2233094 DB6: SAP Applications on Azure Using IBM DB2 for Linux,
UNIX, and Windows - Additional Information
Also read the SCN Wiki that contains all SAP Notes for Linux.
You should have a working knowledge about the Microsoft Azure Architecture and how Microsoft Azure
Virtual Machines are deployed and operated. You can find more information at
https://azure.microsoft.com/documentation/
NOTE
We are not discussing Microsoft Azure Platform as a Service (PaaS) offerings of the Microsoft Azure Platform. This
paper is about running a database management system (DBMS) in Microsoft Azure Virtual Machines (IaaS) just as you
would run the DBMS in your on-premises environment. Database capabilities and functionalities between these two
offers are very different and should not be mixed up with each other. See also:
https://azure.microsoft.com/services/sql-database/
Since we are discussing IaaS, in general the Windows, Linux, and DBMS installation and configuration are
essentially the same as for any virtual machine or bare-metal machine you would install on-premises. However,
there are some architecture and system management implementation decisions, which are different when
utilizing IaaS. The purpose of this document is to explain the specific architectural and system management
differences that you must be prepared for when using IaaS.
In general, the overall areas of difference that this paper discusses are:
Planning the proper VM/disk layout of SAP systems to ensure you have the proper data file layout and can
achieve enough IOPS for your workload.
Networking considerations when using IaaS.
Specific database features to use in order to optimize the database layout.
Backup and restore considerations in IaaS.
Utilizing different types of images for deployment.
High Availability in Azure IaaS.
Linux
Linux Azure VMs automatically mount a drive at /mnt/resource that is a non-persisted drive backed by
local disks on the Azure compute node. Because it is non-persisted, this means that any changes made to
content in /mnt/resource are lost when the VM is rebooted. By any changes, we mean files saved,
directories created, applications installed, etc.
Depending on the Azure VM-series, the local disks on the compute node show different performance, which
can be categorized as follows:
A0-A7: Very limited performance. Not usable for anything beyond the Windows page file
A8-A11: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput
D-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput
DS-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec
throughput
G-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec throughput
GS-Series: Very good performance characteristics with some ten thousand IOPS and >1GB/sec
throughput
The statements above apply to the VM types that are certified with SAP. The VM-series with excellent IOPS
and throughput are good candidates for some DBMS features, like tempdb or temporary table space.
Caching for VMs and data disks
When we create data disks through the portal or when we mount uploaded disks to VMs, we can choose
whether the I/O traffic between the VM and those disks located in Azure storage is cached. Azure Standard
and Premium Storage use two different technologies for this type of cache. In both cases, the cache itself is
disk-backed on the same drives used by the temporary disk (D:\ on Windows or /mnt/resource on
Linux) of the VM.
For Azure Standard Storage the possible cache types are:
No caching
Read caching
Read and Write caching
In order to get consistent and deterministic performance, you should set the caching on Azure Standard
Storage for all disks containing DBMS-related data files, log files, and table space to 'NONE'. The
caching of the VM can remain with the default.
For Azure Premium Storage the following caching options exist:
No caching
Read caching
The recommendation for Azure Premium Storage is to use Read caching for the data files of the SAP
database and to choose No caching for the disks containing the log file(s).
Software RAID
As already stated above, you need to balance the number of IOPS needed for the database files across the
number of disks you can configure and the maximum IOPS an Azure VM provides per disk or Premium
Storage disk type. The easiest way to deal with the IOPS load over disks is to build a software RAID over the
different disks and then place a number of data files of the SAP DBMS on the LUNs carved out of the software
RAID. Depending on the requirements, you might want to consider the usage of Premium Storage as well,
since two of the three different Premium Storage disk types provide a higher IOPS quota than disks based on
Standard Storage, besides the significantly better I/O latency provided by Azure Premium Storage.
The same applies to the transaction log of the different DBMS systems. With many of them, just adding more Tlog
files does not help, since the DBMS systems write into only one of the files at a time. If higher IOPS rates are
needed than a single Standard Storage based disk can deliver, you can stripe over multiple Standard Storage
disks, or you can use a larger Premium Storage disk type that, beyond higher IOPS rates, also delivers latency
that is factors lower for the write I/Os into the transaction log.
Situations experienced in Azure deployments, which would favor using a software RAID are:
Transaction Log/Redo Log require more IOPS than Azure provides for a single disk. As mentioned above
this can be solved by building a LUN over multiple disks using a software RAID.
Uneven I/O workload distribution over the different data files of the SAP database. In such cases, one
data file can hit its quota rather often, whereas other data files are not even getting close
to the IOPS quota of a single disk. In such cases, the easiest solution is to build one LUN over multiple
disks using a software RAID.
You don't know what the exact I/O workload per data file is and only roughly know what the overall IOPS
workload against the DBMS is. The easiest approach is to build one LUN with the help of a software RAID. The sum
of the quotas of the multiple disks behind this LUN should then fulfill the known IOPS rate.
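The sizing arithmetic behind that last point can be sketched as follows. The IOPS figures used here are assumptions for illustration; check the actual quotas of your disk type and VM size:

```shell
# Rough sizing sketch (assumed figures): how many Standard Storage disks a
# striped software-RAID LUN needs so that the sum of the per-disk IOPS quotas
# covers a known overall IOPS requirement.
required_iops=3200
per_disk_iops=500   # assumption; check the current Azure quota for your disk type
# Ceiling division: round up so the quota sum meets or exceeds the requirement.
disks=$(( (required_iops + per_disk_iops - 1) / per_disk_iops ))
echo "$disks"
```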
Windows
We recommend using Windows Storage Spaces if you run on Windows Server 2012 or higher. It is more
efficient than Windows Striping of earlier Windows versions. You might need to create the Windows
Storage Pools and Storage Spaces by PowerShell commands when using Windows Server 2012 as
Operating System. The PowerShell commands can be found here
https://technet.microsoft.com/library/jj851254.aspx
Linux
Only MDADM and LVM (Logical Volume Manager) are supported to build a software RAID on Linux. For
more information, read the following articles:
Configure Software RAID on Linux (for MDADM)
Configure LVM on a Linux VM in Azure
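A minimal MDADM sketch, assuming four attached data disks show up as /dev/sdc through /dev/sdf and a hypothetical mount point of /sapdata:

```shell
# RAID 0 is sufficient: Azure Storage already keeps at least three replicas
# of every disk, so no RAID 1/5 redundancy is needed on top.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf
mkfs.xfs /dev/md0                          # file system on the striped LUN
mkdir -p /sapdata
mount /dev/md0 /sapdata
# Persist the mount; nofail avoids boot problems if the array is not assembled yet.
echo '/dev/md0 /sapdata xfs defaults,nofail 0 2' >> /etc/fstab
```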
Considerations for using VM-series that are able to work with Azure Premium Storage usually are:
Demand for I/O latencies that are close to what SAN/NAS devices deliver.
Demand for I/O latency that is factors better than Azure Standard Storage can deliver.
Higher IOPS per VM than what could be achieved with multiple Standard Storage disks against a certain
VM type.
Since the underlying Azure Storage replicates each disk to at least three storage nodes, simple RAID 0
striping can be used. There is no need to implement RAID5 or RAID1.
Microsoft Azure Storage
Microsoft Azure Storage stores the base VM (with OS) and disks or BLOBs to at least three separate storage
nodes. When creating a storage account or managed disk, there is a choice of protection level.
Azure Storage Local Replication (Locally Redundant) provides levels of protection against data loss due to
infrastructure failure that few customers could afford to deploy. There are four different
options, with a fifth being a variation of one of the first three. Looking closer at them, we can distinguish:
Premium Locally Redundant Storage (LRS): Azure Premium Storage delivers high-performance, low-
latency disk support for virtual machines running I/O-intensive workloads. There are three replicas of the
data within the same Azure datacenter of an Azure region. The copies are in different Fault and Upgrade
Domains (for concepts see this chapter in the Planning Guide). In case of a replica of the data going out of
service due to a storage node failure or disk failure, a new replica is generated automatically.
Locally Redundant Storage (LRS): In this case, there are three replicas of the data within the same Azure
datacenter of an Azure region. The copies are in different Fault and Upgrade Domains (for concepts see
this chapter in the Planning Guide). In case of a replica of the data going out of service due to a storage
node failure or disk failure, a new replica is generated automatically.
Geo Redundant Storage (GRS): In this case, there is an asynchronous replication that feeds an additional
three replicas of the data in another Azure Region, which is in most of the cases in the same geographical
region (like North Europe and West Europe). This results in three additional replicas, so that there are six
replicas in sum. A variation of this is an addition where the data in the geo replicated Azure region can be
used for read purposes (Read-Access Geo-Redundant).
Zone Redundant Storage (ZRS): In this case, the three replicas of the data remain in the same Azure
Region. As explained in this chapter of the Planning Guide, an Azure region can be a number of
datacenters in close proximity. In the case of ZRS, the replicas are distributed over the different
datacenters that make up one Azure region.
More information can be found here.
NOTE
For DBMS deployments, the usage of Geo Redundant Storage is not recommended.
Azure Storage Geo-Replication is asynchronous. Replication of the individual disks mounted to a single VM is not
synchronized in lock step. Therefore, it is not suitable for replicating DBMS files that are distributed over different disks or
deployed against a software RAID based on multiple disks. DBMS software requires that the persistent disk storage is
precisely synchronized across the different LUNs and underlying disks/spindles. DBMS software uses various mechanisms
to sequence I/O write activities, and a DBMS reports the disk storage targeted by the replication as corrupted if
these vary even by a few milliseconds. Hence, if one really wants a database stretched across multiple disks to be
geo-replicated, such replication needs to be performed with database means and functionality.
One should not rely on Azure Storage Geo-Replication to perform this job.
The problem is simplest to explain with an example system. Let's assume you have an SAP system uploaded into
Azure, which has eight disks containing data files of the DBMS plus one disk containing the transaction log file. Each
one of these nine disks has data written to it in a method that is consistent from the point of view of the DBMS,
whether the data is being written to the data files or the transaction log file.
In order to properly geo-replicate the data and maintain a consistent database image, the content of all nine disks
would have to be geo-replicated in the exact order the I/O operations were executed against the nine different disks.
However, Azure Storage geo-replication does not allow you to declare dependencies between disks. This means Microsoft
Azure Storage geo-replication doesn't know that the contents of these nine different disks are related to
each other and that the data changes are consistent only when replicated in the order the I/O operations happened
across all nine disks.
Besides the high chance that the geo-replicated images in this scenario do not provide a consistent database image,
geo redundant storage also introduces a penalty that can severely impact performance.
In summary, do not use this type of storage redundancy for DBMS type workloads.
If we want to create highly available configurations of DBMS deployments (independent of the individual
DBMS HA functionality used), the DBMS VMs would need to:
Add the VMs to the same Azure Virtual Network
(https://azure.microsoft.com/documentation/services/virtual-network/)
The VMs of the HA configuration should also be in the same subnet. Name resolution between the
different subnets is not possible in Cloud-Only deployments, only IP resolution works. Using site-to-site or
ExpressRoute connectivity for Cross-Premises deployments, a network with at least one subnet is already
established. Name resolution is done according to the on-premises AD policies and network
infrastructure.
IP Addresses
It is highly recommended to set up the VMs for HA configurations in a resilient way. Relying on IP addresses
to address the HA partner(s) within the HA configuration is not reliable in Azure unless static IP addresses are
used. There are two "Shutdown" concepts in Azure:
Shut down through the Azure portal or the Azure PowerShell cmdlet Stop-AzureRmVM: In this case, the Virtual
Machine gets shut down and de-allocated. Your Azure account is no longer charged for this VM, so the only
charges incurred are for the storage used. However, if the private IP address of the network interface was
not static, the IP address is released, and it is not guaranteed that the network interface gets the old IP
address assigned again after a restart of the VM. Performing the shut down through the Azure portal or by
calling Stop-AzureRmVM automatically causes de-allocation. If you do not want to de-allocate the machine,
use Stop-AzureRmVM -StayProvisioned.
If you shut down the VM from the OS level, the VM gets shut down and NOT de-allocated. However, in this
case, your Azure account is still charged for the VM, despite the fact that it is shut down. In such a case, the
assignment of the IP address to the stopped VM remains intact. Shutting down the VM from within does not
automatically force de-allocation.
Even for Cross-Premises scenarios, by default a shutdown and de-allocation means de-assignment of the IP
addresses from the VM, even if on-premises policies in DHCP settings are different.
The exception is if one assigns a static IP address to a network interface as described here.
In such a case the IP address remains fixed as long as the network interface is not deleted.
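With the current Azure CLI, making the private IP address of a network interface static could look like this sketch (all resource names and the address are hypothetical):

```shell
# Assigning an explicit private IP address switches the allocation method
# of this IP configuration from dynamic to static, so the address survives
# a stop/de-allocate cycle of the VM.
az network nic ip-config update \
  --resource-group SAP-RG \
  --nic-name sap-db-vm1-nic \
  --name ipconfig1 \
  --private-ip-address 10.1.0.10
```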
IMPORTANT
In order to keep the whole deployment simple and manageable, the clear recommendation is to set up the VMs
partnering in a DBMS HA or DR configuration within Azure in a way that there is functioning name resolution
between the different VMs involved.
IMPORTANT
We are not discussing Microsoft Azure SQL Database, which is a Platform as a Service offer of the Microsoft Azure
Platform. The discussion in this paper is about running the SQL Server product as it is known for on-premises
deployments in Azure Virtual Machines, leveraging the Infrastructure as a Service capability of Azure. Database
capabilities and functionalities between these two offers are different and should not be mixed up with each other. See
also: https://azure.microsoft.com/services/sql-database/
Be aware that the D:\ drive has different sizes depending on the VM type. Depending on the size requirement
of tempdb, you might be forced to pair the tempdb data and log files with the SAP database data and log files in
cases where the D:\ drive is too small.
Formatting the disks
For SQL Server the NTFS block size for disks containing SQL Server data and log files should be 64K. There is
no need to format the D:\ drive. This drive comes pre-formatted.
In order to make sure that the restore or creation of databases does not initialize the data files by zeroing out
their content, make sure that the user context the SQL Server service is running in has the required
permission. Usually, users in the Windows Administrator group have this permission. If the SQL
Server service is run in the user context of a non-Windows Administrator user, you need to assign that user the
user right 'Perform volume maintenance tasks'. See the details in this Microsoft Knowledge Base article:
https://support.microsoft.com/kb/2574695
Impact of database compression
In configurations where I/O bandwidth can become a limiting factor, every measure, which reduces IOPS
might help to stretch the workload one can run in an IaaS scenario like Azure. Therefore, if not yet done,
applying SQL Server PAGE compression is strongly recommended by both SAP and Microsoft before
uploading an existing SAP database to Azure.
The recommendation to perform database compression before uploading to Azure is given for the following
reasons:
The amount of data to be uploaded is lower.
The duration of the compression execution is shorter, assuming that one can use stronger hardware with
more CPUs, higher I/O bandwidth, or lower I/O latency on-premises.
Smaller database sizes might lead to lower costs for disk allocation.
Database compression works as well in Azure Virtual Machines as it does on-premises. For more details
on how to compress an existing SAP SQL Server database, check here:
https://blogs.msdn.com/b/saponsqlserver/archive/2010/10/08/compressing-an-sap-database-using-report-msscompress.aspx
SQL Server 2014 Storing Database Files directly on Azure Blob Storage
SQL Server 2014 opens the possibility to store database files directly on Azure Blob Storage without the
wrapper of a VHD around them. Especially when using Standard Azure Storage or smaller VM types, this
enables scenarios where you can overcome the IOPS limits that would be enforced by the limited number of
disks that can be mounted to some smaller VM types. This works for user databases, however, not for system
databases of SQL Server. It works for both data and log files of SQL Server. If you'd like to deploy an SAP SQL
Server database this way instead of wrapping it into VHDs, keep the following in mind:
The Storage Account used needs to be in the same Azure Region as the one that is used to deploy the VM
SQL Server is running in.
Considerations listed earlier regarding the distribution of VHDs over different Azure Storage Accounts
apply for this method of deployment as well. This means the I/O operations count against the limits of the
Azure Storage Account.
Details about this type of deployment are listed here:
https://docs.microsoft.com/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure
In order to store SQL Server data files directly on Azure Premium Storage, you need to have a minimum SQL
Server 2014 patch release, which is documented here: https://support.microsoft.com/kb/3063054. Storing
SQL Server data files on Azure Standard Storage does work with the released version of SQL Server 2014.
However, the very same patches contain another series of fixes, which make the direct usage of Azure Blob
Storage for SQL Server data files and backups more reliable. Therefore we recommend using these patches in
general.
SQL Server 2014 Buffer Pool Extension
SQL Server 2014 introduced a new feature called Buffer Pool Extension. This functionality extends
the buffer pool of SQL Server, which is kept in memory, with a second-level cache that is backed by local SSDs
of a server or VM. The Buffer Pool Extension makes it possible to keep a larger working set of data 'in memory'. Compared to accessing
Azure Standard Storage, access into the extension of the buffer pool, which is stored on local SSDs of an
Azure VM, is faster by factors. Therefore, leveraging the local D:\ drive of the VM types that have excellent
IOPS and throughput could be a very reasonable way to reduce the IOPS load against Azure Storage and
improve response times of queries dramatically. This applies especially when not using Premium Storage. In
the case of Premium Storage and the usage of the Premium Azure Read Cache on the compute node, as
recommended for data files, no significant differences are expected. The reason is that both caches (SQL Server
Buffer Pool Extension and Premium Storage Read Cache) use the local disks of the compute nodes. For
more details about this functionality, check this documentation:
https://docs.microsoft.com/sql/database-engine/configure-windows/buffer-pool-extension
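Enabling the Buffer Pool Extension on the local D:\ drive can be sketched as follows; instance, file path, and size are assumptions, and the extension should be sized to your working set:

```shell
# Run against the local default instance; the .bpe file lands on the
# non-persisted D:\ drive, which is acceptable because the extension is
# only a cache and is rebuilt after a VM restart.
sqlcmd -S . -Q "ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION ON (FILENAME = 'D:\BPE\sqlbpe.bpe', SIZE = 64 GB);"
```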
Backup/Recovery considerations for SQL Server
When deploying SQL Server into Azure, your backup methodology must be reviewed. Even if the system is
not a productive system, the SAP database hosted by SQL Server must be backed up periodically. Since Azure
Storage keeps three images, a backup is now less important for compensating a storage crash. The
primary reason for maintaining a proper backup and recovery plan is that you can compensate for
logical/manual errors by providing point-in-time recovery capabilities. So the goal is to either use backups to
restore the database back to a certain point in time, or to use the backups in Azure to seed another system by
copying the existing database. For example, you could transfer from a 2-Tier SAP configuration to a 3-Tier
system setup of the same system by restoring a backup.
There are three different ways to backup SQL Server to Azure Storage:
1. SQL Server 2012 SP1 CU4 and higher can natively back up databases to a URL. This is detailed in the blog New
functionality in SQL Server 2014 Part 5 Backup/Restore Enhancements. See the section SQL Server 2012
SP1 CU4 and later.
2. SQL Server releases prior to SQL Server 2012 SP1 CU4 can use a redirection functionality to back up to a VHD and
basically move the write stream towards an Azure Storage location that has been configured. See the section
SQL Server 2012 SP1 CU3 and earlier releases.
3. The final method is to perform a conventional SQL Server backup to disk command onto a disk device.
This is identical to the on-premises deployment pattern and is not discussed in detail in this document.
SQL Server 2012 SP1 CU4 and later
This functionality allows you to back up directly to Azure BLOB storage. Without this method, you must
back up to other disks, which would consume disk and IOPS capacity.
The advantage in this case is that one doesn't need to dedicate disks to store SQL Server backups on. So you
have fewer disks allocated, and the whole bandwidth of disk IOPS can be used for data and log files. Note that
the maximum size of a backup is limited to 1 TB, as documented in the section Limitations of
this article: https://docs.microsoft.com/sql/relational-databases/backup-restore/sql-server-backup-to-url#limitations.
If the backup size, despite using SQL Server backup compression, would exceed 1 TB,
the functionality described in the section SQL Server 2012 SP1 CU3 and earlier releases in this document needs
to be used.
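The mechanism can be sketched as follows; storage account, container, credential, and database names are hypothetical, and the secret placeholder must be replaced with your storage account access key:

```shell
# One-time: create a SQL Server credential holding the storage account key.
sqlcmd -S . -Q "CREATE CREDENTIAL AzureBackupCred WITH IDENTITY = 'mybackupacct', SECRET = '<storage-account-key>';"

# Back up directly to a blob; compression keeps the backup under the 1 TB limit longer.
sqlcmd -S . -Q "BACKUP DATABASE PRD TO URL = 'https://mybackupacct.blob.core.windows.net/sqlvm1-full/PRD_full.bak' WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;"
```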
Related documentation describing the restore of databases from backups against Azure Blob storage
recommends not restoring directly from Azure BLOB storage if the backup is larger than 25 GB. The recommendation in
this article is simply based on performance considerations and not due to functional restrictions. Therefore,
different conditions may apply on a case-by-case basis.
Documentation on how this type of backup is set up and leveraged can be found in this tutorial
An example of the sequence of steps can be read here.
When automating backups, it is of highest importance to make sure that the BLOBs for each backup are named
differently. Otherwise, they are overwritten and the restore chain is broken.
In order not to mix up things between the three different types of backups, it is advisable to create different
containers underneath the storage account used for backups. The containers could be by VM only, or by VM
and backup type.
In such a schema, the backups would not be performed into the same storage account where the VMs are
deployed. There would be a new storage account specifically for the backups. Within that storage account,
different containers would be created with a matrix of the type of backup and the VM name. Such
segmentation makes it easier to administrate the backups of the different VMs.
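A small shell sketch of such a naming schema (VM name and backup type are hypothetical) that keeps every blob name unique per backup run:

```shell
# One container per VM and backup type, and a UTC timestamp in every blob
# name so that automated backups never overwrite each other and the
# restore chain stays intact.
vm="sqlvm1"
backup_type="log"                        # full | diff | log
stamp=$(date -u +%Y%m%dT%H%M%SZ)
container="${vm}-${backup_type}"         # e.g. sqlvm1-log
blob="${vm}_${backup_type}_${stamp}.bak"
echo "container=${container}"
echo "blob=${blob}"
```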
The BLOBs one directly writes the backups to do not add to the count of the data disks of a VM. Hence
one could use the maximum number of data disks of the specific VM SKU for the data and transaction
log files and still execute a backup against a storage container.
SQL Server 2012 SP1 CU3 and earlier releases
The first step you must perform in order to achieve a backup directly against Azure Storage would be to
download the msi, which is linked to this KBA article.
Download the x64 installation file and the documentation. The file installs a program called: Microsoft SQL
Server Backup to Microsoft Azure Tool. Read the documentation of the product thoroughly. The tool basically
works in the following way:
From the SQL Server side, a disk location for the SQL Server backup is defined (don't use the D:\ drive for
this).
The tool allows you to define rules, which can be used to direct different types of backups to different
Azure Storage containers.
Once the rules are in place, the tool redirects the write stream of the backup from the defined VHD/disk to
the Azure Storage location, which was configured earlier.
The tool leaves a small stub file of a few KB size on the VHD/Disk, which was defined for the SQL Server
backup. This file should be left on the storage location since it is required to restore again from
Azure Storage.
If you have lost the stub file (for example through loss of the storage media that contained the stub
file) and you have chosen the option of backing up to a Microsoft Azure Storage account, you may
recover the stub file through Microsoft Azure Storage by downloading it from the storage container
in which it was placed. You should then place the stub file into a folder on the local machine where
the Tool is configured to detect and upload to the same container with the same encryption
password if encryption was used with the original rule.
This means the schema as described above for more recent releases of SQL Server can be put in place as well
for SQL Server releases that do not allow addressing an Azure Storage location directly.
This method should not be used with more recent SQL Server releases, which support backing up natively
against Azure Storage. Exceptions are cases where limitations of the native backup into Azure block native
backup execution into Azure.
Other possibilities to back up SQL Server databases
Another possibility to back up databases is to attach additional data disks to a VM that you use to store
backups on. In such a case, you need to make sure that the disks are not running full. If they do, you need to
unmount a disk, so to speak 'archive' it, and replace it with a new empty disk. If you
go down that path, you want to keep these VHDs in separate Azure Storage Accounts from the ones that
contain the VHDs with the database files.
A second possibility is to use a large VM that can have many disks attached, for example a D14 with 32 VHDs.
Use Storage Spaces to build a flexible environment where you could build shares that are then used as
backup targets for the different DBMS servers.
Some best practices are documented here as well.
Performance considerations for backups/restores
As in bare-metal deployments, backup/restore performance is dependent on how many volumes can be read
in parallel and what the throughput of those volumes might be. In addition, the CPU consumption used by
backup compression may play a significant role on VMs with just up to eight CPU threads. Therefore, one can
assume:
The fewer the number of disks used to store the data files, the smaller the overall throughput in reading.
The smaller the number of CPU threads in the VM, the more severe the impact of backup compression.
The fewer targets (BLOBs, VHDs, or disks) to write the backup to, the lower the throughput.
The smaller the VM size, the smaller the storage throughput quota for writing and reading from Azure
Storage, independent of whether the backups are stored directly in Azure Blobs or in VHDs that again are
stored in Azure Blobs.
When using a Microsoft Azure Storage BLOB as the backup target in more recent releases, you are restricted
to designating only one URL target for each specific backup.
But when using the Microsoft SQL Server Backup to Microsoft Azure Tool in older releases, you can define
more than one file target. With more than one target, the backup can scale and the throughput of the backup
is higher. This would then result in multiple files in the Azure Storage account as well. In our testing, using
multiple file destinations, one can definitely achieve the throughput that one could achieve with the backup
extensions implemented from SQL Server 2012 SP1 CU4 on. You are also not blocked by the 1 TB limit as
in the native backup into Azure.
However, keep in mind that the throughput also depends on the location of the Azure Storage Account you
use for the backup. An idea might be to locate the storage account in a different region than the VMs are
running in. For example, you would run the VM configuration in West Europe, but put the Storage Account
that you use to back up against in North Europe. That certainly has an impact on the backup throughput and
is not likely to generate a throughput of 150 MB/sec as seems to be possible in cases where the target
storage and the VMs are running in the same regional datacenter.
Managing Backup BLOBs
There is a requirement to manage the backups on your own. Since the expectation is that many blobs are
created by executing frequent transaction log backups, administration of those blobs can easily overburden
the Azure portal. Therefore, it is advisable to use an Azure storage explorer. There are several
good ones available that can help to manage an Azure storage account:
Microsoft Visual Studio with Azure SDK installed (https://azure.microsoft.com/downloads/)
Microsoft Azure Storage Explorer (https://azure.microsoft.com/downloads/)
Third party tools
For a more complete discussion of Backup and SAP on Azure, refer to the SAP Backup Guide for more
information.
Using a SQL Server image out of the Microsoft Azure Marketplace
Microsoft offers VMs in the Azure Marketplace, which already contain versions of SQL Server. For SAP
customers who require licenses for SQL Server and Windows, this might be an opportunity to basically cover
the need for licenses by spinning up VMs with SQL Server already installed. In order to use such images for
SAP, the following considerations need to be made:
The SQL Server non-Evaluation versions incur higher costs than just a Windows-only VM deployed
from Azure Marketplace. See these articles to compare prices:
https://azure.microsoft.com/pricing/details/virtual-machines/windows/ and
https://azure.microsoft.com/pricing/details/virtual-machines/sql-server-enterprise/.
You can only use SQL Server releases that are supported by SAP, like SQL Server 2012.
The collation of the SQL Server instance that is installed in the VMs offered in the Azure Marketplace is
not the collation SAP NetWeaver requires the SQL Server instance to run with. You can change the collation,
though, with the directions in the following section.
Changing the SQL Server Collation of a Microsoft Windows/SQL Server VM
Since the SQL Server images in the Azure Marketplace are not set up to use the collation required by
SAP NetWeaver applications, the collation needs to be changed immediately after the deployment. For SQL Server 2012,
this can be done with the following steps as soon as the VM has been deployed and an administrator is able
to log into the deployed VM:
Open a Windows Command Window as administrator.
Change the directory to C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012.
Execute the command: Setup.exe /QUIET /ACTION=REBUILDDATABASE
/INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=<local_admin_account_name>
/SQLCOLLATION=SQL_Latin1_General_CP850_BIN2
<local_admin_account_name> is the account that was defined as the administrator account when
deploying the VM for the first time through the gallery.
The process should only take a few minutes. To verify that the step produced the
correct result, perform the following steps:
Open SQL Server Management Studio.
Open a Query Window.
Execute the command sp_helpsort in the SQL Server master database.
The desired result should look like:
Latin1-General, binary code point comparison sort for Unicode Data, SQL Server Sort Order 40 on Code Page
850 for non-Unicode Data
If this is not the result, STOP deploying SAP and investigate why the setup command did not work as
expected. Deployment of SAP NetWeaver applications onto a SQL Server instance with a codepage other
than the one mentioned above is NOT supported.
SQL Server High-Availability for SAP in Azure
As mentioned earlier in this paper, there is no possibility to create shared storage, which is necessary for the
usage of the oldest SQL Server high availability functionality. That functionality installs two or more SQL
Server instances in a Windows Server Failover Cluster (WSFC) using a shared disk for the user databases
(and eventually tempdb). This is the long-time standard high availability method, which is also supported by
SAP. Because Azure doesn't support shared storage, SQL Server high availability configurations with a shared
disk cluster configuration cannot be realized. However, many other high availability methods are still possible
and are described in the following sections.
SQL Server Log Shipping
One of the methods of high availability (HA) is SQL Server Log Shipping. If the VMs participating in the HA
configuration have working name resolution, there is no problem and the setup in Azure does not differ from
any setup that is done on-premises. It is not recommended to rely on IP resolution only. With regards to
setting up Log Shipping and the principles around Log Shipping, check this documentation:
https://docs.microsoft.com/sql/database-engine/log-shipping/about-log-shipping-sql-server
To actually achieve high availability, the VMs within such a Log Shipping configuration need to be deployed within the same Azure Availability Set.
Database Mirroring
Database Mirroring as supported by SAP (see SAP Note 965908) relies on defining a failover partner in the SAP connection string. For the Cross-Premises cases, we assume that the two VMs are in the same domain, that the two SQL Server instances run under a domain user context, and that this user has sufficient privileges in the two SQL Server instances involved. Therefore, the setup of Database Mirroring in Azure does not differ from a typical on-premises setup/configuration.
For Cloud-Only deployments, the easiest method is to set up another domain in Azure so that those DBMS VMs (and ideally dedicated SAP VMs) are within one domain.
If a domain is not possible, one can also use certificates for the database mirroring endpoints as described
here: https://docs.microsoft.com/sql/database-engine/database-mirroring/use-certificates-for-a-database-
mirroring-endpoint-transact-sql
A tutorial to set up Database Mirroring in Azure can be found here: https://docs.microsoft.com/sql/database-
engine/database-mirroring/database-mirroring-sql-server
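As an illustration of the failover-partner principle, the sketch below builds a client connection string; the helper function and server names are hypothetical, while "Failover Partner" is the standard SQL Server client connection-string keyword:

```python
def mirrored_connection_string(principal: str, mirror: str, database: str) -> str:
    # Hypothetical sketch: the "Failover Partner" keyword tells the SQL Server
    # client driver where to reconnect when the mirroring pair fails over.
    return (
        f"Server={principal};Failover Partner={mirror};"
        f"Database={database};Integrated Security=SSPI"
    )

# Placeholder server names for illustration only:
conn = mirrored_connection_string("sqlnode1", "sqlnode2", "PRD")
```

SAP systems express the same idea through the profile parameter mentioned in SAP Note 965908 rather than a hand-built connection string.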
SQL Server Always On
As Always On is supported for SAP on-premises (see SAP Note 1772688), it is also supported in combination with SAP in Azure. The fact that you are not able to create shared disks in Azure doesn't mean that one can't create an Always On Windows Server Failover Cluster (WSFC) configuration between different VMs. It only means that you do not have the possibility to use a shared disk as a quorum in the cluster configuration. Hence, you can build an Always On WSFC configuration in Azure and simply not select a quorum type that utilizes a shared disk. The Azure environment those VMs are deployed in should resolve the VMs by name, and the VMs should be in the same domain. This is true for both Azure-only and Cross-Premises deployments. There are some special considerations around deploying the SQL Server Availability Group Listener (not to be confused with the Azure Availability Set), since Azure at this point in time does not allow you to simply create an AD/DNS object as is possible on-premises. Therefore, some different installation steps are necessary to overcome the specific behavior of Azure.
Some considerations using an Availability Group Listener are:
Using an Availability Group Listener is only possible with Windows Server 2012 or higher as guest OS of
the VM. For Windows Server 2012 you need to make sure that this patch is applied:
https://support.microsoft.com/kb/2854082
For Windows Server 2008 R2 this patch does not exist, and Always On would need to be used in the same manner as Database Mirroring, by specifying a failover partner in the connection string (done through the SAP default.pfl parameter dbs/mss/server; see SAP Note 965908).
When using an Availability Group Listener, the database VMs need to be connected to a dedicated Load Balancer. Name resolution in Cloud-Only deployments would either require that all VMs of an SAP system (application servers, DBMS server, and (A)SCS server) are in the same virtual network, or would require maintenance of the etc\hosts file on the SAP application layer in order to get the VM names of the SQL Server VMs resolved. To avoid Azure assigning new IP addresses in cases where both VMs happen to be shut down, assign static IP addresses to the network interfaces of those VMs in the Always On configuration (defining a static IP address is described in this article).
There are special steps required when building the WSFC cluster configuration: the cluster needs a special IP address assigned, because Azure with its current functionality would assign the cluster name the same IP address as the node the cluster is created on. This means a manual step must be performed to assign a different IP address to the cluster.
The Availability Group Listener is going to be created in Azure with TCP/IP endpoints, which are assigned
to the VMs running the primary and secondary replicas of the Availability group.
There might be a need to secure these endpoints with ACLs.
It is possible to deploy a SQL Server Always On Availability Group across different Azure Regions as well. This functionality leverages Azure VNet-to-VNet connectivity (more details).
Summary on SQL Server High Availability in Azure
Given the fact that Azure Storage is protecting the content, there is one less reason to insist on a hot-standby
image. This means your High Availability scenario needs to only protect against the following cases:
Unavailability of the VM as a whole due to maintenance on the server cluster in Azure or other reasons
Software issues in the SQL Server instance
Protecting from manual error where data gets deleted and point-in-time recovery is needed
Looking at matching technologies, one can argue that the first two cases can be covered by Database Mirroring or Always On, whereas the third case can only be covered by Log Shipping.
You need to balance the more complex setup of Always On, compared to Database Mirroring, with the advantages of Always On. Those advantages include:
Readable secondary replicas.
Backups from secondary replicas.
Better scalability.
More than one secondary replica.
General SQL Server for SAP on Azure Summary
There are many recommendations in this guide, and we recommend you read it more than once before planning your Azure deployment. In general, though, be sure to follow these top ten general points for DBMS on Azure:
1. Use the latest DBMS release, like SQL Server 2014, that has the most advantages in Azure. For SQL Server, the minimum is SQL Server 2012 SP1 CU4, which includes the feature of backing up against Azure Storage. However, in conjunction with SAP, we recommend at least SQL Server 2014 SP1 CU1 or SQL Server 2012 SP2 and the latest CU.
2. Carefully plan your SAP system landscape in Azure to balance the data file layout and Azure restrictions:
Don't have too many disks, but have enough to ensure you can reach your required IOPS.
If you don't use Managed Disks, remember that IOPS are also limited per Azure Storage Account
and that Storage Accounts are limited within each Azure subscription (more details).
Only stripe across disks if you need to achieve a higher throughput.
3. Never install software or put any files that require persistence on the D:\ drive as it is non-permanent and
anything on this drive is lost at a Windows reboot.
4. Don't use disk caching for Azure Standard Storage.
5. Don't use Azure geo-replicated Storage Accounts. Use Locally Redundant Storage for DBMS workloads.
6. Use your DBMS vendor's HA/DR solution to replicate database data.
7. Always use name resolution; don't rely on IP addresses.
8. Use the highest database compression possible. For SQL Server this is page compression.
9. Be careful using SQL Server images from the Azure Marketplace. If you use one of them, you must change the instance collation before installing any SAP NetWeaver system on it.
10. Install and configure the SAP Host Monitoring for Azure as described in Deployment Guide.
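The disk-count consideration in point 2 is simple arithmetic. The sketch below is illustrative only; the assumed 500 IOPS per Standard Storage disk is a commonly cited per-disk limit and should be checked against current Azure documentation:

```python
import math

def disks_needed(required_iops: int, iops_per_disk: int = 500) -> int:
    # Illustrative sizing: how many data disks are needed to reach the
    # required IOPS, assuming each disk delivers iops_per_disk.
    return math.ceil(required_iops / iops_per_disk)

# Example: a database needing 2,000 IOPS would need at least 4 such disks.
```

The same arithmetic applies per Storage Account when you are not using Managed Disks, since IOPS are also limited at the Storage Account level.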
icm/server_port_0 = PROT=HTTP,PORT=8000,PROCTIMEOUT=600,TIMEOUT=600
icm/server_port_1 = PROT=HTTPS,PORT=443$$,PROCTIMEOUT=600,TIMEOUT=600
and the links generated in transaction DBACockpit will look similar to this:
Depending on whether and how the Azure Virtual Machine hosting the SAP system is connected via site-to-site, multi-site, or ExpressRoute (Cross-Premises deployment), you need to make sure that ICM uses a fully qualified hostname that can be resolved on the machine from which you are trying to open DBACockpit. See SAP Note 773830 to understand how ICM determines the fully qualified host name depending on profile parameters, and set the parameter icm/host_name_full explicitly if required.
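As a hypothetical example (the hostname below is a placeholder, not a value from this guide), setting the parameter explicitly in the instance profile could look like this:

```
icm/host_name_full = myvm.contoso.com
```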
If you deployed the VM in a Cloud-Only scenario without cross-premises connectivity between on-premises and Azure, you need to define a public IP address and a domain label. The format of the public DNS name of the VM looks like this:
https://mydomainlabel.westeurope.cloudapp.net:44300/sap/bc/webdynpro/sap/dba_cockpit
http://mydomainlabel.westeurope.cloudapp.net:8000/sap/bc/webdynpro/sap/dba_cockpit
IMPORTANT
Like other databases, SAP MaxDB also has data and log files. However, in SAP MaxDB terminology the correct term is
volume (not file). For example, there are SAP MaxDB data volumes and log volumes. Do not confuse these with OS
disk volumes.
This article describes workloads and applications you can replicate with the Azure Site Recovery service.
Post any comments or questions at the bottom of this article, or on the Azure Recovery Services Forum.
Overview
Organizations need a business continuity and disaster recovery (BCDR) strategy to keep workloads and data safe
and available during planned and unplanned downtime, and recover to regular working conditions as soon as
possible.
Site Recovery is an Azure service that contributes to your BCDR strategy. Using Site Recovery, you can deploy
application-aware replication to the cloud, or to a secondary site. Whether your apps are Windows or Linux-based,
running on physical servers, VMware or Hyper-V, you can use Site Recovery to orchestrate replication, perform
disaster recovery testing, and run failovers and failback.
Site Recovery integrates with Microsoft applications, including SharePoint, Exchange, Dynamics, SQL Server and
Active Directory. Microsoft also works closely with leading vendors including Oracle, SAP and Red Hat. You can
customize replication solutions on an app-by-app basis.
Workload summary
Site Recovery can replicate any app running on a supported machine. In addition, we've partnered with product teams to carry out additional app-specific testing.
The following workloads are supported in every listed replication scenario (Hyper-V VMs and VMware VMs, to a secondary site and to Azure):
System Center Operations Manager
SharePoint
Exchange (non-DAG)
Dynamics AX
Protect SharePoint
Azure Site Recovery helps protect SharePoint deployments, as follows:
Eliminates the need and associated infrastructure costs for a stand-by farm for disaster recovery. Use Site
Recovery to replicate an entire farm (Web, app and database tiers) to Azure or to a secondary site.
Simplifies application deployment and management. Updates deployed to the primary site are automatically
replicated, and are thus available after failover and recovery of a farm in a secondary site. Also lowers the
management complexity and costs associated with keeping a stand-by farm up-to-date.
Simplifies SharePoint application development and testing by creating an on-demand, production-like replica environment for testing and debugging.
Simplifies transition to the cloud by using Site Recovery to migrate SharePoint deployments to Azure.
Learn more about protecting SharePoint.
Protect Dynamics AX
Azure Site Recovery helps protect your Dynamics AX ERP solution, by:
Orchestrating replication of your entire Dynamics AX environment (Web and AOS tiers, database tiers,
SharePoint) to Azure, or to a secondary site.
Simplifying migration of Dynamics AX deployments to the cloud (Azure).
Simplifying Dynamics AX application development and testing by creating a production-like copy on-demand,
for testing and debugging.
Learn more about protecting Dynamics AX.
Protect RDS
Remote Desktop Services (RDS) enables virtual desktop infrastructure (VDI), session-based desktops, and
applications, allowing users to work anywhere. With Azure Site Recovery you can:
Replicate managed or unmanaged pooled virtual desktops to a secondary site, and remote applications and
sessions to a secondary site or Azure.
Here's what you can replicate for RDS: Hyper-V VMs, VMware VMs, and physical servers, in each case either to a secondary site or to Azure.
Protect Exchange
Site Recovery helps protect Exchange, as follows:
For small Exchange deployments, such as a single or standalone server, Site Recovery can replicate and fail over to Azure or to a secondary site.
For larger deployments, Site Recovery integrates with Exchange DAGs.
Exchange DAGs are the recommended solution for Exchange disaster recovery in an enterprise. Site Recovery
recovery plans can include DAGs, to orchestrate DAG failover across sites.
Learn more about protecting Exchange.
Protect SAP
Use Site Recovery to protect your SAP deployment, as follows:
Enable protection of SAP NetWeaver and non-NetWeaver production applications running on-premises, by replicating components to Azure.
Enable protection of SAP NetWeaver and non-NetWeaver production applications running in Azure, by replicating components to another Azure datacenter.
Simplify cloud migration, by using Site Recovery to migrate your SAP deployment to Azure.
Simplify SAP project upgrades, testing, and prototyping, by creating a production clone on-demand for testing
SAP applications.
Learn more about protecting SAP.
Protect IIS
Use Site Recovery to protect your IIS deployment, as follows:
Azure Site Recovery provides disaster recovery by replicating the critical components in your environment to a cold remote site or a public cloud like Microsoft Azure. Since the virtual machines with the web server and the database are replicated to the recovery site, there is no requirement to back up configuration files or certificates separately. The application mappings and bindings that depend on environment variables changed post-failover can be updated through scripts integrated into the disaster recovery plans. Virtual machines are brought up on the recovery site only in the event of a failover. In addition, Azure Site Recovery helps you orchestrate the end-to-end failover by providing the following capabilities:
Sequencing the shutdown and startup of virtual machines in the various tiers.
Adding scripts that update application dependencies and bindings on the virtual machines after they have been started up. The scripts can also be used to update the DNS server to point to the recovery site.
Allocating IP addresses to virtual machines pre-failover by mapping the primary and recovery networks, so that scripts do not need to be updated post-failover.
One-click failover for multiple web applications on the web servers, eliminating the scope for confusion in the event of a disaster.
Testing the recovery plans in an isolated environment for DR drills.
Learn more about protecting an IIS web farm.
Next steps
Check prerequisites
Tutorial: Azure Active Directory integration with SAP
Cloud for Customer
7/13/2017 7 min to read
In this tutorial, you learn how to integrate SAP Cloud for Customer with Azure Active Directory (Azure AD).
Integrating SAP Cloud for Customer with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP Cloud for Customer
You can enable your users to automatically get signed-on to SAP Cloud for Customer (Single Sign-On) with
their Azure AD accounts
You can manage your accounts in one central location - the Azure portal
If you want to know more details about SaaS app integration with Azure AD, see what is application access and
single sign-on with Azure Active Directory.
Prerequisites
To configure Azure AD integration with SAP Cloud for Customer, you need the following items:
An Azure AD subscription
A SAP Cloud for Customer single sign-on enabled subscription
NOTE
To test the steps in this tutorial, we do not recommend using a production environment.
To test the steps in this tutorial, you should follow these recommendations:
Do not use your production environment, unless it is necessary.
If you don't have an Azure AD trial environment, you can get a one-month trial here: Trial offer.
Scenario description
In this tutorial, you test Azure AD single sign-on in a test environment. The scenario outlined in this tutorial consists
of two main building blocks:
1. Adding SAP Cloud for Customer from the gallery
2. Configuring and testing Azure AD single sign-on
3. To add new application, click New application button on the top of dialog.
5. In the results panel, select SAP Cloud for Customer, and then click Add button to add the application.
2. On the Single sign-on dialog, select Mode as SAML-based Sign-on to enable single sign-on.
3. On the SAP Cloud for Customer Domain and URLs section, perform the following steps:
a. In the Sign-on URL textbox, type a URL using the following pattern:
https://<server name>.crm.ondemand.com
b. In the Identifier textbox, type a URL using the following pattern: https://<server name>.crm.ondemand.com
NOTE
These values are not real. Update these values with the actual Sign-On URL and Identifier. Contact SAP Cloud for
Customer Client support team to get these values.
7. On the SAP Cloud for Customer Configuration section, click Configure SAP Cloud for Customer to
open Configure sign-on window. Copy the SAML Single Sign-On Service URL from the Quick
Reference section.
d. Azure Active Directory requires the element Assertion Consumer Service URL in the SAML request, so
select the Include Assertion Consumer Service URL checkbox.
e. Click Activate Single Sign-On.
f. Save your changes.
g. Click the My System tab.
h. In the Azure AD Sign On URL textbox, paste the SAML Single Sign-On Service URL that you copied from the Azure portal.
i. Specify whether the employee can manually choose between logging on with user ID and password or
SSO by selecting the Manual Identity Provider Selection.
j. In the SSO URL section, specify the URL that should be used by your employees to sign on to the system.
In the URL Sent to Employee list, you can choose between the following options:
Non-SSO URL
The system sends only the normal system URL to the employee. The employee cannot log on using SSO, and
must use password or certificate instead.
SSO URL
The system sends only the SSO URL to the employee. The employee can log on using SSO. Authentication
request is redirected through the IdP.
Automatic Selection
If SSO is not active, the system sends the normal system URL to the employee. If SSO is active, the system
checks whether the employee has a password. If a password is available, both SSO URL and Non-SSO URL
are sent to the employee. However, if the employee has no password, only the SSO URL is sent to the
employee.
k. Save your changes.
TIP
You can now read a concise version of these instructions inside the Azure portal, while you are setting up the app! After
adding this app from the Active Directory > Enterprise Applications section, simply click the Single Sign-On tab and
access the embedded documentation through the Configuration section at the bottom. You can read more about the
embedded documentation feature here: Azure AD embedded documentation
2. To display the list of users, go to Users and groups and click All users.
3. To open the User dialog, click Add on the top of the dialog.
NOTE
Please make sure that the NameID value matches the username field in the SAP Cloud for Customer platform.
4. Click Add button. Then select Users and groups on Add Assignment dialog.
5. On Users and groups dialog, select Britta Simon in the Users list.
6. Click Select button on Users and groups dialog.
7. Click Assign button on Add Assignment dialog.
Testing single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Cloud for Customer tile in the Access Panel, you should get automatically signed-on to
your SAP Cloud for Customer application. For more information about the Access Panel, see Introduction to the
Access Panel.
Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
Tutorial: Azure Active Directory integration with SAP
Cloud Platform Identity Authentication
9/20/2017 9 min to read
In this tutorial, you learn how to integrate SAP Cloud Platform Identity Authentication with Azure Active Directory
(Azure AD). SAP Cloud Platform Identity Authentication is used as a proxy IdP to access SAP applications using
Azure AD as the main IdP.
Integrating SAP Cloud Platform Identity Authentication with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to the SAP application.
You can enable your users to automatically get signed on to SAP applications with single sign-on (SSO) using their Azure AD accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see what is application access and
single sign-on with Azure Active Directory.
Prerequisites
To configure Azure AD integration with SAP Cloud Platform Identity Authentication, you need the following items:
An Azure AD subscription
An SAP Cloud Platform Identity Authentication single sign-on enabled subscription
NOTE
To test the steps in this tutorial, we do not recommend using a production environment.
To test the steps in this tutorial, you should follow these recommendations:
Do not use your production environment, unless it is necessary.
If you don't have an Azure AD trial environment, you can get a one-month trial.
Scenario description
In this tutorial, you test Azure AD single sign-on in a test environment. The scenario outlined in this tutorial consists
of two main building blocks:
1. Adding SAP Cloud Platform Identity Authentication from the gallery
2. Configuring and testing Azure AD single sign-on
Before diving into the technical details, it is vital to understand the concepts you're going to look at. The SAP Cloud
Platform Identity Authentication and Azure Active Directory federation enables you to implement SSO across
applications or services protected by AAD (as an IdP) with SAP applications and services protected by SAP Cloud
Platform Identity Authentication.
Currently, SAP Cloud Platform Identity Authentication acts as a Proxy Identity Provider to SAP-applications. Azure
Active Directory in turn acts as the leading Identity Provider in this setup.
The following diagram illustrates this:
With this setup, your SAP Cloud Platform Identity Authentication tenant will be configured as a trusted application
in Azure Active Directory.
All SAP applications and services you want to protect this way are subsequently configured in the SAP Cloud Platform Identity Authentication management console.
This means that authorization for granting access to SAP applications and services needs to take place in SAP Cloud
Platform Identity Authentication for such a setup (as opposed to configuring authorization in Azure Active
Directory).
By configuring SAP Cloud Platform Identity Authentication as an application through the Azure Active Directory Marketplace, you don't need to configure the individual claims/SAML assertions and transformations needed to produce a valid authentication token for SAP applications.
NOTE
Currently, only Web SSO has been tested by both parties. Flows needed for App-to-API or API-to-API communication should work but have not been tested yet. They will be tested as part of subsequent activities.
4. In the search box, type SAP Cloud Platform Identity Authentication, select SAP Cloud Platform
Identity Authentication from result panel then click Add button to add the application.
2. On the Single sign-on dialog, select Mode as SAML-based Sign-on to enable single sign-on.
3. On the SAP Cloud Platform Identity Authentication Domain and URLs section, If you wish to configure
the application in IDP initiated mode:
In the Identifier textbox, type a URL using the following pattern: https://<entity-id>.accounts.ondemand.com
NOTE
This value is not real. Update this value with the actual Identifier. Contact SAP Cloud Platform Identity Authentication
Client support team to get this value. If you don't know this value, please follow the SAP Cloud Platform Identity
Authentication documentation on Tenant SAML 2.0 Configuration.
4. Check Show advanced URL settings. If you wish to configure the application in SP initiated mode:
In the Sign On URL textbox, type a URL using the following pattern:
https://<entity-id>.accounts.ondemand.com/admin
NOTE
This value is not real. Update this value with the actual Sign-On URL. Contact SAP Cloud Platform Identity
Authentication Client support team to get this value.
5. On the SAML Signing Certificate section, click Metadata XML and then save the metadata file on your
computer.
6. The SAP Cloud Platform Identity Authentication application expects the SAML assertions in a specific format. You can manage the values of these attributes from the "User Attributes" section on the application integration page. The following screenshot shows an example of this.
7. In the User Attributes section on the Single sign-on dialog, if your SAP application expects an attribute, for example "firstName", add the "firstName" attribute on the SAML token attributes dialog.
a. Click Add attribute to open the Add Attribute dialog.
9. On the SAP Cloud Platform Identity Authentication Configuration section, click Configure SAP Cloud
Platform Identity Authentication to open Configure sign-on window. Copy the Sign-Out URL, SAML
Entity ID, and SAML Single Sign-On Service URL from the Quick Reference section.
10. To get SSO configured for your application, go to SAP Cloud Platform Identity Authentication Administration
Console. The URL has the following pattern: https://<tenant-id>.accounts.ondemand.com/admin . Then, follow
the documentation on SAP Cloud Platform Identity Authentication to Configure Microsoft Azure AD as
Corporate Identity Provider at SAP Cloud Platform Identity Authentication.
11. In the Azure portal, click Save button.
12. Continue the following steps only if you want to add and enable SSO for another SAP application. Repeat
steps under the section Adding SAP Cloud Platform Identity Authentication from the gallery to add another
instance of SAP Cloud Platform Identity Authentication.
13. In the Azure portal, on the SAP Cloud Platform Identity Authentication application integration page,
click Linked Sign-on.
NOTE
The new application will leverage the SSO configuration for the previous SAP application. Please make sure you use the same
Corporate Identity Providers in the SAP Cloud Platform Identity Authentication Administration Console.
TIP
You can now read a concise version of these instructions inside the Azure portal, while you are setting up the app! After
adding this app from the Active Directory > Enterprise Applications section, simply click the Single Sign-On tab and
access the embedded documentation through the Configuration section at the bottom. You can read more about the
embedded documentation feature here: Azure AD embedded documentation
2. To display the list of users, go to Users and groups, and then click All users.
3. To open the User dialog box, click Add at the top of the All Users dialog box.
4. Click Add button. Then select Users and groups on Add Assignment dialog.
5. On Users and groups dialog, select Britta Simon in the Users list.
6. Click Select button on Users and groups dialog.
7. Click Assign button on Add Assignment dialog.
Test single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Cloud Platform Identity Authentication tile in the Access Panel, you should get
automatically signed-on to your SAP Cloud Platform Identity Authentication application. For more information
about the Access Panel, see Introduction to the Access Panel.
Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
Tutorial: Azure Active Directory integration with SAP
Cloud Platform
9/22/2017 9 min to read
In this tutorial, you learn how to integrate SAP Cloud Platform with Azure Active Directory (Azure AD).
Integrating SAP Cloud Platform with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP Cloud Platform.
You can enable your users to automatically get signed-on to SAP Cloud Platform (Single Sign-On) with their
Azure AD accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see what is application access and
single sign-on with Azure Active Directory.
Prerequisites
To configure Azure AD integration with SAP Cloud Platform, you need the following items:
An Azure AD subscription
A SAP Cloud Platform single sign-on enabled subscription
After completing this tutorial, the Azure AD users you have assigned to SAP Cloud Platform will be able to single sign-on into the application (see Introduction to the Access Panel).
IMPORTANT
You need to deploy your own application or subscribe to an application on your SAP Cloud Platform account to test single
sign on. In this tutorial, an application is deployed in the account.
To test the steps in this tutorial, you should follow these recommendations:
Do not use your production environment, unless it is necessary.
If you don't have an Azure AD trial environment, you can get a one-month trial.
Scenario description
In this tutorial, you test Azure AD single sign-on in a test environment. The scenario outlined in this tutorial consists
of two main building blocks:
1. Adding SAP Cloud Platform from the gallery
2. Configuring and testing Azure AD single sign-on
3. To add a new application, click the New application button at the top of the dialog.
4. In the search box, type SAP Cloud Platform, select SAP Cloud Platform from the results panel, and then click
the Add button to add the application.
2. On the Single sign-on dialog, select Mode as SAML-based Sign-on to enable single sign-on.
3. On the SAP Cloud Platform Domain and URLs section, perform the following steps:
a. In the Sign On URL textbox, type the URL used by your users to sign into your SAP Cloud Platform
application. This is the account-specific URL of a protected resource in your SAP Cloud Platform application.
The URL is based on the following pattern:
https://<applicationName><accountName>.<landscape host>.ondemand.com/<path_to_protected_resource>
NOTE
This is the URL in your SAP Cloud Platform application that requires the user to authenticate.
https://<subdomain>.hanatrial.ondemand.com/<instancename>
https://<subdomain>.hana.ondemand.com/<instancename>
b. In the Identifier textbox, type the URL of your SAP Cloud Platform account, using one of the
following patterns:
https://hanatrial.ondemand.com/<instancename>
https://hana.ondemand.com/<instancename>
https://us1.hana.ondemand.com/<instancename>
https://ap1.hana.ondemand.com/<instancename>
c. In the Reply URL textbox, type a URL using the following pattern:
https://<subdomain>.hanatrial.ondemand.com/<instancename>
https://<subdomain>.hana.ondemand.com/<instancename>
https://<subdomain>.us1.hana.ondemand.com/<instancename>
https://<subdomain>.dispatcher.us1.hana.ondemand.com/<instancename>
https://<subdomain>.ap1.hana.ondemand.com/<instancename>
https://<subdomain>.dispatcher.ap1.hana.ondemand.com/<instancename>
https://<subdomain>.dispatcher.hana.ondemand.com/<instancename>
NOTE
These values are not real. Update these values with the actual Sign-On URL, Identifier, and Reply URL. Contact the SAP
Cloud Platform Client support team to get the Sign-On URL and Identifier. You can get the Reply URL from the Trust
Management section, as explained later in this tutorial.
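The Reply URL patterns above share a common shape. As a rough illustration only (the subdomain, instance, and landscape host below are placeholder values, not real SAP Cloud Platform accounts), they can be assembled like this:

```python
# Sketch: assemble an SAP Cloud Platform reply URL from its parts.
# All values used here are placeholders for illustration only.
def reply_url(subdomain, instance, landscape_host="hana.ondemand.com",
              dispatcher=False):
    """Build a reply URL following the patterns shown above."""
    host = f"dispatcher.{landscape_host}" if dispatcher else landscape_host
    return f"https://{subdomain}.{host}/{instance}"

print(reply_url("mytenant", "myapp"))
# https://mytenant.hana.ondemand.com/myapp
print(reply_url("mytenant", "myapp",
                landscape_host="us1.hana.ondemand.com", dispatcher=True))
# https://mytenant.dispatcher.us1.hana.ondemand.com/myapp
```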
4. On the SAML Signing Certificate section, click Metadata XML and then save the metadata file on your
computer.
6. In a different web browser window, sign on to the SAP Cloud Platform Cockpit at
https://account.<landscape host>.ondemand.com/cockpit (for example:
https://account.hanatrial.ondemand.com/cockpit).
7. Click the Trust tab.
8. In the Trust Management section, under Local Service Provider, perform the following steps:
a. Click Edit.
b. As Configuration Type, select Custom.
c. As Local Provider Name, leave the default value. Copy this value and paste it into the Identifier field in
the Azure AD configuration for SAP Cloud Platform.
d. To generate a Signing Key and a Signing Certificate key pair, click Generate Key Pair.
e. As Principal Propagation, select Disabled.
f. As Force Authentication, select Disabled.
g. Click Save.
9. After saving the Local Service Provider settings, perform the following to obtain the Reply URL:
a. Download the SAP Cloud Platform metadata file by clicking Get Metadata.
b. Open the downloaded SAP Cloud Platform metadata XML file, and then locate the
ns3:AssertionConsumerService tag.
c. Copy the value of the Location attribute, and then paste it into the Reply URL field in the Azure AD
configuration for SAP Cloud Platform.
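Steps 9b and 9c can also be done programmatically. The sketch below uses a made-up, minimal metadata document as a stand-in for the file downloaded via Get Metadata (real SAP Cloud Platform metadata contains much more); the ns3: prefix in the downloaded file maps to the standard SAML 2.0 metadata namespace:

```python
# Sketch: extract the Reply URL (the Location attribute of the
# AssertionConsumerService element) from SAML SP metadata.
import xml.etree.ElementTree as ET

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

# Minimal placeholder metadata; the real file is much larger.
sample_metadata = f"""<EntityDescriptor xmlns="{MD_NS}" entityID="https://example.local/sp">
  <SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <AssertionConsumerService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
        Location="https://mytenant.dispatcher.hana.ondemand.com/myapp"
        index="0"/>
  </SPSSODescriptor>
</EntityDescriptor>"""

def reply_url_from_metadata(xml_text):
    """Return the Location of the first AssertionConsumerService element."""
    root = ET.fromstring(xml_text)
    acs = root.find(f".//{{{MD_NS}}}AssertionConsumerService")
    return acs.get("Location")

print(reply_url_from_metadata(sample_metadata))
# https://mytenant.dispatcher.hana.ondemand.com/myapp
```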
10. Click the Trusted Identity Provider tab, and then click Add Trusted Identity Provider.
NOTE
To manage the list of trusted identity providers, you need to have chosen the Custom configuration type in the Local
Service Provider section. For Default configuration type, you have a non-editable and implicit trust to the SAP ID
Service. For None, you don't have any trust settings.
11. Click the General tab, and then click Browse to upload the downloaded metadata file.
NOTE
After uploading the metadata file, the values for Single Sign-on URL, Single Logout URL, and Signing Certificate
are populated automatically.
a. Click Add Assertion-Based Attribute, and then add the following assertion-based attributes:
ASSERTION ATTRIBUTE    PRINCIPAL ATTRIBUTE
firstname              http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname
lastname               http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname
email                  http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
NOTE
The configuration of the Attributes depends on how the application(s) on SCP are developed, that is, which
attribute(s) they expect in the SAML response and under which name (Principal Attribute) they access this attribute in
the code.
b. The Default Attribute in the screenshot is just for illustration purposes. It is not required to make the
scenario work.
c. The names and values for Principal Attribute shown in the screenshot depend on how the application is
developed. It is possible that your application requires different mappings.
Assertion-based groups
As an optional step, you can configure assertion-based groups for your Azure Active Directory Identity Provider.
Using groups on SAP Cloud Platform allows you to dynamically assign one or more users to one or more roles in
your SAP Cloud Platform applications, determined by values of attributes in the SAML 2.0 assertion.
For example, if the assertion contains the attribute "contract=temporary", you may want all affected users to be
added to the group "TEMPORARY". The group "TEMPORARY" may contain one or more roles from one or more
applications deployed in your SAP Cloud Platform account.
Use assertion-based groups when you want to simultaneously assign many users to one or more roles of
applications in your SAP Cloud Platform account. If you want to assign only a single or small number of users to
specific roles, we recommend assigning them directly in the Authorizations tab of the SAP Cloud Platform
cockpit.
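Conceptually, the assertion-based group mapping described above behaves like the following sketch. The group name and attribute value are the examples from the text; the function itself is purely illustrative and not an SAP Cloud Platform API:

```python
# Sketch: map SAML 2.0 assertion attributes to groups, mirroring the
# example above (contract=temporary -> group TEMPORARY).
def groups_for_assertion(attributes, rules):
    """attributes: dict of SAML attribute name -> value.
    rules: dict mapping (attribute, value) pairs to a group name."""
    return [group for (attr, value), group in rules.items()
            if attributes.get(attr) == value]

rules = {("contract", "temporary"): "TEMPORARY"}

print(groups_for_assertion({"contract": "temporary"}, rules))  # ['TEMPORARY']
print(groups_for_assertion({"contract": "permanent"}, rules))  # []
```

A user whose assertion matches a rule lands in that group, and the group in turn carries the application roles assigned to it.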
TIP
You can now read a concise version of these instructions inside the Azure portal, while you are setting up the app! After
adding this app from the Active Directory > Enterprise Applications section, simply click the Single Sign-On tab and
access the embedded documentation through the Configuration section at the bottom. You can read more about the
embedded documentation feature here: Azure AD embedded documentation
3. To open the User dialog box, click Add at the top of the All Users dialog box.
To assign Britta Simon to SAP Cloud Platform, perform the following steps:
1. In the Azure portal, open the applications view, navigate to the directory view, go to Enterprise
applications, and then click All applications.
4. Click the Add button, and then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select Britta Simon in the Users list.
6. Click the Select button in the Users and groups dialog.
7. Click the Assign button in the Add Assignment dialog.
Test single sign-on
The objective of this section is to test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Cloud Platform tile in the Access Panel, you should get automatically signed-on to your
SAP Cloud Platform application.
Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
Tutorial: Azure Active Directory integration with SAP NetWeaver
6/27/2017 5 min to read
In this tutorial, you learn how to integrate SAP NetWeaver with Azure Active Directory (Azure AD).
Integrating SAP NetWeaver with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP NetWeaver
You can enable your users to automatically get signed-on to SAP NetWeaver (Single Sign-On) with their Azure
AD accounts
You can manage your accounts in one central location - the Azure portal
If you want to know more details about SaaS app integration with Azure AD, see what is application access and
single sign-on with Azure Active Directory.
Prerequisites
To configure Azure AD integration with SAP NetWeaver, you need the following items:
An Azure AD subscription
An SAP NetWeaver single sign-on enabled subscription
NOTE
To test the steps in this tutorial, we do not recommend using a production environment.
To test the steps in this tutorial, you should follow these recommendations:
Do not use your production environment, unless it is necessary.
If you don't have an Azure AD trial environment, you can get a one-month trial here.
Scenario description
In this tutorial, you test Azure AD single sign-on in a test environment. The scenario outlined in this tutorial consists
of two main building blocks:
1. Adding SAP NetWeaver from the gallery
2. Configuring and testing Azure AD single sign-on
3. To add a new application, click the New application button at the top of the dialog.
5. In the results panel, select SAP NetWeaver, and then click the Add button to add the application.
2. On the Single sign-on dialog, select Mode as SAML-based Sign-on to enable single sign-on.
3. On the SAP NetWeaver Domain and URLs section, perform the following steps:
a. In the Sign-on URL textbox, type a URL using the following pattern:
https://<your company instance of SAP NetWeaver>
c. In the Reply URL textbox, type a URL using the following pattern:
https://<your company instance of SAP NetWeaver>/sap/saml2/sp/acs/100
NOTE
These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-On URL. We
suggest that you use a unique string value for the Identifier. Contact the SAP NetWeaver Client support team to get
these values.
4. On the SAML Signing Certificate section, click Metadata XML and then save the XML file on your
computer.
6. On the SAP NetWeaver Configuration section, click Configure SAP NetWeaver to open Configure
sign-on window. Copy the SAML Entity ID from the Quick Reference section.
7. To configure single sign-on on SAP NetWeaver side, you need to send the downloaded Metadata XML
and SAML Entity ID to SAP NetWeaver support.
TIP
You can now read a concise version of these instructions inside the Azure portal, while you are setting up the app! After
adding this app from the Active Directory > Enterprise Applications section, simply click the Single Sign-On tab and
access the embedded documentation through the Configuration section at the bottom. You can read more about the
embedded documentation feature here: Azure AD embedded documentation
2. To display the list of users, go to Users and groups and click All users.
3. To open the User dialog, click Add at the top of the dialog.
4. Click the Add button, and then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select Britta Simon in the Users list.
6. Click the Select button in the Users and groups dialog.
7. Click the Assign button in the Add Assignment dialog.
Testing single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP NetWeaver tile in the Access Panel, you should get automatically signed-on to your SAP
NetWeaver application.
Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
Tutorial: Azure Active Directory integration with SAP Business ByDesign
7/25/2017 7 min to read
In this tutorial, you learn how to integrate SAP Business ByDesign with Azure Active Directory (Azure AD).
Integrating SAP Business ByDesign with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP Business ByDesign.
You can enable your users to automatically get signed-on to SAP Business ByDesign (Single Sign-On) with their
Azure AD accounts.
You can manage your accounts in one central location - the Azure portal.
If you want to know more details about SaaS app integration with Azure AD, see what is application access and
single sign-on with Azure Active Directory.
Prerequisites
To configure Azure AD integration with SAP Business ByDesign, you need the following items:
An Azure AD subscription
An SAP Business ByDesign single sign-on enabled subscription
NOTE
To test the steps in this tutorial, we do not recommend using a production environment.
To test the steps in this tutorial, you should follow these recommendations:
Do not use your production environment, unless it is necessary.
If you don't have an Azure AD trial environment, you can get a one-month trial.
Scenario description
In this tutorial, you test Azure AD single sign-on in a test environment. The scenario outlined in this tutorial consists
of two main building blocks:
1. Adding SAP Business ByDesign from the gallery
2. Configuring and testing Azure AD single sign-on
3. To add a new application, click the New application button at the top of the dialog.
4. In the search box, type SAP Business ByDesign, select SAP Business ByDesign from the results panel, and then
click the Add button to add the application.
2. On the Single sign-on dialog, select Mode as SAML-based Sign-on to enable single sign-on.
3. On the SAP Business ByDesign Domain and URLs section, perform the following steps:
a. In the Sign-on URL textbox, type a URL using the following pattern: https://<servername>.sapbydesign.com
b. In the Identifier textbox, type a URL using the following pattern: https://<servername>.sapbydesign.com
NOTE
These values are not real. Update these values with the actual Sign-On URL and Identifier. Contact SAP Business
ByDesign Client support team to get these values.
7. On the SAP Business ByDesign Configuration section, click Configure SAP Business ByDesign to open
Configure sign-on window. Copy the SAML Single Sign-On Service URL from the Quick Reference
section.
8. To get SSO configured for your application, perform the following steps:
a. Sign on to your SAP Business ByDesign portal with administrator rights.
b. Navigate to Application and User Management Common Task and click the Identity Provider tab.
c. Click New Identity Provider and select the metadata XML file that you have downloaded from the Azure
portal. By importing the metadata, the system automatically uploads the required signature certificate and
encryption certificate.
d. To include the Assertion Consumer Service URL into the SAML request, select Include Assertion
Consumer Service URL.
e. Click Activate Single Sign-On.
f. Save your changes.
g. Click the My System tab.
h. Paste the SAML Single Sign-On Service URL, which you copied from the Azure portal, into the Azure
AD Sign On URL textbox.
i. Specify whether the employee can manually choose between logging on with user ID and password or
SSO by selecting Manual Identity Provider Selection.
j. In the SSO URL section, specify the URL that should be used by the employee to logon to the system. In the
URL Sent to Employee dropdown list, you can choose between the following options:
Non-SSO URL
The system sends only the normal system URL to the employee. The employee cannot log on using SSO, and
must use password or certificate instead.
SSO URL
The system sends only the SSO URL to the employee. The employee can log on using SSO. Authentication
request is redirected through the IdP.
Automatic Selection
If SSO is not active, the system sends the normal system URL to the employee. If SSO is active, the system
checks whether the employee has a password. If a password is available, both SSO URL and Non-SSO URL
are sent to the employee. However, if the employee has no password, only the SSO URL is sent to the
employee.
k. Save your changes.
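The three URL Sent to Employee options in step j boil down to a small decision rule. The sketch below is a plain illustration of that logic, not SAP code:

```python
# Sketch: which logon URLs the system sends to the employee,
# following the three options described above.
def urls_sent(option, sso_active, has_password):
    """Return the list of URL kinds sent to the employee."""
    if option == "Non-SSO URL":
        return ["non-sso"]
    if option == "SSO URL":
        return ["sso"]
    # Automatic Selection: depends on SSO state and password availability.
    if not sso_active:
        return ["non-sso"]
    return ["sso", "non-sso"] if has_password else ["sso"]

print(urls_sent("Automatic Selection", sso_active=True, has_password=False))
# ['sso']
print(urls_sent("Automatic Selection", sso_active=True, has_password=True))
# ['sso', 'non-sso']
```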
TIP
You can now read a concise version of these instructions inside the Azure portal, while you are setting up the app! After
adding this app from the Active Directory > Enterprise Applications section, simply click the Single Sign-On tab and
access the embedded documentation through the Configuration section at the bottom. You can read more about the
embedded documentation feature here: Azure AD embedded documentation
2. To display the list of users, go to Users and groups, and then click All users.
3. To open the User dialog box, click Add at the top of the All Users dialog box.
NOTE
Please make sure that NameID value should match with the username field in the SAP Business ByDesign platform.
To assign Britta Simon to SAP Business ByDesign, perform the following steps:
1. In the Azure portal, open the applications view, navigate to the directory view, go to Enterprise
applications, and then click All applications.
5. In the Users and groups dialog, select Britta Simon in the Users list.
6. Click the Select button in the Users and groups dialog.
7. Click the Assign button in the Add Assignment dialog.
Test single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP Business ByDesign tile in the Access Panel, you should get automatically signed-on to your
SAP Business ByDesign application.
Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?
Tutorial: Azure Active Directory integration with SAP HANA
7/27/2017 8 min to read
In this tutorial, you learn how to integrate SAP HANA with Azure Active Directory (Azure AD).
Integrating SAP HANA with Azure AD provides you with the following benefits:
You can control in Azure AD who has access to SAP HANA
You can enable your users to automatically get signed-on to SAP HANA (Single Sign-On) with their Azure AD
accounts
You can manage your accounts in one central location - the Azure portal
If you want to know more details about SaaS app integration with Azure AD, see what is application access and
single sign-on with Azure Active Directory.
Prerequisites
To configure Azure AD integration with SAP HANA, you need the following items:
An Azure AD subscription
A SAP HANA single sign-on enabled subscription
A running HANA instance, either on any public IaaS, on-premises, on Azure VMs, or on SAP HANA Large Instances in Azure
The XSA Administration Web Interface, as well as HANA Studio, installed on the HANA instance
NOTE
To test the steps in this tutorial, we do not recommend using a production environment of SAP HANA. Test the integration
first in a development or staging environment of the application, and then use the production environment.
To test the steps in this tutorial, you should follow these recommendations:
Do not use your production environment, unless it is necessary.
If you don't have an Azure AD trial environment, you can get a one-month trial.
Scenario description
In this tutorial, you test Azure AD single sign-on in a test environment. The scenario outlined in this tutorial consists
of two main building blocks:
1. Adding SAP HANA from the gallery
2. Configuring and testing Azure AD single sign-on
3. To add a new application, click the New application button at the top of the dialog.
4. In the search box, type SAP HANA, select SAP HANA from the results panel, and then click the Add button to add
the application.
2. On the Single sign-on dialog, select Mode as SAML-based Sign-on to enable single sign-on.
3. On the SAP HANA Domain and URLs section, perform the following steps:
b. In the Reply URL textbox, type a URL using the following pattern:
https://<Customer-SAP-instance-url>/sap/hana/xs/saml/login.xscfunc
NOTE
These values are not real. Update these values with the actual Identifier and Reply URL. Contact SAP HANA Client
support team to get these values.
4. On the SAML Signing Certificate section, click Metadata XML and then save the metadata file on your
computer.
NOTE
If the certificate is not active, make it active by selecting the Make new certificate active checkbox in Azure AD.
5. The SAP HANA application expects the SAML assertions in a specific format. The following screenshot shows an
example. Here, the User Identifier is mapped with the ExtractMailPrefix() function of user.mail.
This gives the prefix of the user's email address, which is the unique User ID. It is sent to the SAP HANA
application in every successful response.
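ExtractMailPrefix() simply takes everything before the @ in user.mail. A one-line Python equivalent, for illustration only (the real transformation runs inside Azure AD, not in your code):

```python
# Sketch: the effect of Azure AD's ExtractMailPrefix() claim transformation.
def extract_mail_prefix(mail):
    """Return the local part of an email address, used here as the HANA User ID."""
    return mail.split("@", 1)[0]

print(extract_mail_prefix("BrittaSimon@contoso.com"))  # BrittaSimon
```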
8. To configure single sign-on on the SAP HANA side, log in to your HANA XSA Web Console by browsing to the
respective HTTPS endpoint.
NOTE
In the default configuration, the URL redirects the request to a logon screen, which requires the credentials of an
authenticated SAP HANA database user to complete the logon process. The user who logs on must have the
privileges required to perform SAML administration tasks.
9. In the XSA Web Interface, navigate to SAML Identity Provider and from there, click the + -button on the
bottom of the screen to display the Add Identity Provider Info pane and perform the following steps:
a. In the Add Identity Provider Info pane, paste the contents of the Metadata XML, which you have
downloaded from Azure portal into the Metadata textbox.
b. If the contents of the XML document are valid, the parsing process extracts the information required to
insert into the Subject, Entity ID, and Issuer fields in the General Data screen area, and the URL fields in
the Destination screen area, for example, Base URL and SingleSignOn URL (*).
c. In the Name box of the General Data screen area, enter a name for the new SAML SSO identity provider.
NOTE
The name of the SAML IDP is mandatory and must be unique; it appears in the list of available SAML IDPs that is
displayed, if you select SAML as the authentication method for SAP HANA XS applications to use, for example, in the
Authentication screen area of the XS Artifact Administration tool.
10. Save the details of the new SAML identity provider. Choose Save to save the details of the SAML identity
provider and add the new SAML IDP to the list of known SAML IDPs.
11. In HANA Studio, on the Configuration tab of the system properties, filter the settings by saml and
adjust the assertion_timeout from 10 seconds to 120 seconds.
TIP
You can now read a concise version of these instructions inside the Azure portal, while you are setting up the app! After
adding this app from the Active Directory > Enterprise Applications section, simply click the Single Sign-On tab and
access the embedded documentation through the Configuration section at the bottom. You can read more about the
embedded documentation feature here: Azure AD embedded documentation
2. To display the list of users, go to Users and groups and click All users.
3. To open the User dialog, click Add at the top of the dialog.
NOTE
You can change the external authentication used by the user. External users are authenticated using an external system, for
example a Kerberos system. For detailed information about external identities, contact your domain administrator.
1. Open the SAP HANA Studio as an administrator and enable the DB-User for SAML SSO.
2. Tick the invisible checkbox to the left of SAML and follow the Configure link.
3. Click Add to add the SAML IDP, select the appropriate SAML IDP, and then click OK.
4. Add the External Identity (for example, BrittaSimon here) or choose "Any", and then click OK.
NOTE
If "ANY" check-box is not checked, then the user name in HANA needs to exactly match the name of the user in the
UPN before the domain suffix (i.e. BrittaSimon@contoso.com would become BrittaSimon in HANA).
4. Click the Add button, and then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select Britta Simon in the Users list.
6. Click the Select button in the Users and groups dialog.
7. Click the Assign button in the Add Assignment dialog.
Testing single sign-on
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the SAP HANA tile in the Access Panel, you should get automatically signed-on to your SAP HANA
application. For more information about the Access Panel, see Introduction to the Access Panel.
Additional resources
List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory
What is application access and single sign-on with Azure Active Directory?