
CLOUD ARCHITECTURE & DESIGN

Welcome to the Commvault® cloud architecture and design module.

1
LEARNING GOALS

Understand
▪ Public cloud adoption and usage
▪ Commvault® cloud design principles

Design
▪ Commvault solutions for cloud use-cases
▪ Cloud architecture components

Consider
▪ Key components that influence the design

We will begin this module by providing an introduction to the adoption and usage
of public cloud technologies and the cloud design principles for Commvault®
software.

You will then learn how to design Commvault solutions for specific public cloud
use-cases and how to size the required architecture components.

Finally, we will discuss other key technical components that both you and your
customer should consider and that may influence the design.

2
Cloud Adoption and Usage

Narrative:

To provide a brief introduction to this module, we will discuss how the Public
Cloud is being adopted by customers and some of the significant usage statistics
we have seen in the market over recent months that signal our opportunity.

3
Infrastructure as Programmable, Addressable Resources
Global, Flexible and Unlimited Resources
Transforming The Disaster Recovery Model

Narrative:

The Cloud megatrend is one of the most disruptive and challenging forces
impacting customers’ applications and infrastructure, requiring new business
models and new architecture decisions, which impact how Commvault® solutions
protect and manage their data.

{CLICK}
The cloud has reset customer buying trends of traditional infrastructure assets that
require manual configuration, tracking, capacity predictions and lengthy deployment
cycles. The public cloud provides infrastructure on-demand that is programmed and
addressed by code, resulting in a more flexible model for production, development, test
and disaster recovery solutions. These temporary, disposable units are measured by
their actual, not potential, consumption, offering a truly pay-as-you-go utility model.

{CLICK}
Cloud infrastructure has become global, flexible and unlimited, giving customers the
ability to localize their corporate assets. Commvault's modularised, software-driven
approach is hardware and cloud agnostic, providing the flexibility to meet the most
diverse data protection and disaster recovery requirements while conforming to
these new architecture realities.

{CLICK}
Today, many physical DR environments have less capacity than their Production or
Dev/Test counterparts, resulting in degraded service in the event of a failover.
Moreover, hardware is often re-purposed to fulfil the DR environment's
requirements, resulting in higher than expected maintenance costs. With the
Public Cloud model, this hardware availability and refresh aspect is disrupted by
removing the need to maintain a hardware fleet that can meet both your DR
requirements and sustain your service level agreements.

4
EVOLUTION OF CLOUD

{UPDATED CONTENT AND NARRATIVE FOR CVSA18}

So, why should you care? Well, your customers certainly do, and you only need to
follow the numbers.

Look at the growth of the cloud infrastructure services market: Amazon saw over $5
billion in revenue for Q4 2017, up 45%. Microsoft stated its Azure revenue
increased a whopping 98%, and its commercial cloud business now has an
annualized run-rate of $20 billion.

{click}

Enterprises Want a Multi-Cloud Strategy


• 81 percent of enterprises have a hybrid cloud strategy, holding steady from
2017.
• Private cloud adoption increased from 89 percent to 92 percent
• 96 percent of organizations surveyed now use some form of cloud services

{click}

More Enterprise Workloads Shift to Cloud, Especially Private Cloud

• 25% of enterprises now have more than 1,000 VMs in either AWS or Azure
public clouds.
• Enterprises now run 77 percent of their workloads in the cloud, with more in
private cloud (45 percent) than in public cloud (32 percent).

Perhaps less surprising is that Cloud Security and Spend are the top
two challenges facing organizations in 2018.

{click}

Finally, The Cloud is growing, quickly


• Most enterprises and small to medium businesses have already adopted cloud into
their infrastructure.

5
Cloud Design Principles

Narrative:

In this first section, we will discuss the design principles and architecture options
for organizations planning to leverage the Cloud as part of their Data
Management strategy.

6
NATIVE CLOUD CONNECTIVITY

Commvault® Cloud Storage


▪ Native integration through MediaAgent
▪ Direct communication with Object Storage
▪ REST API interface over HTTPS

Diagram: Clients 1-3 send backup streams of deduplicated data to a MediaAgent, which writes backup data to Cloud Storage and reads it back as restore streams.

{VERY MINOR MOD TO CONTENT FOR CVSA18}

Narrative:

The Cloud Storage Connector is the native integration within the MediaAgent
module that directly communicates with Object Storage providers such as Amazon
S3 and Microsoft Azure, without requiring translation devices, gateways, hardware
appliances or VTLs. This Connector works by communicating directly with Object
Storage’s REST API interface over HTTPS, allowing for Media Agent deployments
on both Virtual and Physical compute layers to perform read/write operations
against Cloud Storage targets, reducing the Data Management solution’s TCO.
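As a concrete illustration of the kind of REST interaction involved, the short Python sketch below writes and reads an object against an S3-compatible endpoint over HTTPS using boto3. This is not the Commvault connector itself; the bucket name, object key and credentials handling are illustrative assumptions only.

import boto3

# Illustrative only: an S3-compatible object storage endpoint accessed over HTTPS,
# the same style of REST interface a native cloud storage connector talks to.
s3 = boto3.client("s3")  # credentials resolved from the environment or instance profile

bucket = "example-backup-target"   # hypothetical bucket name

# PUT an object (analogous to writing a backup chunk)
s3.put_object(Bucket=bucket, Key="chunks/0001.bin", Body=b"backup payload")

# GET the object back (analogous to a restore read)
obj = s3.get_object(Bucket=bucket, Key="chunks/0001.bin")
print(len(obj["Body"].read()), "bytes restored")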

{CLICK}For more information on supported vendors, please refer to this


comprehensive list that can be found on the Commvault® documentation website
or by following the resources link at the top of the screen.

/Notes
Cloud Storage Support:
http://documentation.commvault.com/commvault/v11/article?p=features/cloud_
storage/cloud_storage_support.htm

7
GENERAL CLOUD DESIGN PRINCIPLES

1 SCALABILITY

2 RECOVERY

3 AUTOMATION

Narrative:

The following are the three design principles you should be aware of and we will
discuss these in more depth momentarily.

One. {CLICK} Scalability… Applications grow over time, and a Data Management
solution needs to adapt with the change rate to protect the dataset quickly and
efficiently, while maintaining an economy of scale that continues to generate
business value out of that system.

Two. {CLICK} Recovery… As part of any Data Management solution, it is important


to ensure that you design for Recovery in order to maintain and honor the RPO
and RTO requirements identified for your individual applications.

Three. {CLICK} Automation… The cloud encourages automation, not just because
the infrastructure is programmable, but the benefits in having repeatable actions
reduces operational overheads, bolsters resilience through known good
configurations and allows for greater levels of scale.

8
1. SCALABILITY

De-duplication Building Blocks
▪ Scalability for both DDB & Network capacity

Client-side De-Duplication
▪ Network savings over cloud bandwidth

Narrative:

Commvault® addresses scalability in Cloud architecture by providing these key


constructs:

{CLICK}
Commvault maintains a Building Block approach for protecting datasets,
regardless of the origin or type of data. These blocks are sized based on the Front-
End TB (FET), or the size of data they will ingest, pre-compression/de-duplication.
This provides clear scale out and up guidelines for the capabilities and
requirements for each Media Agent. De-duplication Building Blocks can also be
grouped together in a grid, providing further deduplication scale, load balancing
and redundancy across all nodes within the grid.

{CLICK}
While Client-side De-duplication may be seen as a way to improve the ingest
performance of the data mover (MediaAgent), it has the secondary effect of
reducing the network traffic stemming from each Client communicating through
to the data mover. In public cloud environments where network performance can
vary, the use of Client-side De-duplication can reduce backup windows and drive
higher scale, freeing up bandwidth for both Production and Backup network
traffic.
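To make the network-saving effect concrete, here is a minimal sketch of block-level deduplication: the client hashes fixed-size blocks and only sends blocks whose signatures have not been seen before. It illustrates the principle only and is not Commvault's deduplication implementation; the block size and the in-memory signature set are illustrative assumptions standing in for the deduplication database (DDB).

import hashlib

BLOCK_SIZE = 128 * 1024          # illustrative block size
known_signatures = set()         # stands in for the deduplication database (DDB)

def blocks_to_send(data: bytes):
    """Yield only the blocks whose signatures have not been seen before."""
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        signature = hashlib.sha256(block).hexdigest()
        if signature not in known_signatures:
            known_signatures.add(signature)
            yield signature, block   # unique block: transfer over the network
        # duplicate block: only the signature/reference needs to be recorded

payload = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE   # 4 blocks, 3 identical
sent = list(blocks_to_send(payload))
print(f"{len(sent)} of 4 blocks actually transferred")   # -> 2 of 4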

9
2. DESIGN FOR RECOVERY

Crash Consistency vs. Application Consistency
▪ Crash Consistent: no application awareness of the snap process; snapshot performed at the volume level
▪ Application Consistent: communication with the application; the application quiesces data before the snapshot is performed at the volume level

Storage-level Replication vs. Discrete Copies
▪ Active Data Center (Production) replicating to a Passive Data Center (DR) with a secondary CommServe® (DR) and a secondary MediaAgent (DR)

Deciding What to Protect

Narrative:

Using native cloud provider tools, such as creating a snapshot of a Cloud-based
instance, is easy to orchestrate, but it does not deliver the application consistency
potentially required by a SQL or Oracle database residing within the instance, and
may even require additional scripting or manual handling to deliver a successful
application recovery.

{CLICK}
While Crash-consistency within a recovery point may be sufficient for a file-based
dataset or EC2 instance, it may not be appropriate for an Application such as
Microsoft SQL, where the database instance needs to be quiesced to ensure the
database is valid at time of backup. Commvault® supports both Crash and
Application consistent backups, providing flexibility in your design.

{CLICK}
Many cloud providers support replication at the Object Storage layer from one
region to another; however, if bad or corrupted blocks are replicated to the
secondary region, your recovery points become invalid. While Commvault can
support a Replicated Cloud Library model, we recommend you configure
Commvault to create an independent copy of your data, whether to another
region or another cloud provider, to address that risk. De-duplication is also vital
as part of this process, as it means that Commvault can minimize the cross-
region/cross-provider copy by ensuring only the unique changed blocks are
transferred over the wire.

{CLICK}
Not all workloads within the Cloud need protection. For example, in microservices
architectures, or any architecture involving worker nodes that write their valuable
output to an alternate source, there is no value in protecting the worker nodes
themselves. Instead, protecting the gold images and the output of those nodes
provides the best value for the business.

10
3. AUTOMATION
Programmatic Data Management

Workload Auto-Detection and Auto-Protection

Diagram: a sample workflow combining form input, condition checks, decisions, loops, user input, queries and notifications, with variables passed between actions.

Self-Service Access and Restore

Narrative:

{VERY MINOR MODS TO CONTENT AND NARRATIVE FOR CVSA18}

Commvault® provides automation capabilities through three key areas.

{CLICK} Commvault provides a robust Application Programming Interface that


allows for automated control over Deployment, Configuration, Backup and
Restore activities within the solution. Regardless of your design goals, Commvault
can provide the controls to reduce administrative overhead and integrate
seamlessly within the customer's ecosystem.

{CLICK} The Auto-Detection and Auto-Protection features within Commvault’s


Data Protection Agents remove the requirement for a backup or cloud
administrator to manually update the solution to protect the newly created
datasets, be it a Virtual machine inside AWS or a database in a SQL instance. This
helps improve the operational excellence and resiliency within a cloud
infrastructure, ensuring new data is protected and recovery points maintained.
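The sketch below is a simplified illustration, using boto3, of the kind of discovery an auto-detection feature performs: listing running EC2 instances and flagging those not yet in a (hypothetical) protected set. It is conceptual only and does not reflect how the Commvault agent implements the feature; the region and the instance IDs are illustrative assumptions.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an illustrative assumption
already_protected = {"i-0123456789abcdef0"}          # hypothetical set of known instance IDs

# Enumerate running instances; anything new is a candidate for auto-protection.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            if instance["InstanceId"] not in already_protected:
                print("New instance detected, add to protection:", instance["InstanceId"])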

{CLICK} Commvault’s self-service interfaces empower users to access their
datasets through a Web-based interface, allowing security-mapped access to
individual files & folders within the protected dataset and freeing up administrators
to work on critical tasks.

11
Cloud Solutions with Commvault®

Narrative:
We will now discuss the three primary use cases when leveraging Commvault®
with the cloud.

12
BACKUP/ARCHIVE TO THE CLOUD

Scenario / Suitability
▪ Offsite storage / tape replacement
▪ Native, Direct connectivity to supported object
storage endpoints

Requirements
▪ Minimum 1 x MediaAgent on premise
– No VM in cloud required for backup & recovery to the cloud
▪ 1 x DDB for the cloud library (hosted on the on-premise MediaAgent)
– Additional DDB required for local copy if desirable
▪ Direct internet connection or dedicated network to cloud provider for best performance
– AWS Direct Connect, Azure ExpressRoute

Diagram: the on-premise Backup Store sends a Cloud Copy (REST, Encrypted, Compressed, Deduplicated, Selective, Online) using native connectivity (no gateway/hardware required).

Narrative:
The first use case is protecting data at the primary on-premise location by writing
directly to an external cloud provider’s storage solution, or retaining a local copy
and replicating the backup/archive data (either in full, or only selective portions of
that data) into an external cloud provider’s storage service.

This particular use case serves as an off-site storage repository or tape replacement,
and note that there is no requirement for an additional hardware gateway.

There is no DR to the Cloud requirement, but this use-case can be extended if


required as we will discuss momentarily.

13
DISASTER RECOVERY IN A PUBLIC CLOUD
Diagram: (1) premise-based Virtual Machines are protected to a local cache/backup copy; (2) the backup copy is replicated to a DR copy in the cloud, featuring DASH Copy backups into cloud IaaS (hybrid to DR); (3) VMs are restored and converted into AWS/Azure instances, or databases/files are restored to cloud server/MediaAgent workloads; (4) applications come online and DR is complete, driven by DR business processes (workflow).

Scenario / Suitability
▪ Off-site storage & cold DR site in the cloud
▪ VM restore & convert
▪ Database/files
▪ DR run-book as code

Requirements
▪ Minimum 1 x MediaAgent on premise
▪ Minimum 1 x MediaAgent in cloud
– Powered-on for recovery operations only
▪ Dedicated network to cloud provider highly recommended

{UPDATED CONTENT AND NARRATIVE FOR CVSA18}

Narrative:

The second use case is Disaster Recovery in a Public Cloud. In this scenario we are
providing operational recovery of primary site applications to a secondary site
from an external cloud provider.

An agent–in-guest approach allows for the recovery of a wide variety of operating


systems and applications. These can be captured at the primary site and
replicated to the cloud based MediaAgent in a deduplicated efficient manner.
Once replicated, the data can be held and restored in the event of a DR scenario
or automatically recovered to existing instances for more critical workloads.

Another benefit of this design is that databases and or files can be restored out-
of-place whether on-demand or scheduled to refresh the DR targets. In addition
you can turn your DR run book into a Workflow for easy, simplified DR
automation, be it for test or real DR scenarios.

Please click next when you have reviewed the solution diagram and are ready to
continue…

14
DISASTER RECOVERY IN A PUBLIC CLOUD
Automated Provisioning & Incremental Recovery (Low RTO)

Diagram: on-premise hypervisor workloads are replicated to cloud infrastructure via DASH Copy.

Scenario / Suitability
▪ Virtual Machine Conversion
– Various configurations supported (refer to documentation)
▪ Live Sync Replication
– Flexible, superior RPO/RTO
– Failover / Failback
▪ CDR Replication
– Lowest RPO/RTO
– Requires destination VM to be running all the time

Requirements
▪ Minimum 1 x MediaAgent/VSA on premise
▪ Minimum 1 x MediaAgent/VSA in cloud
– Powered-on for recovery operations only
▪ Dedicated network to cloud provider highly recommended

{NEW SLIDE CVSA18}

In addition to an agent-in-guest approach, local Virtual Machines running on
VMware and Hyper-V can be converted into virtual machine instances running in
AWS or Azure. At the time of writing this training, Virtual Machine conversion is also
supported for VMware to Oracle Cloud, VMware to OpenStack, Amazon to Azure,
and Azure Classic to Azure Resource Manager.

{CLICK} Replicating VM workloads with Live Sync allows you to replicate VMs to
public cloud infrastructure. Live Sync combines the VM conversion feature with
incremental replication to provide a DR solution utilizing on-demand cloud
infrastructure. Because Live Sync to cloud integrates with DASH Copy, highly efficient
WAN replication is possible.

{CLICK} Finally, Commvault® Continuous Data Replicator (CDR) allows near real-time
continuous data replication for critical workloads that must be recovered in
adherence to Service Level Agreements that exceed the capabilities associated
with Live Sync operations. CDR requires a similarly sized VM that must be running
all the time to receive application changes.

A good design strategy is to identify multiple approaches depending on the
business RTO/RPO requirements and implement them accordingly, while also
considering the limitations and requirements specific to the cloud vendor. For
example, Tier 1 applications may be a better fit for near-continuous replication
using Commvault’s CDR technology, while Tier 2 applications could make use of
Live Sync (VMs, Files, DB), and Tier 3 apps could use on-demand VM conversion
from cloud storage when needed.

Commvault® is continuously evolving its support for both traditional Hypervisors


and Public cloud providers so be sure to check out the Commvault documentation
website for the latest features and supportability.

15
MIGRATION TO THE PUBLIC CLOUD
Diagram: as in the DR use case, premise-based Virtual Machines are protected to a local backup copy (1), the copy is replicated to the cloud with DASH Copy (2), VMs are restored and converted into AWS/Azure instances or databases/files are restored to cloud server workloads (3), and the applications are brought online (4).

Scenario / Suitability
▪ Lift & Shift Virtual Machines
▪ Application Migration Feature
▪ Application Restore Out-of-Place

Requirements
▪ Minimum 1 x MediaAgent on premise to protect and capture workloads
▪ Minimum 1 x MediaAgent (& DDB) in cloud to protect workloads post-migration/performance
▪ Dedicated network to cloud provider highly recommended
▪ Application Migration Feature: check specific requirements

{UPDATED CONTENT AND NARRATIVE FOR CVSA18}

Narrative:

The next use case we will cover here is migration to the public cloud.

Commvault® can assist in the migration of application workloads into the public
cloud, either at the VM container level or at the application level, all while
providing protection during the migration lifecycle while workloads are in a
transition phase between on premise and public cloud.

For supported public cloud providers such as AWS and Azure, workloads running
on VMware or Hyper-V can be lifted and shifted directly to the cloud by using a
restore-and-convert process as part of a phased migration cut-over strategy.

The Commvault application migration feature is now supported with both SQL
and Oracle. This feature can be used to synchronize a remote copy of the
database to the cloud for test or demonstration purposes; additionally, it can
provide the baseline plus any incremental changes as part of an automated
migration lifecycle process.

Finally, the Commvault Data Protection Agent for supported workloads can be
leveraged to restore the target application out-of-place to a warm instance
residing in the cloud.

Please take a moment to review the solution diagram and requirements on this
slide before moving on to the next section.

16
PROTECTION IN A PUBLIC CLOUD

Scenario / Suitability
▪ Data Protection for cloud-based IaaS workloads
▪ Agent-less instance protection (supported providers only)
▪ DASH Copy to another region, cloud or back to on premise

Diagram: local copies within the cloud provider are written to Object Storage, with copies to on-premises DC and on-premises DC 2.

Requirements
▪ VSA + MA deployed on a proxy within cloud provider for agentless backup (supported providers only)
▪ Agents deployed in each VM for non-supported providers and apps requiring application consistency
▪ Minimum 1 x MediaAgent in cloud and (optional) 1 x MediaAgent for secondary site (whether cloud or on premise)
▪ 1 x DDB hosted on MediaAgent
▪ Dedicated network from cloud provider to on premise highly recommended when replicating back to on premise

{MINOR UPDATE TO NARRATIVE FOR CVSA18}

Narrative:

The final use case is a design providing operational recovery for active workloads
and data within an external provider’s cloud.

Commvault® provides an agent-less and automated protection mechanism


through its Virtual Server Agent for supported Public Cloud providers. At the time
of writing this training those are currently AWS, Azure and Oracle VM instances
only, but be sure to check out the Commvault documentation website for the
latest features and supportability. We will discuss the protection methodologies
for the two most common providers, AWS and Azure in the next section.

{CLICK} In addition to the agentless, instance level protection through VSA,


customers can also utilize Commvault data protection agents, where appropriate,
on certain instances. This provides a flexible data protection approach and offers
increased functionality such as application consistency, granular application
recovery, and block level backup for example.

Finally, DASH Copy provides complete data mobility, allowing replication to
another geographical region, be it with the same infrastructure as a service
provider, a different provider or back to on premise sites.

Please take a moment to review the solution diagram and requirements on this
slide before moving on to the next section.

17
Amazon

{NEW CONTENT FOR CVSA18}

In this next section we will discuss how you can protect virtual machines and their
data running inside AWS.

18
AWS PROTECTION
▪ Agent-In-Guest (Streaming)
– Application-consistent backups and recovery through the data agent, leveraging Client-side Deduplication
– Minimum 1 x Data Agent per instance for the intended dataset (i.e. File, SQL)
▪ Snapshot-based Agent-In-Guest (EBS IntelliSnap®)
– Application-consistent, fast storage-based snapshots with lower RTO/RPO
– Minimizes load on the production server
▪ Agent-less EC2 Instance Protection (VSA)
– Agent-less protection & recovery for EC2 instances & file-level data
– Crash consistent, block-level backup; no CBT support currently
– Higher latency due to the snap-and-clone method mounted to the VSA/MediaAgent in AWS; keep VSA-protected VMs to roughly 150-200GB
▪ Agent-less EC2 Instance Protection (VSA IntelliSnap)
– Creates EBS snapshots; snapshots remain available for longer than the backup window, so retain only one snapshot to reduce costs
– Backup copy can extract snapshot data into S3 for long-term retention at a lower cost point
– Secondary MediaAgent can provide a DR copy in a different geographic region
▪ Agent-less RDS Instance Protection
– Protects Amazon RDS instances via the RDS API endpoint with application-consistent snapshots
– Snap revert or replication to an alternate region; extraction to alternate targets not currently supported

Diagram: an EC2-based MediaAgent protects EBS volumes attached to Oracle DB, MySQL and MS SQL instances, mounts a cloned VM's volumes for agent-less backup, writes backup copies to an S3 bucket and coordinates with the RDS API endpoint.

REFER TO DOCUMENTATION.COMMVAULT.COM

{NEW CONTENT FOR CVSA18}

Firstly, an Agent-in-guest approach is used on the production workload, with data
protected to the MediaAgent residing in AWS, leveraging Client-side Deduplication
to reduce the network consumption within the Cloud. It is recommended to use
this approach when you require application-consistent backups and granular-level
restoration features for applications.

{CLICK} In addition to the standard agent-in-guest streaming approach, for
supported configurations the agent can integrate with the EBS snapshot facility.
Use the EBS snapshot-based approach when you need fast recovery and
application-consistent backups with a shorter backup window. This approach also
reduces load on the production servers compared to the streaming-based approach.

{CLICK} The Commvault® VSA can deliver agent-less protection for EC2 instances
and file-level data. This process utilizes a block-level capture of the EC2 instances
and their attached EBS volumes. The agentless backups are crash-consistent; use
the app-aware option or an agent in guest to gain application consistency. You
should also be aware of the performance impact of hydrating blocks from the
clone that is presented to the VSA proxy. This occurs regardless of whether a full
or incremental backup is being performed, and without CBT support performance
can become an issue. It is therefore recommended that the VSA be limited to
protecting VMs under 200GB in size, with agents used for anything larger.

{CLICK} IntelliSnap®-based snapshots through the Commvault VSA can mitigate
some of the performance overheads of instance-level protection, enabling you to
create point-in-time snapshots of instance data. You can back up larger numbers of
instances more quickly and perform multiple backups each day. Keep in mind that
snapshots remain available beyond the backup window and while backup copies
are processed. To reduce costs, retain snapshots only for as long as they are
required; alternatively, customers may prefer to store only the last snapshot and
create backup copies to S3 storage for longer-term retention.
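As a simple illustration of the snapshot-retention point (not of Commvault's own retention handling), the boto3 sketch below lists the EBS snapshots owned by the account for a given volume and deletes all but the most recent one. The volume ID and region are illustrative assumptions; in practice retention should be driven by your configured storage policies.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # illustrative region
volume_id = "vol-0123456789abcdef0"                   # hypothetical volume

resp = ec2.describe_snapshots(OwnerIds=["self"],
                              Filters=[{"Name": "volume-id", "Values": [volume_id]}])
snapshots = sorted(resp["Snapshots"], key=lambda s: s["StartTime"], reverse=True)

# Keep the newest snapshot and remove the rest to avoid paying for stale snapshots.
for snap in snapshots[1:]:
    print("Deleting", snap["SnapshotId"], "created", snap["StartTime"])
    ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])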

{CLICK} Finally, you can use the Virtual Server Agent for Amazon to protect Amazon
RDS instances. The Amazon Relational Database Service (RDS) provides
configuration and scaling resources to host relational databases, for example
Oracle, in the Amazon Web Services (AWS) cloud. The EC2 proxy coordinates with
the RDS API to create application-consistent snapshots and control snap revert or
replication across regions. Please note that it is currently not possible to extract
and de-duplicate RDS data into an S3 bucket or alternative target.

{CLICK} For any of the agent-in-guest or VSA approaches described here, remember
to check the system requirements section in the Commvault documentation to
determine if the data protection agent supports the required applications and
configuration you are proposing.

19
Microsoft Azure

{NEW CONTENT FOR CVSA18}

We will now discuss protection in Microsoft Azure.

20
MICROSOFT AZURE PROTECTION

▪ Agent-In-Guest (Streaming)
– Agent-in-guest approach for Azure instances & file-level data
– Required for application-consistent backups; leverages Client-side Deduplication
– Minimum 1 x Data Agent per instance for the intended dataset (i.e. File, SQL)
▪ Agent-less Azure Instance Protection (VSA)
– Agent-less protection & recovery for Azure instances & file-level data
– Creates snapshots that are mounted to the VSA/MediaAgent; good read performance
– Crash consistent, block-level backup; CBT support provides good incremental performance
– Secondary MediaAgent can provide a DR copy in a different geographic region
▪ Agent-less Azure Instance Protection (VSA IntelliSnap®)
– Utilizes BLOB snapshot copy, reducing the protection and recovery window dramatically
– Crash consistent; CBT support for backup copies to increase performance
▪ Azure DBaaS support
– Protection and recovery for SQL Azure databases (instance or DBaaS) via the SQL Agent
– Recover on-premises to Azure, Azure to on-premises, and SQL Azure to SQL Azure
– Full backup only; lower cost option for longer-term retention using existing infrastructure

Diagram: a cloud-based MediaAgent protecting VM instances and their attached volumes.

{NEW CONTENT FOR CVSA18}

{CLICK}
Firstly, an Agent-in-guest approach is used on the production workload, with data
protected to the MediaAgent residing in Azure, leveraging Client-side Deduplication
to reduce the network consumption within the Cloud. It is recommended to use
this approach when you require application-consistent backups and granular-level
restoration features for applications.

{CLICK}
The Commvault® VSA for Azure delivers an agent-less, block-level capture of Azure
VM instances and their attached block volumes. Unlike AWS, Azure provides
Changed Block Tracking (CBT), an optional mechanism that helps accelerate
incremental backup performance. Once Azure snapshots are created, they are
mounted onto the Commvault VSA proxy and the data is read. In Azure, unlike
AWS, snapshots can be mounted directly on an Azure instance. Also, since data
blocks are shared between a volume and its snapshot, read performance is good,
as the process is similar to reading data directly from a volume.

{CLICK}
IntelliSnap®-based snapshots using Azure BLOB storage through the Commvault
VSA help reduce backup windows considerably, providing fast, snapshot-based
restoration capability. IntelliSnap with Azure also allows for the use of CBT
(Changed Block Tracking) when making a backup copy, which can accelerate
incremental backup performance considerably. This enables you to create
point-in-time snapshots of instance data, back up larger numbers of instances
more quickly, and perform multiple backups each day.

{CLICK} Finally, Commvault® can help customers protect and stage databases
stored in the Azure cloud, whether living in a VM instance or, in the case of SQL,
within Azure Database as a Service. A customer needing longer-term retention
using existing infrastructure with smaller up-front charges is a good use case for
this agent. The SQL agent also allows restoring databases from on premises to
SQL Azure, from SQL Azure to an on-premises SQL instance, and from one SQL
Azure server to another. With these capabilities in mind, it can also be used as a
migration tool.

21
Architecture Sizing & Considerations

22
ARCHITECTURE SIZING

▪ Latest architecture sizing and


considerations

▪ Best practices

▪ Specifications for:
▪ CommServe®
▪ MediaAgents
▪ Storage

Narrative:

To access the latest architecture sizing information and considerations please


refer to the Commvault® Public Cloud Architecture Guide available from the
Commvault Documentation Website or via the resources link at the top of this
screen. This guide is updated regularly by the Commvault Cloud Business Unit and
provides specifications for the core components required when designing
solutions that leverage a public cloud infrastructure.

/Notes
Link to PCAG
http://documentation.commvault.com/commvault/v11/article?p=products/vsa/r
_vsa_white_papers.htm

23
NETWORKING

Virtual Private Cloud / Azure Virtual Network

AWS VPC Example Azure Virtual Network Example

Narrative:
AWS and Azure both have the capability to establish an isolated logical network,
referred to as a Virtual Private Cloud (VPC) within AWS and as an Azure Virtual
Network (AVN) within Azure.

Instances/Virtual Machines deployed within a VPC/AVN by default have no access


to the Public Internet, and utilize a subnet of the Customer’s choice. Typically,
VPC/AVNs are used when creating a backbone between Virtual Machines, and
also when establishing a dedicated network route from a Customer’s existing on
premise network directly into AWS/Azure via AWS Direct Connect or Azure
ExpressRoute, which we will discuss next.
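For context on how programmable such a network is, the boto3 sketch below creates a small VPC and a private subnet; the region, names and CIDR ranges are purely illustrative assumptions, and the Azure equivalent would create an Azure Virtual Network through the Azure SDK or an ARM template.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # illustrative region

# Create an isolated logical network (no Internet access until a gateway is attached).
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")         # illustrative address range
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out a subnet of the customer's choice, e.g. for backup infrastructure such as a MediaAgent.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24")
print("Created", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])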

24
NETWORKING
BRIDGING ON PREMISE INFRASTRUCTURE – VPN OR DIRECTCONNECT/EXPRESSROUTE

VPN Connection Or AWS DirectConnect /


Azure ExpressRoute

VPN – encrypted
over Public Internet

Backup Store

Cloud Copy

Azure ExpressRoute Peering example

Narrative:
Customers may find a need to bridge their existing On-Premise infrastructure to
their Public Cloud provider, or bridge systems and workloads running between
different Cloud providers to ensure a common network layer between compute
nodes and storage endpoints. This is particularly relevant to solutions where you
wish to Backup/Archive directly to the Cloud, or DASH Copy existing
backup/archive data to Object Storage within a Cloud provider. To provide this,
there are two primary choices available:

{CLICK} A VPN
Connection – where network traffic is routed between network segments over
Public Internet, encapsulated in a secure, encrypted tunnel over the Customer’s
existing Internet Connection. As the connection will be shared, bandwidth will be
limited and regular data transfer fees apply as per the Customer’s current contract
with their ISP.

{CLICK} Alternatively, AWS Direct Connect or Azure ExpressRoute provides a
dedicated network link at the Customer’s edge network at an existing On-Premise
location, giving secure routing into an AWS Virtual Private Cloud / Azure Virtual
Network. Typically, these links are cheaper when compared to a Customer’s
regular internet connection, as pricing is charged as a monthly dual-port fee, with
all inbound and outbound data transfers included free of charge, and bandwidth
from 10Mbit/s to 10Gbit/s.

25
DATA SECURITY

In-flight At-rest HTTPS Proxies

HTTPS Proxy

Cloud Copy

Cloud Copy

✓ Encrypt between nodes ✓ Encrypt all data at rest ✓ HTTP(S) may impact
network performance

Narrative:

The following are three data security considerations to be aware of.

{CLICK}
By default, all communication with Cloud Libraries utilizes HTTPS, which ensures
that all traffic is encrypted while in-flight between the MediaAgent and the Cloud
Library end-point, but traffic between Commvault® nodes is not encrypted by
default. It is recommended that any network communications between
Commvault modules routing over public Internet space be encrypted to ensure
data security. This can be achieved by using standard Commvault firewall
configurations (Two-Way & One-Way).

{CLICK}
Data stored in a public Cloud is usually on shared infrastructure logically
segmented to ensure security. Commvault would recommend adding an extra
layer of protection by encrypting all data at rest.

{CLICK}
Please take note of any HTTP(S) proxies between Media Agents and endpoints,
whether via public Internet or private space, as this may have a performance
impact upon any backup/restore operations to/from an Object Storage endpoint.
Where possible, Commvault should be configured to have direct access to an
Object Storage endpoint.

26
DATA SEEDING

▪ Process of moving an initial set of data from its origin to a public cloud provider

“Over-the-wire” Drive Seeding

Narrative:

If your customer is planning to store data in a public cloud then you should always
discuss how they will get that data to the destination initially and how long it will
take. This process is known as data seeding and involves moving the initial set of
data from its original location to the target storage destination in the cloud.

There are two types of data seeding methods to consider.


{CLICK}
Over-the-wire seeding is usually performed initially in small logical groupings of
systems to maximize network utilization and complete the data movement per
system more quickly. Some organizations will purchase “burst” bandwidth from
their network providers to expedite the seeding process.
Major cloud providers offer a direct network connection service option for
dedicated network bandwidth from your site to their cloud such as AWS Direct
Connect or Azure ExpressRoute. There is a useful chart contained in the Cloud
architecture guide that shows payload transfer time for various data sizes and
speeds. Please click on the resources link at the top of this window to download
it.
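If the chart is not to hand, the arithmetic is straightforward: transfer time is roughly data size divided by effective bandwidth. The small sketch below works this out for a few payload sizes, assuming the link can be fully utilised; real-world throughput will be lower due to protocol overhead and contention.

def transfer_hours(terabytes: float, megabits_per_second: float) -> float:
    """Ideal transfer time in hours: payload size divided by line rate."""
    bits = terabytes * 8 * 10**12          # decimal terabytes to bits
    seconds = bits / (megabits_per_second * 10**6)
    return seconds / 3600

for size_tb in (1, 10, 100):
    for mbps in (100, 1000, 10000):
        print(f"{size_tb:>4} TB over {mbps:>5} Mbit/s = {transfer_hours(size_tb, mbps):7.1f} hours")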

{CLICK}
If the data set is too large to copy over the network, then drive seeding may be
required. Drive seeding is copying the initial data set to external physical media and
then shipping it directly to the external cloud provider for local data ingestion. It is
worth mentioning that most Cloud providers require that any seeded data be
shipped in an encrypted format.

27
CONSUMPTION / COST

▪ Public cloud providers charge for service consumption
▪ Is the customer aware what processes will trigger additional consumption?

Cost drivers: Network Egress, Storage I/O, GET/PUT transactions, Data Recall

{UPDATE CONTENT AND NARRATIVE FOR CVSA18}

Narrative:

An important consideration, one that you may not be directly concerned about but
the customer definitely should be, is how much the consumption of certain features
or processes will cost them with their public cloud provider. Although you are not
responsible for the consumption and associated costs a customer will incur, you
should ensure that they have considered them; it can only help enhance your
credibility in the account.

{CLICK}
Moving data into a cloud provider typically incurs no cost. However, moving data
outside the cloud provider, instance or region usually does. Restoring data from a
provider to an external site, or replicating data between provider regions, are
examples of activities that would be classed as Network Egress and usually incur
additional charges.

{CLICK}
Cloud storage is usually metered with a fixed allowance included per month.

Frequent restores, active data, and active databases may go beyond a cloud
provider’s Storage I/O monthly allowance, which would result in additional
“overage” charges.

{CLICK}
Amazon, Azure and other Cloud providers usually charge for GET/PUT
transactions to cloud object storage. These charges exist primarily to enforce good
practices for applications when retrieving and placing data in the cloud. As such,
the cost when using the Commvault® solution is minimal. The Cloud architecture
guide available on the Commvault documentation website has some useful
examples of costs for such transactions.

{CLICK}
Low-cost cloud storage solutions may have a cost associated with accessing data
or deleting data before an agreed-upon time period. Storing infrequently accessed
data on a low-cost cloud storage solution may be attractive upfront; however,
Commvault® would recommend modelling realistic data recall scenarios. In some
cases, the data recall charges may be more than the potential cost savings vs. an
active cloud storage offering.
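A very rough model of that trade-off can be sketched in a few lines; the per-GB prices below are placeholders rather than quotes from any provider, and real tariffs also include minimum storage durations and request charges.

# Hypothetical monthly prices per GB; substitute real provider tariffs when modelling.
STANDARD_STORAGE = 0.023      # "hot" storage, recalls free
ARCHIVE_STORAGE  = 0.004      # "cold" storage
ARCHIVE_RECALL   = 0.03       # per GB recalled from the cold tier

def monthly_cost(stored_gb: float, recalled_gb: float, use_archive: bool) -> float:
    if use_archive:
        return stored_gb * ARCHIVE_STORAGE + recalled_gb * ARCHIVE_RECALL
    return stored_gb * STANDARD_STORAGE

stored, recalled = 50_000, 10_000          # 50 TB stored, 10 TB recalled in a month
print("standard:", monthly_cost(stored, recalled, use_archive=False))   # 1150.0
print("archive :", monthly_cost(stored, recalled, use_archive=True))    # 500.0
# With heavier recall volumes the archive tier can end up costing more than the active tier.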

28
PERFORMANCE / STORAGE

Compression (2:1) vs. Deduplication (10:1)
Block vs. Object storage
Partitioned Deduplication: a 4-node grid (MediaAgent-1 to MediaAgent-4, DDB-G1a to DDB-G1d) presenting a single logical dedupe store, with DASH Copy
Storage Class
Fine Tuning (deduplication block size, e.g. 256 / 512)
Multiple Mount Paths (C:\, D:\, E:\, G:\, H:\)
{NEW CONTENT FOR CVSA18}

Narrative:

When designing for cloud storage scenarios it is recommended to utilise


Commvault® software de-duplication where possible. Exceptions to this
recommendation are for environments where there are significant bandwidth
concerns for re-baselining operations or for Archive only use cases.

{CLICK}
While Public IaaS environments allow block-based storage to be provisioned and
leveraged as Disk Libraries, the overall cost of those volumes can quickly exceed
that of Object Storage. It is highly recommended that Object Storage be the
primary choice for writing data to the Cloud, with other forms of storage used by
exception.

{CLICK}
When possible, make use of partitioned dedupe to increase scale, load balancing,
and failover. Version 11 allows adding up to 4 nodes to an existing deduplication
store dynamically, allowing for rapid scale-out configurations.

{CLICK}
Depending on the provider, there may be different tiers of Object Storage available
that offer different levels of cost, performance, and access/SLAs. Some industry
names for these classes are standard or hot storage, infrequent access or cool
storage, and deep archive or cold storage. This can have a significant impact on
both the cost and the user experience for the datasets within. It is highly
recommended that you review the cost options and considerations of each of
these storage classes against the use case for your architecture in order to gain the
best value for your cost model.

{CLICK}
Follow documented best practices for Object storage from the Commvault
documentation, {CLICK} such as Micro pruning support for Object Storage, {CLICK}
Increasing the deduplication block size for maximum performance, {CLICK} and
Multi-Streaming. Object Storage performs best with concurrency, and as such with
any Cloud Libraries configured within Commvault, best performance is achieved
when configured for multiple readers and streams.
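The concurrency point applies to any client of object storage. As an illustration outside of Commvault, boto3 exposes the same idea through its transfer configuration: a large file is split into parts that are uploaded over several parallel streams. The file name, bucket and tuning values are illustrative assumptions.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Multipart upload with several concurrent streams; values are illustrative only.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MB
    multipart_chunksize=32 * 1024 * 1024,   # 32 MB parts
    max_concurrency=8,                      # eight parallel streams
)

# Hypothetical local file and bucket; object storage generally ingests faster
# when the payload arrives as multiple concurrent streams.
s3.upload_file("large_backup_chunk.bin", "example-backup-target", "chunks/0002.bin", Config=config)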

{CLICK}
Just like regular disk libraries, Cloud libraries have the option to leverage multiple
mount paths. The benefit of using multiple mount paths depends on the Cloud
storage vendor.

29
CLOUD DESIGN SUMMARY

1. Follow the Cloud design principles: Scalability, Design for Recovery and Automation

2. Design for the appropriate Cloud use-case(s) and use the right tool for the job

3. Design Commvault® components using the latest architecture sizing guidelines

4. Consider other key components that influence the design (e.g. networking, security and storage)

{UPDATED CONTENT AND NARRATIVE FOR CVSA18}

Narrative:

Let’s summarise the steps we have discussed when designing Commvault®


solutions for the Cloud.

{CLICK}
One: Remember to follow the cloud design principles; Scalability, Design for
Recovery and Automation

{CLICK}
Two: Ensure you are designing a solution that fits the appropriate cloud use-case
and use the right tool for the job, e.g. instance-level VSA protection, agent in
guest, cloud snapshot management, and DBaaS or RDS support.

{CLICK}
Three: Remember to always follow the latest architecture and components sizing
guidelines in the cloud architecture guide available on the Commvault
documentation website.

{CLICK}
Four: Consider other key factors or components that may influence the design and
discuss them with your customer. Remember the customer will appreciate your
honesty as well as your expertise.

30
WRAP-UP

▪ Public cloud adoption and usage


▪ Commvault® cloud design principles

▪ Commvault solutions for cloud use-cases


▪ Cloud architecture components

▪ Key components that influence the design

Narrative:

Thank you for watching.

In this module you learned about the adoption and usage of public cloud
technologies and the cloud design principles for Commvault® software.

We then discussed how to design Commvault solutions for specific public cloud
use-cases and how to size the required architecture components utilizing the
cloud architecture guide.

Finally, we discussed other key technical components that should be considered
and that may influence the design.

31
Questions?
Suggestions?

32
