& DESIGN
1
LEARNING GOALS
Understand
▪ Public cloud adoption and usage
▪ Commvault® cloud design principles
Design
▪ Commvault solutions for cloud use-cases
▪ Cloud architecture components
Consider
▪ Key components that influence the design
We will begin this module by providing an introduction to the adoption and usage
of public cloud technologies and the cloud design principles for Commvault®
software.
You will then learn how to design Commvault solutions for specific public cloud
use-cases and how to size the required architecture components.
Finally we will discuss other key technical components both you and your
customer should consider that may influence the design.
2
Cloud Adoption and Usage
Narrative:
To provide a brief introduction to this module, we will discuss how the Public
Cloud is being adopted by customers and some of the significant usage statistics
we have seen in the market over recent months that signal our opportunity.
3
Infrastructure as Programmable, Addressable Resources
Global, Flexible and Unlimited Resources
Transforming The Disaster Recovery Model
Narrative:
The Cloud megatrend is one of the most disruptive and challenging forces
impacting customers’ applications and infrastructure, requiring new business
models and new architecture decisions, which impact how Commvault® solutions
protect and manage their data.
{CLICK}
The cloud has reset customer buying trends of traditional infrastructure assets that
require manual configuration, tracking, capacity predictions and lengthy deployment
cycles. The public cloud provides infrastructure on-demand that is programmed and
addressed by code, resulting in a more flexible model for production, development, test
and disaster recovery solutions. These temporary, disposable units are measured by
their actual and not potential consumption, offering a truly pay-as-you-go utility model.
{CLICK}
Cloud infrastructure has become global, flexible and unlimited, giving customers the
ability to localize their corporate assets. Commvault's modularised, software-driven
approach is hardware and cloud agnostic, providing the flexibility to meet the most
diverse data protection and disaster recovery requirements while conforming with
these new architecture realities.
{CLICK}
Today, many physical DR environments have less capacity than their Production or
Dev/Test counterparts, resulting in degraded service in the event of a failover.
Worse, hardware is often re-purposed to fulfil the DR environment's
requirements, resulting in higher than expected maintenance costs. The Public
Cloud model disrupts this hardware availability and refresh cycle by
removing the need to maintain a hardware fleet that can both meet your DR
requirements and sustain your service level agreements.
4
EVOLUTION OF CLOUD
So, why should you care? Well, your customers certainly do, and you only need to
follow the numbers.
Look at the growth of the cloud infrastructure services market: Amazon saw over $5
billion in revenue for Q4 2017, up 45%. Microsoft stated its Azure revenue
increased a whopping 98%, and its commercial cloud business now has an
annualized run-rate of $20 billion.
{click}
{click}
• 25% of enterprises now have more than 1,000 VMs in either AWS or Azure
public clouds.
• Enterprises now run 77 percent of their workloads in the cloud, with more in
private cloud (45%) than in public cloud (32%).
Perhaps less surprising is that Cloud Security and Spend are the top
two challenges facing organizations in 2018.
{click}
5
Cloud Design Principles
Narrative:
In this first section, we will discuss the design principles and architecture options
for organizations planning to leverage the Cloud as part of their Data
Management strategy.
6
NATIVE CLOUD CONNECTIVITY
Cloud Storage
Restore
MediaAgent
(Deduplicated data)
Backup data
Backup Stream
Restore Stream
Narrative:
The Cloud Storage Connector is the native integration within the MediaAgent
module that directly communicates with Object Storage providers such as AWS’s
S3 and Azure, without requiring translation devices, gateways, hardware
appliances or VTLs. This Connector works by communicating directly with Object
Storage’s REST API interface over HTTPS, allowing for Media Agent deployments
on both Virtual and Physical compute layers to perform read/write operations
against Cloud Storage targets, reducing the Data Management solution’s TCO.
/Notes
Cloud Storage Support:
http://documentation.commvault.com/commvault/v11/article?p=features/cloud_
storage/cloud_storage_support.htm
7
GENERAL CLOUD DESIGN PRINCIPLES
1 SCALABILITY
2 RECOVERY
3 AUTOMATION
Narrative:
The following are the three design principles you should be aware of and we will
discuss these in more depth momentarily.
One. {CLICK} Scalability… Applications grow over time, and a Data Management
solution needs to adapt with the change rate to protect the dataset quickly and
efficiently, while maintaining an economy of scale that continues to generate
business value out of that system.
Two. {CLICK} Recovery… The solution must be designed for recovery, choosing
between crash-consistent and application-consistent recovery points and ensuring
copies of the data remain valid and recoverable.
Three. {CLICK} Automation… The cloud encourages automation, not just because
the infrastructure is programmable, but because having repeatable actions
reduces operational overheads, bolsters resilience through known good
configurations and allows for greater levels of scale.
8
1. SCALABILITY
Narrative:
{CLICK}
Commvault maintains a Building Block approach for protecting datasets,
regardless of the origin or type of data. These blocks are sized based on the Front-
End TB (FET), or the size of data they will ingest, pre-compression/de-duplication.
This provides clear scale out and up guidelines for the capabilities and
requirements for each Media Agent. De-duplication Building Blocks can also be
grouped together in a grid, providing further deduplication scale, load balancing
and redundancy across all nodes within the grid.
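The building-block idea can be sketched as a small sizing helper. The 60 FET-TB per block and the four-node grid limit below are placeholder assumptions for illustration, not Commvault's published figures; real values come from the current sizing guidelines.

```python
import math

# Hypothetical capacity one de-duplication building block can ingest,
# in Front-End TB (FET). Real values come from the architecture guide.
BLOCK_CAPACITY_FET_TB = 60
MAX_NODES_PER_GRID = 4  # assumed grid limit, for illustration only

def size_building_blocks(front_end_tb: float) -> dict:
    """Return the number of MediaAgent building blocks and grids needed."""
    blocks = max(1, math.ceil(front_end_tb / BLOCK_CAPACITY_FET_TB))
    grids = math.ceil(blocks / MAX_NODES_PER_GRID)
    return {"blocks": blocks, "grids": grids}

print(size_building_blocks(150))  # 150 FET -> {'blocks': 3, 'grids': 1}
```

The same shape of calculation applies whatever the real per-block figure is: size on pre-deduplication front-end data, then group blocks into grids for load balancing and redundancy.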
{CLICK}
While Client-side Deduplication may be seen as a way to improve the ingest
performance of the data mover (MediaAgent), it has the secondary effect of
reducing the network traffic stemming from each Client communicating through
to the data mover. In public cloud environments where network performance can
vary, the use of Client-side Deduplication can reduce backup windows and drive
higher scale, freeing up bandwidth for both Production and Backup network
traffic.
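To see why client-side deduplication matters in the cloud, the sketch below compares bytes on the wire with and without it. The 10:1 deduplication ratio and the 1 Gbit/s link are assumed example figures, not measured values.

```python
def wire_transfer_gb(dataset_gb, dedup_ratio, client_side=True):
    """Data actually sent over the network for one backup.
    With client-side deduplication only unique blocks leave the client;
    without it, the full dataset crosses the network and the MediaAgent
    deduplicates on ingest. dedup_ratio is an assumed figure (e.g. 10:1)."""
    return dataset_gb / dedup_ratio if client_side else dataset_gb

def backup_window_hours(transfer_gb, link_mbps):
    """Rough time to push the data over a link of the given throughput."""
    seconds = (transfer_gb * 8 * 1024) / link_mbps
    return seconds / 3600

# 1 TB dataset, assumed 10:1 dedup, 1 Gbit/s cloud network:
on_wire = wire_transfer_gb(1024, 10)   # 102.4 GB instead of 1024 GB
print(round(backup_window_hours(on_wire, 1000), 2))  # about 0.23 hours
```

The tenfold reduction in bytes on the wire translates directly into a shorter backup window and bandwidth left over for production traffic.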
9
2. DESIGN FOR RECOVERY
Crash Consistency vs. Application Consistency | Storage-level Replication vs. Discrete Copies
[Diagram: an Active Data Center (Production) hosting Crash Consistent and
Application Consistent workloads (no application awareness of the snap process
vs. the application quiescing data), replicating to a Passive Data Center (DR) with
a secondary CommServe® and secondary MediaAgent]
Narrative:
{CLICK}
While Crash-consistency within a recovery point may be sufficient for a file-based
dataset or EC2 instance, it may not be appropriate for an Application such as
Microsoft SQL, where the database instance needs to be quiesced to ensure the
database is valid at time of backup. Commvault® supports both Crash and
Application consistent backups, providing flexibility in your design.
{CLICK}
Many cloud providers support replication at the Object Storage layer from one
region to another; however, if bad or corrupted blocks are replicated to the
secondary region, your recovery points are invalid. While Commvault can support
a Replicated Cloud Library model, we recommend you configure Commvault to
create an independent copy of your data, whether to another region or cloud
provider, to address that risk. De-duplication is also vital as part of this process,
as it means Commvault can minimize the cross-region/cross-provider copy by
ensuring only the unique changed blocks are transferred over the wire.
{CLICK}
Not all workloads within the Cloud need protection. For example, micro-services
architectures, or any architecture involving worker nodes that write their valued
data to an alternate source, mean there is no value in protecting the worker
nodes themselves. Instead, protecting the gold images and the output of those
nodes provides the best value for the business.
10
3. AUTOMATION
Programmatic Data Management | Workload Auto-Detection and Auto-Protection
[Diagram: workflow builder. A Form Input (usable throughout the workflow) feeds
a Check Condition; on TRUE, Notify and Query actions run. Multiple variables can
be passed through Loop actions to subsequent, but not preceding, actions.]
Narrative:
Automation also extends to end users, who can recover their own protected
datasets through a Web-based interface, allowing security-mapped access to
individual files & folders within the protected dataset, freeing up administrators to
work on critical tasks.
11
Cloud Solutions with Commvault®
Narrative:
We will now discuss the three primary use cases when leveraging Commvault®
with the cloud.
12
BACKUP/ARCHIVE TO THE CLOUD
Scenario / Suitability
▪ Offsite storage / tape replacement
▪ Native, Direct connectivity to supported object
storage endpoints
Requirements
▪ Minimum 1 x MediaAgent on premise
– No VM in cloud required for backup & recovery to the cloud
[Diagram: Backup Store replicated to a Cloud Copy]
Narrative:
The first use case is protecting data at the primary on-premise location by writing
directly to an external cloud provider’s storage solution, or retaining a local copy
and replicating the backup/archive data (either in full, or only selective portions of
that data) into an external cloud provider’s storage service.
13
DISASTER RECOVERY IN A PUBLIC CLOUD
[Diagram: 1. premise-based virtual machines are backed up to a local cache/backup
copy (Local Copy); 2. the backup copy is replicated to the cloud (DR Copy); 3. DR
business processes (workflow) run; 4. applications online, DR complete!]
Narrative:
The second use case is Disaster Recovery in a Public Cloud. In this scenario we are
providing operational recovery of primary site applications to a secondary site
from an external cloud provider.
Another benefit of this design is that databases and/or files can be restored out-
of-place, whether on-demand or scheduled, to refresh the DR targets. In addition,
you can turn your DR run book into a Workflow for easy, simplified DR
automation, be it for test or real DR scenarios.
Please click next when you have reviewed the solution diagram and are ready to
continue…
14
DISASTER RECOVERY IN A PUBLIC CLOUD
Automated Provisioning & Incremental Recovery (Low RTO)
[Diagram: Hypervisor replicating to the cloud via Dash Copy]
Scenario / Suitability
▪ Virtual Machine Conversion
▪ Live Sync Replication
– Flexible, superior RPO/RTO
– Failover / Failback
▪ CDR Replication
– Lowest RPO/RTO
– Requires destination VM to be running all the time
Requirements
▪ Minimum 1 x MediaAgent/VSA on premise
– Various configurations supported (refer to documentation)
▪ Minimum 1 x MediaAgent/VSA in cloud
– Powered-on for recovery operations only
▪ Dedicated network to cloud provider highly recommended
{CLICK} Replicating VM Workloads with Live Sync allows you to replicate VMs to
public cloud infrastructure. Live Sync combines the VM conversion feature with
incremental replication to provide a DR solution utilizing on-demand cloud
infrastructure. As Live Sync to cloud integrates with Dash Copy, highly efficient
WAN replication is possible.
{CLICK} Finally, Commvault® Continuous Data Replicator (CDR) allows near-
continuous data replication for critical workloads that must be recovered in
adherence to Service Level Agreements that exceed the capabilities of Live Sync
operations. CDR requires a similarly sized VM that must be running at all times to
receive application changes.
15
A good design strategy is to identify multiple approaches depending on the
business RTO/RPO requirements and implement them accordingly, while also
considering the limitations and requirements specific to the cloud vendor. For
example, Tier 1 applications may be a better fit for near-continuous replication
using Commvault's CDR technology, while Tier 2 applications could make use of
Live Sync (VMs, Files, DB), and Tier 3 apps could use on-demand VM conversion
from cloud storage when needed.
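The tiering guidance above can be captured as a simple decision rule. The RPO thresholds here are illustrative assumptions only; real cut-offs depend on the workloads and the cloud vendor's limits.

```python
def dr_approach(rpo_minutes: int) -> str:
    """Pick a replication approach from a business RPO requirement.
    Thresholds are illustrative: near-zero RPO -> CDR, minutes-to-hours ->
    Live Sync, relaxed RPO -> on-demand VM conversion from cloud storage."""
    if rpo_minutes <= 15:        # Tier 1: near-continuous replication
        return "CDR"
    if rpo_minutes <= 24 * 60:   # Tier 2: incremental Live Sync replication
        return "Live Sync"
    return "On-demand VM conversion"  # Tier 3: recover from the cloud copy

for app, rpo in [("billing-db", 5), ("file-server", 240), ("archive", 2880)]:
    print(app, "->", dr_approach(rpo))
```

In practice the rule would also weigh RTO and the destination VM cost (CDR keeps a warm VM running at all times), but the same tier-to-technology mapping applies.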
15
MIGRATION TO THE PUBLIC CLOUD
Narrative:
The next use case we will cover here is migration to the public cloud.
Commvault® can assist in the migration of application workloads into the public
cloud, either at the VM container level or at the application level, all while
providing protection during the migration lifecycle as workloads transition
between on premise and public cloud.
For supported public cloud providers such as AWS and Azure, workloads running
on VMware or Hyper-V can be lifted and shifted directly to the cloud by using a
restore and convert process as part of a phased migration cut-over strategy.
The Commvault application migration feature is now supported with both SQL
and Oracle. This feature can be used to synchronize a remote copy of the
database to the cloud for test or demonstration purposes; additionally, it can
provide the baseline plus any incremental changes as part of an automated
migration lifecycle process.
Finally, an application can be restored by leveraging the Commvault Data
Protection Agent for supported workloads to restore the target application out-of-
place to a warm instance residing in the cloud.
Please take a moment to review the solution diagram and requirements on this
slide before moving on to the next section.
16
PROTECTION IN A PUBLIC CLOUD
Scenario / Suitability
▪ Data Protection for cloud-based IaaS workloads
▪ Agent-less instance protection (supported providers only)
▪ DASH Copy to another region, cloud or back to on premise
[Diagram: cloud workloads backed up to Object Storage, with secondary copies to
On-premises DC and On-premises DC 2]
Requirements
▪ VSA + MA deployed on a proxy within cloud provider for agentless backup (supported providers only)
▪ Agents deployed in each VM for non-supported providers and apps requiring application consistency
▪ Minimum 1 x MediaAgent in cloud and (optional) 1 x MediaAgent for secondary site (whether cloud or on premise)
▪ 1 x DDB hosted on MediaAgent
▪ Dedicated network from cloud provider to on premise highly recommended when replicating back to on premise
Narrative:
The final use case is a design providing operational recovery for active workloads
and data within an external provider’s cloud.
Finally, DASH Copy provides complete data mobility, allowing replication to
another geographical region, be it with the same infrastructure as a service
provider, a different provider or back to on premise sites.
Please take a moment to review the solution diagram and requirements on this
slide before moving on to the next section.
17
Amazon
In this next section we will discuss how you can protect virtual machines and their
data running inside AWS.
18
AWS PROTECTION
▪ Agent-less EC2 Instance Protection (VSA)
– Minimum 1 x Data Agent per instance for intended dataset (i.e. File, SQL)
[Diagram: EC2-based MediaAgent (VSA) mounting a Cloned VM's Volumes; Database; S3 Bucket]
REFER TO DOCUMENTATION.COMMVAULT.COM
{CLICK} The Commvault® VSA can deliver agent-less protection for EC2 instances
and file-level data. This process utilizes a block-level capture of the EC2 instances
and their attached EBS volumes. The agentless backups are crash-consistent; use
app-aware backups or an agent in guest to gain application consistency. You
should also be aware of the performance around the hydration of blocks from the
clone that is presented to the VSA proxy. This is the same regardless of whether a
full or incremental backup is being performed, and with no CBT support,
performance can be an issue. It is therefore recommended that a VSA is limited to
protecting VMs under 200 GB in size, with agents used for anything larger.
{CLICK} IntelliSnap® based snapshots through the Commvault VSA can mitigate
some of the performance overheads with the instance level protection. This will
enable you to create point-in-time snapshots of instance data. You can back up
larger numbers of instances more quickly, and perform multiple backups each day.
Keep in mind that snapshots will be kept available beyond the backup window and
while backup copies are processed. To reduce costs, only retain snapshots for as
long as they are required. Alternatively, it may be desirable for customers to store
only the last snapshot and create backup copies to S3 storage for longer-term
retention.
{CLICK} Finally, You can use the Virtual Server Agent for Amazon to protect Amazon
RDS instances. The Amazon Relational Database Service (RDS) provides
configuration and scaling resources to host relational databases, for example
Oracle, in the Amazon Web Services (AWS) cloud. The EC2 proxy coordinates with
the RDS API to create application-consistent snapshots and control snap revert or
replication across regions. Please note that it is currently not possible to extract
and de-duplicate RDS data into an S3 bucket or alternative target.
19
Microsoft Azure
20
MICROSOFT AZURE PROTECTION
▪ Agent-In-Guest (Streaming)
– Protection & recovery for Azure instances & file-level data
– Leverages Client-side Deduplication
– Minimum 1 x Data Agent per instance for intended dataset (i.e. File, SQL)
▪ Agent-less Azure Instance Protection (VSA)
– Agent-less protection and recovery for Azure instances & file-level data
– Block-level, Crash Consistent backup
– Snapshots are mounted to the VSA/MA; good read performance
– CBT support provides good incremental performance
▪ IntelliSnap® with Azure BLOB snapshots
– Utilizes BLOB snapshots; reduces protection and recovery window dramatically
– CBT support for backup copies to increase performance
▪ Azure DBaaS support
– Protection & recovery for SQL Azure Databases (Instance or DBaaS)
– Requires SQL Agent for application-consistent backups
– Full Backup Only
– Recover on-premises to Azure, Azure to on-premises, and SQL Azure to SQL Azure
– Lower cost option for longer term retention using existing infrastructure
– Secondary MediaAgent can provide DR copy in a different geographic region
[Diagram: cloud-based MediaAgent (VSA) with attached volumes]
{CLICK}
Firstly, an Agent-in-guest approach is used on the production workload, with data
protected to the MediaAgent residing in Azure, leveraging Client-side
Deduplication to reduce network consumption within the Cloud. It is
recommended to use this approach when you require application-consistent
backups and granular-level restoration features for applications.
{CLICK}
The Commvault® VSA for Azure delivers an agent-less, block-level capture of Azure
VM instances and their attached block volumes. Unlike AWS, Azure provides
Changed Block Tracking (CBT), an optional mechanism that helps accelerate
incremental backup performance. Once Azure snapshots are created, they are
mounted onto the Commvault VSA proxy and the data is read. In Azure, unlike
AWS, snapshots can be mounted directly on an Azure instance. And since the data
blocks are shared between a volume and its snapshot, read performance is good,
as the process is similar to reading data off a volume.
{CLICK}
IntelliSnap® based snapshots using Azure BLOB storage through the Commvault
VSA help reduce backup windows considerably, providing fast, snapshot-based
restoration capability. IntelliSnap with Azure also allows the use of CBT (Changed
Block Tracking) when making a backup copy, which can accelerate incremental
backup performance considerably. This enables you to create point-in-time
snapshots of instance data, back up larger numbers of instances more quickly,
and perform multiple backups each day.
{CLICK} Finally, Commvault® can help customers protect and stage databases
stored in the Azure cloud, whether living in a VM instance or, in the case of SQL,
within Azure Database as a Service. Customers needing longer-term retention on
existing infrastructure with smaller up-front charges are a good use case for this
agent. The SQL agent also allows restoring databases from on premises to SQL
Azure, from SQL Azure to an on-premises SQL instance, and from one SQL Azure
server to another. With these capabilities in mind, it can also be used as a
migration tool.
21
Architecture Sizing & Considerations
22
ARCHITECTURE SIZING
▪ Best practices
▪ Specifications for:
▪ CommServe®
▪ MediaAgents
▪ Storage
Narrative:
/Notes
Link to PCAG
http://documentation.commvault.com/commvault/v11/article?p=products/vsa/r
_vsa_white_papers.htm
23
NETWORKING
Narrative:
AWS and Azure both have the capability to establish an isolated logical network,
referred to as a Virtual Private Cloud (VPC) within AWS and an Azure Virtual
Network (VNet) within Azure.
24
NETWORKING
BRIDGING ON PREMISE INFRASTRUCTURE – VPN OR DIRECTCONNECT/EXPRESSROUTE
VPN – encrypted
over Public Internet
Backup Store
Cloud Copy
Narrative:
Customers may find a need to bridge their existing On-Premise infrastructure to
their Public Cloud provider, or bridge systems and workloads running between
different Cloud providers to ensure a common network layer between compute
nodes and storage endpoints. This is particularly relevant to solutions where you
wish to Backup/Archive directly to the Cloud, or DASH Copy existing
backup/archive data to Object Storage within a Cloud provider. To provide this,
there are two primary choices available:
{CLICK} A VPN Connection – where network traffic is routed between network
segments over the public Internet, encapsulated in a secure, encrypted tunnel
over the Customer's existing Internet connection. As the connection is shared,
bandwidth will be limited, and regular data transfer fees apply as per the
Customer's current contract with their ISP.
25
{CLICK} A Dedicated Connection (DirectConnect/ExpressRoute) – where a
dedicated network link is provisioned separately from the Customer's regular
internet connection. Pricing is charged on a monthly dual-port fee, with all
inbound and outbound data transfers included free of charge, and bandwidth
options from 10Mbit/s to 10Gbit/s.
25
DATA SECURITY
[Diagram: MediaAgents writing Cloud Copies, one path via an HTTPS Proxy]
✓ Encrypt between nodes
✓ Encrypt all data at rest
✓ HTTP(S) proxy may impact network performance
Narrative:
{CLICK}
By default, all communication with Cloud Libraries utilizes HTTPS, which ensures
that all traffic is encrypted while in-flight between the MediaAgent and the Cloud
Library end-point; traffic between Commvault® nodes, however, is not encrypted
by default. It is recommended that any network communications between
Commvault modules routed over public Internet space be encrypted to ensure
data security. This can be employed by using standard Commvault firewall
configurations (Two-Way & One-Way).
{CLICK}
Data stored in a public Cloud is usually on shared infrastructure logically
segmented to ensure security. Commvault recommends adding an extra layer of
protection by encrypting all data at rest.
{CLICK}
Please take note of any HTTP(S) proxies between MediaAgents and endpoints,
whether via public Internet or private space, as these may have a performance
impact upon any backup/restore operations to/from an Object Storage endpoint.
Where possible, Commvault should be configured to have direct access to an
Object Storage endpoint.
26
DATA SEEDING
▪ Process of moving an initial set of data from its origin to a public cloud provider
Narrative:
If your customer is planning to store data in a public cloud then you should always
discuss how they will get that data to the destination initially and how long it will
take. This process is known as data seeding and involves moving the initial set of
data from its original location to the target storage destination in the cloud.
27
{CLICK}
If the data set is too large to copy over the network, then drive seeding may be
required. Drive seeding is copying the initial data set to external physical media
and then shipping it directly to the external cloud provider for local data ingestion.
It is worth mentioning that most Cloud providers require that any seeded data be
shipped in an encrypted format.
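A quick calculation makes the seeding conversation concrete. The sketch below estimates how long a network copy would take and suggests drive seeding past a cut-off; the 70% link utilization and the 14-day threshold are example assumptions, not provider or Commvault guidance.

```python
def transfer_days(dataset_tb: float, link_mbps: float,
                  utilization: float = 0.7) -> float:
    """Days to seed a dataset over the network at the given link speed.
    `utilization` is an assumed share of the link available for seeding."""
    bits = dataset_tb * 8 * 1000**4                   # decimal TB -> bits
    seconds = bits / (link_mbps * 1_000_000 * utilization)
    return seconds / 86400

def seeding_method(dataset_tb, link_mbps, max_days=14):
    """Suggest drive seeding when the network copy would take too long.
    The 14-day cut-off is an arbitrary example threshold."""
    days = transfer_days(dataset_tb, link_mbps)
    return "drive seeding" if days > max_days else "network seeding"

# 50 TB over a 100 Mbit/s link is roughly a two-month copy:
print(round(transfer_days(50, 100), 1), "days")
```

Running the numbers with the customer's actual dataset size and link speed usually settles the network-vs-drive-seeding question quickly.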
27
CONSUMPTION / COST
[Diagram: cost ($) indicators on the data paths to the Cloud Copy]
Narrative:
An important consideration, one that you may not be directly concerned with but
the customer definitely should be, is how much the consumption of certain
features or processes will cost them with their public cloud provider. Although you
are not responsible for the consumption and associated costs a customer will
incur, you should definitely ensure that they have considered them; it can only
help enhance your credibility in the account.
{CLICK}
Moving data into a cloud provider is typically free. However, moving data out of
the cloud provider, instance or region usually incurs a charge. Restoring data from
a provider to an external site, or replicating data between provider regions, are
examples of activities that would be classed as Network Egress and usually incur
additional charges.
{CLICK}
Cloud storage is usually metered with a fixed allowance included per month.
Frequent restores, active data, and active databases may go beyond a cloud
provider's Storage I/O monthly allowance, which would result in additional
"overage" charges.
{CLICK}
Amazon, Azure and other Cloud providers usually charge for GET/PUT
transactions to cloud object storage. These costs exist primarily to enforce good
practices for applications when retrieving and placing data in the cloud. As such,
the cost when using the Commvault® solution is minimal. The Cloud Architecture
Guide available on the Commvault documentation website has some useful
examples of costs for such transactions.
{CLICK}
Low-cost cloud storage solutions may have a cost associated with accessing data
or deleting data before an agreed-upon time period. Storing infrequently accessed
data on a low-cost cloud storage solution may be attractive upfront; however,
Commvault® recommends modelling realistic data recall scenarios. In some
cases, the data recall charges may be more than the potential cost savings vs. an
active cloud storage offering.
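The cost factors above (storage, egress, GET/PUT requests) can be combined into a rough monthly model. All unit prices in this sketch are placeholder assumptions resembling typical published list prices; always model with the provider's current pricing and the Cloud Architecture Guide's examples.

```python
def monthly_cloud_cost(stored_gb, egress_gb, put_requests, get_requests,
                       price_per_gb=0.023, egress_per_gb=0.09,
                       put_per_1k=0.005, get_per_10k=0.004):
    """Illustrative monthly cost model for a cloud copy.
    All unit prices are placeholder assumptions, not real quotes."""
    storage = stored_gb * price_per_gb
    egress = egress_gb * egress_per_gb  # restores / cross-region copies
    requests = (put_requests / 1000) * put_per_1k \
        + (get_requests / 10000) * get_per_10k
    return round(storage + egress + requests, 2)

# 10 TB stored, one 500 GB restore, modest request counts:
print(monthly_cloud_cost(10240, 500, 200000, 50000))
```

Note how, under these assumed prices, storage dominates and request charges are almost negligible, which matches the point above that transaction costs with Commvault are minimal.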
28
PERFORMANCE / STORAGE
[Diagram: Disk Library with multiple mount paths D:\ E:\ G:\ H:\]
Narrative:
{CLICK}
While Public IaaS environments allow block-based storage to be provisioned and
leveraged as Disk Libraries, the overall cost of those volumes can quickly exceed
that of Object Storage. It is highly recommended that Object Storage be the
primary choice for writing data to the Cloud, with other forms of storage used by
exception.
{CLICK}
When possible, make use of partitioned dedupe to increase scale, load balancing,
and failover. Version 11 allows adding up to 4 nodes to an existing deduplication
store dynamically, allowing for rapid scale-out configurations.
29
{CLICK}
Depending on the provider, there may be different tiers of Object Storage
available that offer different levels of cost, performance, and access SLAs. Some
industry names for these classes are standard or hot storage, infrequent access or
cool storage, and deep archive or cold storage. The choice can have a significant
impact on both the cost and the user experience for the datasets within. It is
highly recommended that you review the cost options and considerations of each
of these storage classes against the use case for your architecture in order to gain
the best value for your cost model.
{CLICK}
Follow documented best practices for Object storage from the Commvault
documentation, {CLICK} such as Micro pruning support for Object Storage, {CLICK}
Increasing the deduplication block size for maximum performance, {CLICK} and
Multi-Streaming. Object Storage performs best with concurrency, and as such with
any Cloud Libraries configured within Commvault, best performance is achieved
when configured for multiple readers and streams.
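The multiple-readers/streams recommendation can be illustrated with a small concurrency sketch. The `upload_chunk` stub is hypothetical: Commvault manages streaming internally via the Cloud Library's readers/streams settings, and a real uploader would issue HTTPS PUTs to the object-storage endpoint.

```python
from concurrent.futures import ThreadPoolExecutor

def upload_chunk(chunk_id: int, data: bytes) -> int:
    """Stand-in for a single PUT to an object-storage endpoint."""
    # A real implementation would issue an HTTPS PUT here.
    return len(data)

def multi_stream_upload(payload: bytes, streams: int = 8,
                        chunk_mb: int = 1) -> int:
    """Split a payload into chunks and upload them concurrently,
    mirroring the multiple-readers/streams setting on a Cloud Library."""
    size = chunk_mb * 1024 * 1024
    chunks = [payload[i:i + size] for i in range(0, len(payload), size)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        sent = pool.map(upload_chunk, range(len(chunks)), chunks)
        return sum(sent)

print(multi_stream_upload(b"x" * (3 * 1024 * 1024 + 5)))
```

Because each PUT carries per-request latency, issuing many of them in parallel is what lets object storage approach line-rate throughput; a single serial stream rarely does.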
{CLICK}
Just like regular disk libraries, Cloud libraries have the option to leverage multiple
mount paths. The benefit of using multiple mount paths depends on the Cloud
storage vendor.
29
CLOUD DESIGN SUMMARY
3 Design Commvault® components using the latest architecture sizing guidelines
Narrative:
{CLICK}
One: Remember to follow the cloud design principles: Scalability, Design for
Recovery and Automation.
{CLICK}
Two: Ensure you are designing a solution that fits the appropriate cloud use-case
and use the right tool for the job, e.g. instance-level VSA protection, agent in
guest, cloud snapshot management, and DBaaS or RDS support.
{CLICK}
Three: Remember to always follow the latest architecture and component sizing
guidelines in the Cloud Architecture Guide available on the Commvault
documentation website.
{CLICK}
Four: Consider other key factors or components that may influence the design and
discuss them with your customer. Remember the customer will appreciate your
honesty as well as your expertise.
30
WRAP-UP
Narrative:
In this module you learned about the adoption and usage of public cloud
technologies and the cloud design principles for Commvault® software.
We then discussed how to design Commvault solutions for specific public cloud
use-cases and how to size the required architecture components utilizing the
cloud architecture guide.
31
Questions?
Suggestions?
32