
Reg.No.

____________

End Semester Examination – Nov/Dec – 2018


Code : 17CA3018 Duration : 3hrs
Sub. Name : SECURITY IN THE CLOUD Max. marks : 100

ANSWER ALL QUESTIONS (5 x 20 = 100 Marks)

Q. No. | Sub Div. | Questions | Course Outcome | Marks
1. a. With a neat sketch, elaborate in detail on the NIST cloud computing reference architecture. (CO2, 20 marks)
The NIST reference architecture defines five major actors:
Cloud Provider
Cloud Broker
Cloud Auditor
Cloud Carrier
Cloud Consumer

Cloud Broker:
• Service Intermediation
• Service Aggregation
• Service Arbitrage
Cloud Auditor:

• Security Audit
• Privacy Impact Audit
• Performance Audit
Interaction between the actors: the cloud consumer requests and uses services either directly from the cloud provider or through a cloud broker; the cloud carrier provides the connectivity and transport of services between consumers and providers; and the cloud auditor conducts independent assessments of security, privacy impact and performance.

(OR)
2. a. Give a detailed note on the service models of the cloud. (CO5, 10 marks)
SaaS: Software as a Service – the software is available on the cloud and the responsibility for it lies with the provider, e.g., Gmail.
PaaS: Platform as a Service – the entire platform to build an app is available on the cloud and the responsibility for it lies with the provider, e.g., Google App Engine.
IaaS: Infrastructure as a Service – the entire infrastructure (storage, compute, network) is available on the cloud and the responsibility for it lies with the provider, e.g., AWS EC2.

b. Write a brief note on the characteristics of cloud computing. (CO5, 5 marks)


 On – demand self service
 Location-independent resource pooling
 Broad network access
 Rapid elasticity
 Measured service
c. Explain public and private clouds. (CO6, 5 marks)
Public cloud:
• The instances are hosted and made available publicly.
• Owned by the organization selling cloud services.
• The cloud infrastructure is available to a large group of people.
• The hardware resources are virtualized over the internet (off-premise), e.g., Gmail, OneDrive, etc.
Private Cloud:
• The cloud infrastructure is operated solely for the organization.
• Managed by a third party or by the organization itself, either on or off premise.
• Confined to a particular group of people.
• It is not shared with other organizations.
• Private clouds are more expensive but more secure.

3. a. Discuss the different levels of virtualization and the different types of hypervisors in detail. (CO3, 20 marks)
1. Instruction set architecture (ISA) level
 At the ISA level, virtualization is performed by emulating a given ISA with the ISA of the host machine. Instruction set emulation leads to virtual ISAs created on any hardware machine, e.g., MIPS binary code can run on an x86-based host machine with the help of ISA emulation.
 With this approach, it is possible to run a large amount of legacy binary code written for various processors on any given new hardware host machine.

2. Hardware level
 Hardware-level virtualization is performed right on
top of the bare hardware.
 On the one hand, this approach generates a virtual
hardware environment for a VM.
 On the other hand, the process manages the
underlying hardware through virtualization.
 The idea is to virtualize a computer’s resources, such
as its processors, memory, and I/O devices. The
intention is to upgrade the hardware utilization rate
by multiple users concurrently.

3. Operating system level


 OS-level virtualization creates isolated containers on
a single physical server and the OS instances to
utilize the hardware and software in data centers. The
containers behave like real servers.
 OS-level virtualization is commonly used in creating
virtual hosting environments to allocate hardware
resources among a large number of mutually
distrusting users.

4. Library support level


 Since most systems provide well-documented APIs, such an
interface becomes another candidate for virtualization.
 Virtualization with library interfaces is possible by
controlling the communication link between applications
and the rest of a system through API hooks.
 The software tool WINE has implemented this approach to
support Windows applications on top of UNIX hosts.

5. Application level
 Virtualization at the application level virtualizes an
application as a VM. On a traditional OS, an application
often runs as a process.
 Therefore, application-level virtualization is also known as
process-level virtualization.
 The most popular approach is to deploy high level language
(HLL) VMs. In this scenario, the virtualization layer sits as
an application program on top of the operating system, and
the layer exports an abstraction of a VM that can run
programs written and compiled to a particular abstract
machine definition.
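To make the library-support-level idea above concrete, here is a minimal, hypothetical Python sketch (not part of the original answer) that intercepts an application-facing library call through an API hook, in the same spirit as WINE intercepting Windows API calls and mapping them onto a UNIX host. The `load_file` function and the path-translation rule are assumptions for illustration only.

```python
# Sketch of API-hook based (library-level) virtualization.
# Assumption: the "application" calls a library function load_file(); a hook
# layer rewrites guest-style paths into host paths, the way WINE maps
# Windows calls/paths onto a UNIX host. All names here are hypothetical.

import builtins

def load_file(path: str) -> str:
    """Pretend library call used by the application."""
    with builtins.open(path, "r", encoding="utf-8") as fh:
        return fh.read()

_original_load_file = load_file          # keep a handle to the real call

def _hooked_load_file(path: str) -> str:
    # Translate a "guest" path convention (C:\...) into a host path.
    if path.lower().startswith("c:\\"):
        path = "/tmp/guest_c_drive/" + path[3:].replace("\\", "/")
    return _original_load_file(path)     # forward to the real library

load_file = _hooked_load_file            # install the hook

# The application keeps calling load_file("C:\\config\\app.ini") unchanged,
# but the hook silently redirects it to /tmp/guest_c_drive/config/app.ini.
```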

Type 1 (bare-metal) hypervisor:
• Runs directly on the hardware; the guest VMs (VM1, VM2, ...) run on top of the hypervisor.
• Stack: Guest VMs → Hypervisor → Hardware
• Examples: VMware ESX, Microsoft Hyper-V, Xen

Type 2 (hosted) hypervisor:
• Runs as a process on a host operating system; the guest VMs run on top of the hypervisor.
• Stack: Guest VMs → Hypervisor (process) → Host OS → Hardware
• Examples: VMware Workstation, Microsoft Virtual PC, Sun VirtualBox, QEMU, KVM

(OR)
4. a. Write a note on CPU, memory and I/O virtualization. (CO3, 10 marks)
CPU:
 A VM is a duplicate of an existing computer system in
which a majority of the VM instructions are executed on the
host processor in native mode. Thus, unprivileged
instructions of VMs run directly on the host machine for
higher efficiency. Other critical instructions should be
handled carefully for correctness and stability.
 The critical instructions are divided into three categories: privileged instructions, control-sensitive instructions, and behavior-sensitive instructions.
 Privileged instructions execute in a privileged mode and will
be trapped if executed outside this mode.
Memory:
 Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems. In a traditional execution environment, the operating system maintains mappings of virtual memory to machine memory using page tables, which is a one-stage mapping from virtual memory to machine memory.
 However, in a virtual execution environment, virtual memory virtualization involves sharing the physical system memory in RAM and dynamically allocating it to the physical memory of the VMs.
I/O:
 There are three ways to implement I/O virtualization: full device emulation, para-virtualization, and direct I/O.
 Full device emulation is the first approach for I/O virtualization. Generally, this approach emulates well-known, real-world devices. All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as a virtual device.
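A small, hypothetical Python sketch of the two-stage mapping described above: the guest OS maps guest-virtual pages to guest-"physical" pages, and the VMM maps those guest-physical pages to real machine pages. The page numbers and page size are made-up values for illustration only.

```python
# Two-stage memory mapping sketch (illustrative page numbers only).
# Stage 1: guest page table (maintained by the guest OS).
# Stage 2: VMM mapping of guest-"physical" pages to machine pages.

guest_page_table = {0: 7, 1: 3, 2: 9}     # guest virtual page -> guest physical page
vmm_page_table   = {3: 42, 7: 15, 9: 88}  # guest physical page -> machine page

PAGE_SIZE = 4096

def translate(guest_virtual_addr: int) -> int:
    """Translate a guest virtual address into a machine address."""
    vpage, offset = divmod(guest_virtual_addr, PAGE_SIZE)
    gphys_page = guest_page_table[vpage]       # stage 1: guest OS page table
    machine_page = vmm_page_table[gphys_page]  # stage 2: VMM-maintained mapping
    return machine_page * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1 -> guest physical 3 -> machine 42
```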

b. Explain the different types of virtualization in detail. (CO3, 10 marks)


They are broadly classified into three categories:
 Client
 Application Packaging
 Application Streaming
 Hardware Emulation
 Server
 OS virtualization
 Hardware Emulation
 Para Virtualization
 Storage
 DAS
 NAS
 SAN

5. a. Elaborate on the architectural design of compute and storage clouds in detail. (CO2, 20 marks)
Cloud Storage
 When the average person first thinks about cloud storage, they will likely think about storing files (for example, songs, videos and applications) on a remote server, to be retrieved from multiple devices any time they need them.
 Cloud storage is essentially a system that allows you to store data on the Internet, just as you would save it on a computer.
 Whether it is Google Drive, Dropbox or iCloud, the definition of cloud storage remains the same.
 It enables you to upload data through the Internet to cloud-based servers. Once you have stored your data on the cloud, you, or any other person you give access to, can then access it from multiple devices using the Internet as a medium.
Compute Cloud
 You use cloud storage to save and keep data. Cloud
computing, on the other hand, is used to work on and
complete specified projects.
 Cloud computing is linked with cloud storage in that you
have to move data to the cloud (cloud storage) before you
can make use of cloud computing systems.
 Once the data is moved to the cloud, however, you or
someone else can process it into useful material and send it
back to you. An example of Cloud Computing is Software as
a Service (SaaS), where you input data on software and the
data is transformed remotely through a software interface
without your computer being involved.
Some distinguishing factors between cloud storage and cloud computing include:
1. Cloud computing requires higher processing power than cloud storage. Cloud storage, on the other hand, needs more storage space.
2. Cloud computing is essentially targeted towards businesses. Cloud storage, on the other hand, is utilized for both professional and personal reasons.
3. Cloud storage is simply a data storage and sharing medium, while cloud computing gives you the ability to remotely work on and transform data (for example, coding an application remotely).
The best example of a compute cloud in AWS is EC2 and that of storage is S3. Let's see a design architecture with both of these combined together.
EC2:
• Elastic Compute Cloud was launched in August 2006.
• Consumers can launch their own (virtual) computers in the cloud environment.
• Consumers can run their own applications.
• It provides a web interface through which users can create, launch and terminate instances.
• Uses Xen hypervisors for creating virtual machines.
• Amazon sizes the virtual servers as small, large and extra-large.
• Compute capacity is expressed as the approximate equivalent CPU capacity of physical hardware: 1 EC2 compute unit = a 1.0–1.2 GHz 2007 AMD Opteron.
• Hourly charge per virtual machine (rounded up to the next hour), plus a data transfer charge.
• Typical bursty workloads: end-of-month/year workloads, ad-hoc workloads, system testing.
• Amazon absorbs these peak demands by providing whatever is needed or required.
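As an illustration of the EC2 web-service interface described above, here is a minimal boto3 sketch for launching and terminating an instance. The region, AMI ID and key pair name are placeholders, and hourly billing applies as noted above, so treat this purely as a sketch.

```python
# Minimal boto3 sketch: create, wait for and terminate an EC2 instance.
# The region, AMI ID and key pair name below are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small virtual server from a (placeholder) AMI.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="my-keypair",              # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]
print("Launched", instance_id)

# Wait until the instance is running, then terminate it (hourly billing applies).
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
ec2.terminate_instances(InstanceIds=[instance_id])
```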
S3:
• Amazon provides EBS (Elastic Block Store) for consumers who want persistent storage.
• An EBS volume (1 GB–1 TB) is attached to the corresponding instance.
• Amazon S3 (Simple Storage Service) allows users to store objects of size 1 byte–5 GB.
• An object is stored in a bucket and retrieved via a user-assigned key.
• Amazon S3 is object storage built to store and retrieve any
amount of data from anywhere – web sites and mobile apps,
corporate applications, and data from IoT sensors or devices. It is
designed to deliver 99.999999999% durability, and stores data for
millions of applications used by market leaders in every industry. S3
provides comprehensive security and compliance capabilities that
meet even the most stringent regulatory requirements. It gives
customers flexibility in the way they manage data for cost
optimization, access control, and compliance. S3 provides query-in-
place functionality, allowing you to run powerful analytics directly
on your data at rest in S3. And Amazon S3 is the most supported
cloud storage service
available, with integration from the largest community of third-party
solutions, systems integrator partners, and other AWS services.
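A minimal boto3 sketch of the bucket/key model described above: create a bucket, put an object under a user-assigned key, and get it back. The bucket name is a placeholder and, in practice, must be globally unique.

```python
# Minimal boto3 sketch of the S3 object model: bucket + user-assigned key.
# The bucket name is a placeholder assumption.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "my-example-bucket-1234"          # placeholder, must be globally unique

s3.create_bucket(Bucket=bucket)

# Store an object under a user-assigned key ...
s3.put_object(Bucket=bucket, Key="notes/exam.txt", Body=b"cloud storage demo")

# ... and retrieve it by the same key.
obj = s3.get_object(Bucket=bucket, Key="notes/exam.txt")
print(obj["Body"].read().decode())
```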

(OR)
6. a. Give a detailed summary of resource provisioning and platform deployment. (CO2, 10 marks)
Types of cloud provisioning
The cloud provisioning process can be conducted using one of three
delivery models. Each delivery model differs depending on the
kinds of resources or services an organization purchases, how and
when the cloud provider delivers those resources or services, and
how the customer pays for them.
The three models are advanced provisioning (the customer contracts for services in advance and the provider prepares the resources before service begins), dynamic provisioning (resources are allocated flexibly as the customer's demand changes and billed on a pay-per-use basis) and user self-provisioning (the customer buys resources directly through a web interface or portal).
Platform Deployment:
Services can be deployed on a cloud platform in four different ways, as follows:

Public clouds:
• The instances are hosted and made available publicly.
• Owned by the organization selling cloud services.
• The cloud infrastructure is available to a large group of people.
• The hardware resources are virtualized over the internet (off-premise), e.g., Gmail, OneDrive, etc.
Advantages:
• Customers in a public cloud benefit economically, since the cost of the infrastructure is spread across all users.
• Clients on public clouds are provided with continuous, on-demand scalability.
• Public clouds make very efficient use of shared resources.
Private clouds:
• The cloud infrastructure is operated solely for the organization.
• Managed by a third party or by the organization itself, either on or off premise.
• Confined to a particular group of people.
• It is not shared with other organizations.
• Private clouds are more expensive but more secure.
Variations:
• On-Premise Private Cloud
• Externally hosted private cloud
Usage:
• Data sovereignty and cloud efficiencies are required.
• Greater server capacity is available.
Hybrid clouds (which combine both public and private):
• It is a combination of two or more cloud models.
• Non – critical apps are deployed in public.
• Critical and sensitive apps are deployed in private.
• Leasing public cloud services when private cloud capacity is
insufficient (e.g., Cloud burst).
Usage:
• An organization concerned about security wants to use a SaaS application.
• Companies want to offer services for various markets; here, a public cloud can be used for client interaction and a private cloud can be used to keep the data secure.
Advantages:
• The hybrid architecture offers the benefits of multiple deployment
models
• Hybrid clouds help in maintaining business efficiently
Community clouds:
• This is a multi-tenant service model governed, shared, managed and secured among several organizations or by a service provider.
• These are a hybrid form of private clouds.
Usage:
• Resources need to be shared between several organizations from a specific group with common computing goals.
• E.g., hospitals use a private HIPAA-compliant cloud.

b. Describe the global exchange of cloud resources. (CO2, 10 marks)


 Enterprises currently employ Cloud services in order to
improve the scalability of their services and to deal with
bursts in resource demands. However, at present, service
providers have inflexible pricing, generally limited to flat
rates or tariffs based on usage thresholds, and consumers are
restricted to offerings from a single provider at a time. Also,
many providers have proprietary interfaces to their services
thus restricting the ability of consumers to swap one
provider for another.
 For Cloud computing to mature, it is required that the
services follow standard interfaces.
 This would enable services to be commoditised and thus,
would pave the way for the creation of a market
infrastructure for trading in services. An example of such a market system is modeled on real-world exchanges, with the following participants.
 The market directory allows participants to locate providers
or consumers with the right offers. Auctioneers periodically
clear bids and asks received from market participants. The
banking system ensures that financial transactions pertaining
to agreements between participants are carried out.
 Brokers perform the same function in such a market as they
do in real-world markets: they mediate between consumers
and providers by buying capacity from the provider and sub-
leasing these to the consumers.
 A broker can accept requests from many users who have a
choice of submitting their requirements to different brokers.
Consumers, brokers and providers are bound to their
requirements and related compensations through SLAs.
 An SLA specifies the details of the service to be provided in terms of metrics agreed upon by all parties, and the penalties for violating those expectations.
 Such markets can bridge disparate Clouds allowing
consumers to choose a provider that suits their requirements
by either executing SLAs in advance or by buying capacity
on the spot.
 Providers can use the markets in order to perform effective
capacity planning.
 A provider is equipped with a price-setting mechanism which sets the current price for the resource based on market conditions, user demand, and the current level of utilization of the resource.
 Pricing can be either fixed or variable depending on the
market conditions.
 An admission-control mechanism at a provider’s end selects
the auctions to participate in or the brokers to negotiate with,
based on an initial estimate of the utility.
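To illustrate how an auctioneer might "periodically clear bids and asks" as described above, here is a toy Python sketch of a simple double-auction clearing step. The matching rule (highest bid against lowest ask, trade at the midpoint price) is an assumption chosen only for illustration, not a description of any specific market implementation.

```python
# Toy double-auction clearing sketch: match consumer bids with provider asks.
# The midpoint-pricing rule is an illustrative assumption, not a standard.

def clear_market(bids, asks):
    """bids/asks are lists of (participant, price_per_unit)."""
    bids = sorted(bids, key=lambda b: b[1], reverse=True)  # highest bid first
    asks = sorted(asks, key=lambda a: a[1])                # lowest ask first
    trades = []
    while bids and asks and bids[0][1] >= asks[0][1]:
        buyer, bid = bids.pop(0)
        seller, ask = asks.pop(0)
        trades.append((buyer, seller, round((bid + ask) / 2, 2)))
    return trades

bids = [("consumerA", 0.12), ("consumerB", 0.08)]   # $/CPU-hour offered
asks = [("providerX", 0.10), ("providerY", 0.07)]   # $/CPU-hour requested
print(clear_market(bids, asks))
# one trade clears here: consumerA buys from providerY at ~0.10 $/CPU-hour
```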

7. a. Write a brief note on the parallel programming model. (CO5, 5 marks)

b. Write a short note on the distributed programming model. (CO5, 5 marks)


c. Explain the Hadoop library from Apache. (CO5, 5 marks)
• Hadoop Common – contains libraries and utilities needed by
other Hadoop modules
• Hadoop Distributed File System (HDFS) – a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster
• Hadoop YARN – a platform responsible for managing computing resources in clusters and using them for scheduling users' applications
• Hadoop MapReduce – an implementation of the MapReduce
programming model for large-scale data processing

d. Write briefly about the working of the MapReduce programming model. (CO5, 5 marks)


MapReduce Engine:
• JobTracker & TaskTracker
• The JobTracker splits up the data into smaller tasks (“Map”) and sends them to the TaskTracker process in each node
• The TaskTracker reports back to the JobTracker node on job progress, sends data (“Reduce”) or requests new jobs
• None of these components are necessarily limited to using
HDFS
• Many other distributed file-systems with quite different
architectures work
• Many other software packages besides Hadoop's
MapReduce platform make use of HDFS
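A minimal, self-contained Python sketch of the MapReduce flow described above (map, shuffle/group by key, reduce). It runs in a single process purely for illustration; on Hadoop the same map and reduce functions would be distributed across the cluster by the JobTracker/TaskTracker (or YARN).

```python
# Word count expressed in the MapReduce style: map -> shuffle/group -> reduce.
# Single-process illustration of the programming model only.
from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) pairs, like a Hadoop mapper."""
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    """Sum the counts for one key, like a Hadoop reducer."""
    return (word, sum(counts))

documents = ["the cloud stores data", "the cloud computes data"]

# Shuffle: group all intermediate values by key.
groups = defaultdict(list)
for doc in documents:
    for word, one in map_phase(doc):
        groups[word].append(one)

results = [reduce_phase(word, counts) for word, counts in sorted(groups.items())]
print(results)  # [('cloud', 2), ('computes', 1), ('data', 2), ('stores', 1), ('the', 2)]
```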

(OR)
8. a. Describe the architecture of EUCALYPTUS in detail. (CO5, 10 marks)
• Cloud Controller: Monitor the availability of resources on
various components of the cloud infrastructure, including
hypervisor nodes that are used to actually provision the
instances and the cluster controllers that manage the
hypervisor nodes
• Cluster Controller: To control the virtual network available
to the instances
• To collect information about the NCs registered with it and
report it to the CLC
• Node Controller: Queries the OS about the node’s physical resources and the status of the VMs
• Collection of data related to the resource availability and
utilization on the node and reporting the data to CC
• Storage Controller: Creation of persistent EBS devices
• Interfacing with the storage systems (NFS, iSCSI)
• Walrus: Allows users to create, delete buckets also put, get
and delete objects
• Interface compatibility with Amazon S3
• Supports AMI image-management interface
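Because Walrus is interface-compatible with Amazon S3, an S3 client can often be pointed at a Walrus endpoint simply by overriding the endpoint URL. A hypothetical boto3 sketch follows; the endpoint URL, credentials and bucket name are placeholders, not real Eucalyptus values.

```python
# Sketch: talking to Walrus through its S3-compatible interface with boto3.
# Endpoint, credentials and bucket name are placeholder assumptions.
import boto3

walrus = boto3.client(
    "s3",
    endpoint_url="http://eucalyptus.example.com:8773/services/objectstorage",
    aws_access_key_id="EUCA_ACCESS_KEY",
    aws_secret_access_key="EUCA_SECRET_KEY",
)

walrus.create_bucket(Bucket="demo-bucket")
walrus.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello walrus")
print(walrus.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```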

b. Write a detailed note about OpenStack. (CO5, 10 marks)


• OpenStack is an open-source infrastructure-as-a-service (IaaS) initiative for creating and managing large groups of virtual private servers in a data centre.
Components of Open Stack:
• Nova - provides virtual machines (VMs) upon demand.
• Swift - provides a scalable storage system that supports
object storage.
• Cinder - provides persistent block storage to guest VMs.
• Glance - provides a catalogue and repository for virtual disk
images.
• Keystone - provides authentication and authorization for all
the OpenStack services.
• Horizon - provides a modular web-based user interface (UI)
for OpenStack services.
• Neutron - provides network connectivity-as-a-service
between interface devices managed by OpenStack services.

• Ceilometer - provides a single point of contact for billing
systems.
• Heat - provides orchestration services for multiple composite
cloud applications.
• Trove - provides database-as-a-service provisioning for relational and non-relational database engines.
• Sahara - provides data processing services for OpenStack-
managed resources.
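A minimal sketch of using Nova, Glance and Neutron through the official openstacksdk client; the cloud name, image, flavor and network names are placeholders taken from a hypothetical clouds.yaml entry.

```python
# Minimal openstacksdk sketch: boot a VM via Nova using a Glance image.
# The cloud name and the image/flavor/network names are placeholder assumptions.
import openstack

# Credentials come from a clouds.yaml entry named "mycloud" (assumed).
conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("cirros-0.5.2")     # Glance image (placeholder)
flavor = conn.compute.find_flavor("m1.small")       # Nova flavor (placeholder)
network = conn.network.find_network("private")      # Neutron network (placeholder)

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)   # expected: ACTIVE
```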

Compulsory:
9. a. Describe the process of Identity Management and Access Control in detail. (CO1, 10 marks)
 Users
The "identity" aspect of AWS Identity and Access
Management (IAM) helps you with the question "Who
is that user?", often referred to as authentication.
Instead of sharing your root user credentials with
others, you can create individual IAM users within your
account that correspond to users in your organization.
IAM users are not separate accounts; they are users
within your account. Each user can have its own
password for access to the AWS Management
Console.
 Groups:
A group contains a number of users who are logically grouped together for desired activities.
 Roles:
Roles are assigned to an AWS service so that it can access another service.
 Policies:
These are sets of permissions to access a resource in the cloud environment.
Other IAM features include shared access to your AWS account and granular permissions.
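A hedged boto3 sketch of the users/groups/policies flow described above: create a user, add them to a group, and attach an AWS-managed read-only S3 policy to the group. The user and group names are placeholders; the managed policy ARN shown is the standard AmazonS3ReadOnlyAccess policy.

```python
# Minimal boto3 sketch of IAM users, groups and policies.
# User and group names are placeholders; run with administrator credentials.
import boto3

iam = boto3.client("iam")

# Individual IAM user instead of sharing root credentials.
iam.create_user(UserName="alice")

# Group for logically related users; permissions attach to the group.
iam.create_group(GroupName="developers")
iam.add_user_to_group(GroupName="developers", UserName="alice")

# Granular permissions via an AWS-managed policy (read-only access to S3).
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Console access for the user (password is a placeholder).
iam.create_login_profile(UserName="alice", Password="TempPassw0rd!")
```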

b. Elaborate on the different types of risks in detail. (CO1, 10 marks)


• Audit and compliance risks
• Security risks
• Information risks
• Performance and availability risks
• Interoperability risks
• Contract risks
• Billing risks
