Answer: This is one of the most basic AWS interview questions, and the answer can be simple or detailed depending on the interviewer.
Amazon Route 53: This is a DNS (Domain Name System) web service.
Amazon Simple Email Service (SES): This service allows customers and clients to send email through standard SMTP or a RESTful API.
Identity and Access Management (IAM): This is designed to provide heightened identity control and security for a client's AWS account.
Amazon S3 (Simple Storage Service): This is one of the most widely used of all the AWS services and is mainly used for storage.
EBS (Elastic Block Store): This service attaches persistent block storage to EC2 and is designed so that data can outlive the lifespan of a single EC2 instance.
Amazon CloudWatch: This is mainly a monitoring utility, designed to help administrators inspect their resources and provision additional capacity when a problem arises.
Create and spin up a fresh EBS-backed instance alongside the currently running instance.
Stop the existing instance and detach its root EBS volume, noting the device it was attached to.
Take note of the new machine's instance ID and attach the detached root volume to your fresh server.
7. Name the basic components of Amazon Web Services.
Answer: Amazon Web Services (AWS) consists of four main components, as listed below:
Amazon S3: This component is used to store the information that makes up the cloud architecture and to retrieve stored data as a consequence of a specified key.
Amazon EC2 instance: This component is designed to run automatic parallelization and achieve job scheduling. It is immensely helpful for running a large distributed system, such as a Hadoop cluster.
Amazon SimpleDB: This component stores the transitional state logs and the tasks executed by the client or consumer.
Amazon SQS: This component is mainly designed to act as a mediator between different controllers, providing an additional buffer between components.
8. What is Amazon EC2?
Answer: Amazon EC2 stands for Amazon Elastic Compute Cloud, a service designed to provide its customers with resizable and scalable computing capacity in the cloud. Using Amazon EC2, a client can launch as many virtual servers as needed; in each of these virtual servers, the client can manage storage and configure security as required. The main advantage of Amazon EC2 is its ability to get everything done with minimal friction at all times.
9. List out all the best security practices for AWS EC2.
Answer: As a client using Amazon EC2, there are some security best practices that need to be followed at all times, as outlined below.
Use AWS Identity and Access Management (IAM) to control and limit access to all your AWS resources.
Allow only trusted networks and hosts to access ports on the instance.
Review the rules in your security groups regularly.
Open only the ports that are strictly required.
One of the most important security measures is to disable password-based login, as passwords are a frequent point of security compromise; use key-based SSH authentication instead.
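Disabling password logins is usually done in the SSH daemon's configuration. A minimal sketch of the relevant lines in /etc/ssh/sshd_config (assuming key-based access is already set up for your users):

```
# /etc/ssh/sshd_config -- force key-based authentication only
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
```

Restart the SSH service after editing so the new settings take effect.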
Once the command to stop an instance is issued, the instance first performs a normal shutdown and then transitions to the stopped state. All attached Amazon EBS volumes remain attached as they were, and you can resume the instance at a later stage. One of the main advantages of this feature is that Amazon doesn't charge you for the hours the instance spends in the stopped state.
When you issue the terminate command, the instance first performs a normal shutdown, and then the attached Amazon EBS volumes are deleted unless a volume's deleteOnTermination attribute is set to false, in which case the volume is detached and preserved. Once terminated, the client cannot resume the instance at a later stage.
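The stop-versus-terminate difference can be sketched as a small Python model (the function and field names here are hypothetical; the real behavior is controlled by the EBS deleteOnTermination attribute):

```python
def volumes_after(action, volumes):
    """Model what happens to attached EBS volumes on stop vs terminate.

    action: "stop" or "terminate"
    volumes: list of dicts like {"id": "vol-1", "delete_on_termination": True}
    Returns the ids of the volumes that still exist after the action.
    """
    if action == "stop":
        # Stopping keeps every attached volume; the instance can be resumed.
        return [v["id"] for v in volumes]
    if action == "terminate":
        # Terminating deletes volumes unless delete_on_termination is False.
        return [v["id"] for v in volumes if not v["delete_on_termination"]]
    raise ValueError(f"unknown action: {action}")

vols = [{"id": "vol-root", "delete_on_termination": True},
        {"id": "vol-data", "delete_on_termination": False}]
print(volumes_after("stop", vols))       # -> ['vol-root', 'vol-data']
print(volumes_after("terminate", vols))  # -> ['vol-data']
```

The model shows why a stopped instance can be resumed with its data intact, while termination only spares volumes that were explicitly flagged to survive.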
Each of these regions is completely independent of the others, and each availability zone is isolated as well. However, the availability zones within a particular region are interconnected through multiple low-latency links.
15. Which of the following is a method for bidding on unused EC2 capacity based on
the current spot price?
Answer: A Spot Instance is the method for bidding on unused EC2 capacity. This feature lets you pay an affordable low price, but the availability of the instance varies depending on how much excess capacity exists at the time.
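The bidding behavior can be illustrated with a deliberately simplified model (hypothetical function; real Spot pricing and interruption handling have more nuance):

```python
def spot_instance_running(bid, spot_price_history):
    """Return, per pricing period, whether a Spot Instance keeps running.

    A Spot Instance runs while the bid is at or above the current spot
    price and is interrupted once the spot price rises above the bid.
    """
    return [bid >= price for price in spot_price_history]

# Bid $0.05/hour against a fluctuating spot price:
print(spot_instance_running(0.05, [0.03, 0.04, 0.06, 0.04]))
# -> [True, True, False, True]
```

The third period shows the interruption: excess capacity shrank, the spot price rose above the bid, and the instance was reclaimed until the price fell again.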
18. Which Amazon cloud-based storage system allows you to store data objects
ranging in size from 1 byte up to 5GB?
Answer: Amazon S3 is the cloud-based storage system that allows you to store data objects ranging in size from 1 byte up to 5 GB. In S3, the storage containers that hold these objects are referred to as buckets.
When those thresholds are crossed, a new instance of your preferred type will be spun up, rolled out, and configured into the load balancer pool. At that point you have scaled horizontally without the intervention of an operator.
Autoscaling group
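The scale-out decision described above can be sketched as a toy function (the threshold values and names here are hypothetical; a real Auto Scaling group reacts to CloudWatch alarms):

```python
def desired_capacity(current, cpu_percent, scale_out_at=80, scale_in_at=20,
                     min_size=1, max_size=10):
    """Decide the new instance count from a CPU utilization reading."""
    if cpu_percent > scale_out_at:
        return min(current + 1, max_size)   # add an instance to the pool
    if cpu_percent < scale_in_at:
        return max(current - 1, min_size)   # remove an idle instance
    return current                          # within bounds: no change

print(desired_capacity(2, 95))  # -> 3 (scaled out)
print(desired_capacity(2, 10))  # -> 1 (scaled in)
print(desired_capacity(2, 50))  # -> 2 (unchanged)
```

The min_size and max_size bounds mirror the limits you set on an Auto Scaling group so a traffic spike cannot grow the fleet without limit.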
23. What is Global Server Load Balancing (GSLB) and does Clustering need to be
turned on in order to use GSLB?
Answer: GSLB (Global Server Load Balancing) is very similar to SLB (Server Load Balancing), but GSLB takes SLB to a global scale. It allows us to load balance VIPs from various geographical locations as a single entity, which gives the geographic sites scalability and fault tolerance.
Yes, you must turn on clustering and configure it in order to use Global Server Load Balancing. Every proxy within a site or cluster must have the same configuration, so that any appliance can act as the DNS server if it becomes the master for the site. Each site has a unique SLB/GSLB/cluster configuration, and you use the GSLB site overflow command to add the remote GSLB site to the local appliance.
24. What are the automation tools that can be used to spin up the servers?
Answer: Using the AWS API to roll your own scripts is the most prominent approach; such scripts can be written in any language of your choice, such as Bash or Python. Another option is to use a configuration management and provisioning tool such as Puppet or, better still, Opscode Chef.
One more prominent option is Ansible, because no agent is required and plain shell scripts can be run as they are. CloudFormation and Terraform are also worth looking at; in the end, the whole infrastructure can be captured in the resulting code and checked into a git repository.
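As an illustration of the infrastructure-as-code approach mentioned above, a minimal Terraform fragment might look like the sketch below (the AMI id, instance type, and tag value are placeholders, not values from this article):

```hcl
# Hypothetical example: one EC2 instance captured as code.
resource "aws_instance" "web" {
  ami           = "ami-12345678"   # placeholder AMI id
  instance_type = "t2.micro"       # placeholder instance type

  tags = {
    Name = "example-web-server"
  }
}
```

Because the server is described declaratively, the file can be reviewed, versioned in git, and re-applied to reproduce the same infrastructure.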
25. What are those load balancing methods which are supported with array network
GSLB and also explain Reverse Proxy Cache?
Answer: The following Global Server Load Balancing methods are supported by the Array appliance:
Overflow: This method sends requests to a different remote site when the local site is loaded beyond 80%.
lc: "lc" stands for Least Connections; it sends clients to the site with the lowest count of current connections.
rr: "rr" stands for Round Robin; it sends clients to each site in round-robin fashion.
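The lc and rr methods can be sketched in a few lines of Python (a simplified model; a real GSLB appliance tracks live connection counts per site):

```python
import itertools

def least_connections(sites):
    """lc: pick the site with the fewest current connections."""
    return min(sites, key=lambda s: sites[s])

def round_robin(sites):
    """rr: cycle through the sites in order, forever."""
    return itertools.cycle(sites)

conns = {"us-east": 12, "eu-west": 4, "ap-south": 9}
print(least_connections(conns))       # -> eu-west

rr = round_robin(["us-east", "eu-west"])
print([next(rr) for _ in range(4)])   # -> ['us-east', 'eu-west', 'us-east', 'eu-west']
```

lc adapts to uneven load across sites, while rr simply distributes clients evenly regardless of how busy each site currently is.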
A Reverse Proxy Cache is a cache deployed in front of the origin servers, which is the reason for the "reverse" in the name. If a client requests a cached object, the proxy serves the request from the cache rather than from the origin server.
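The reverse proxy cache behavior can be modeled in a few lines (a deliberately simplified sketch with no expiry or invalidation; the class and names are illustrative only):

```python
class ReverseProxyCache:
    """Serve repeated requests from a cache in front of the origin server."""

    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # callable hitting the origin server
        self._cache = {}
        self.origin_hits = 0              # how often the origin was contacted

    def get(self, path):
        if path not in self._cache:       # cache miss: go to the origin
            self.origin_hits += 1
            self._cache[path] = self._fetch(path)
        return self._cache[path]          # cache hit: origin is skipped

proxy = ReverseProxyCache(lambda path: f"<html>{path}</html>")
proxy.get("/index.html")
proxy.get("/index.html")   # second request is served from the cache
print(proxy.origin_hits)   # -> 1
```

Only the first request reaches the origin; every later request for the same object is answered by the proxy, which is exactly the offloading effect described above.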
Demand Scaling: The application can be auto-scaled, which helps handle spikes in workload or traffic while minimizing the cost of running the application.
Control over Tools: Tools and resources such as the Amazon EC2 instance type can be easily controlled.
EBS is economical with no hidden costs: you pay only for what you use.
The AWS Management Console can be accessed quickly, within the hour.
It supports languages such as Java, .NET, PHP, Node.js, Python, Ruby, etc.
AWS EBS handles the setup and monitors the AWS services used to create the web services.
34. Mention the time span in which the AWS Lambda function will execute.
Answer: An AWS Lambda invocation must complete within 300 seconds of the call. The default timeout is 3 seconds, and you can set any value between 1 and 300 seconds.
The approach is utterly simple, which translates into quicker time to market and thus higher sales.
Clients pay only while the code is running, so a huge amount of money can be saved, enhancing profits.
Clients do not need any additional infrastructure in order to run the application.
Clients do not need to give a second thought to the server on which the code runs.
42. Is it possible to debug and troubleshoot the small or microservices?
Answer: Yes, it is very much possible to debug and troubleshoot small services as well as microservices. Debugging can be done even while the relevant tasks are being performed in the background.
All the data can simply be stored in local server memory.
The data can be stored directly in the database without affecting performance.
Integration testing is highly powerful and can be performed with multiple vendors.
44. What is your opinion About Zero Downtime Deployment?
Answer: Deployments are most commonly considered in terms of functions. An advantageous feature of AWS Lambda is that it splits functions into separate cases when they are hugely complex. The app may remain offline during such a period, but the end result is always of high quality.
CloudWatch can be used to track and collect metrics, set alarms, collect and
monitor log files, and also monitor resources such as EC2 instances, RDS DB
instances, and DynamoDB tables.
AWS Cloudwatch
CloudWatch Logs can be collected on multiple platforms, as listed below:
CentOS
Amazon Linux
Ubuntu
Red Hat Enterprise Linux
Windows
49. Does the CloudWatch logs agent support IAM roles?
Answer: Yes, the CloudWatch Logs agent supports IAM and can authenticate using either access keys or IAM roles.
Data points from high-resolution custom metrics, with a period of less than 60 seconds, are available for 3 hours.
Data points with a period of 60 seconds are available for 15 days.
Data points with a period of 5 minutes are available for 63 days.
Data points with a period of 1 hour are available for 455 days or 15 months.
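The retention schedule above can be encoded as a simple lookup (the function name is illustrative; the values are exactly those stated above, expressed in hours):

```python
def retention_hours(period_seconds):
    """Hours a CloudWatch data point is kept, given its collection period."""
    if period_seconds < 60:
        return 3                # high-resolution metrics: 3 hours
    if period_seconds < 300:
        return 15 * 24          # 60-second data: 15 days
    if period_seconds < 3600:
        return 63 * 24          # 5-minute data: 63 days
    return 455 * 24             # 1-hour data: 455 days (15 months)

print(retention_hours(1))     # -> 3
print(retention_hours(60))    # -> 360
print(retention_hours(300))   # -> 1512
print(retention_hours(3600))  # -> 10920
```

The pattern is that coarser-grained data points are kept longer, which is why hourly roll-ups survive for 15 months while sub-minute samples last only 3 hours.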