AWS Interview Questions 6

  41. Which of the following use cases are suitable for Amazon DynamoDB? Choose 2 answers
    1. Managing web sessions.
    2. Storing JSON documents.
    3. Storing metadata for Amazon S3 objects.
    4. Running relational joins and complex updates.

Answer A,C.

Explanation: DynamoDB is a good fit for low-latency key-value workloads such as managing web sessions and storing metadata for Amazon S3 objects. If all your JSON documents share the same fields, e.g. [id, name, age], a relational database is usually the better fit, and DynamoDB does not support relational joins or complex multi-table updates at all, so option D is out.

  42. How can I load my data to Amazon Redshift from different data sources like Amazon RDS, Amazon DynamoDB and Amazon EC2?

You can load the data in the following two ways:

    • You can use the COPY command to load data in parallel directly into Amazon Redshift from Amazon S3, Amazon EMR, Amazon DynamoDB, or any SSH-enabled host.
    • AWS Data Pipeline provides a high-performance, reliable, fault-tolerant solution to load data from a variety of AWS data sources. You can use AWS Data Pipeline to specify the data source and the desired data transformations, and then execute a pre-written import script to load your data into Amazon Redshift.
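As a rough sketch of the first approach, the COPY statement for a DynamoDB source can be assembled as below. The table names and role ARN are hypothetical placeholders; READRATIO limits how much of the table's provisioned read capacity the load consumes.

```python
# Sketch: building a Redshift COPY statement for a DynamoDB source.
# Table names and the IAM role ARN are hypothetical placeholders.

def build_copy_from_dynamodb(redshift_table: str, dynamodb_table: str,
                             iam_role_arn: str, read_ratio: int = 50) -> str:
    """Return a COPY statement that loads a Redshift table in parallel
    from a DynamoDB table, throttled to a share of its read capacity."""
    return (
        f"COPY {redshift_table} "
        f"FROM 'dynamodb://{dynamodb_table}' "
        f"IAM_ROLE '{iam_role_arn}' "
        f"READRATIO {read_ratio};"
    )

sql = build_copy_from_dynamodb(
    "sensor_readings",                                   # hypothetical Redshift table
    "SensorReadings",                                    # hypothetical DynamoDB table
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",   # placeholder ARN
)
print(sql)
```

You would run the resulting statement through any SQL client connected to the Redshift cluster.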
  43. Your application has to retrieve data from your users' mobile devices every 5 minutes, and the data is stored in DynamoDB. Every day at a particular time the data is extracted into S3 on a per-user basis, and your application later visualizes the data for the user. You are asked to optimize the architecture of the backend system to lower cost. What would you recommend?
    1. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
    2. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
    3. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
    4. Write data directly into an Amazon Redshift cluster, replacing both Amazon DynamoDB and Amazon S3.

Answer C.

Explanation: Since the workload requires the data to be extracted and analyzed, one way to optimize it is provisioned read throughput, but that is expensive. Using ElastiCache to cache frequently read results in memory instead reduces the provisioned read throughput the table needs, and hence reduces cost without affecting performance.
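The caching idea can be sketched as a read-through cache sitting in front of the table. In this toy model a plain dict stands in for the DynamoDB table and another for ElastiCache; the item key and value are made up:

```python
# Sketch: a read-through cache in front of a key-value store, illustrating
# how ElastiCache reduces the read throughput a DynamoDB table must provision.

class ReadThroughCache:
    def __init__(self, backend: dict):
        self.backend = backend          # stand-in for the DynamoDB table
        self.cache = {}                 # stand-in for ElastiCache
        self.backend_reads = 0          # reads that consume provisioned throughput

    def get(self, key):
        if key in self.cache:
            return self.cache[key]      # cache hit: no table read consumed
        self.backend_reads += 1         # cache miss: one table read
        value = self.backend[key]
        self.cache[key] = value
        return value

store = ReadThroughCache({"user-1": {"noise_db": 62}})
for _ in range(100):                    # 100 reads of the same hot item
    store.get("user-1")
print(store.backend_reads)              # 1: only the first read hits the table
```

The 99 cache hits are reads the table never has to serve, which is exactly the provisioned read throughput you can stop paying for.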

  44. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)
    1. Deploy ElastiCache in-memory cache running in each availability zone
    2. Implement sharding to distribute load to multiple RDS MySQL instances
    3. Increase the RDS MySQL Instance size and Implement provisioned IOPS
    4. Add an RDS MySQL read replica in each availability zone

Answer A,C.

Explanation: Since the site performs a high number of small reads and writes, provisioned IOPS alone may become expensive, yet high performance is still required. Frequently read data can therefore be served from an ElastiCache in-memory cache. As for RDS, since read contention is occurring, increasing the instance size and adding provisioned IOPS improves performance.

  45. A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that every month around 4 GB of sensor data is generated. The company uses a load-balanced, auto-scaled layer of EC2 instances and an RDS database with 500 GB standard storage. The pilot was a success and now they want to deploy at least 100K sensors, which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which setup of the following would you prefer?

  1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
  2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
  3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
  4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

Answer C.

Explanation: A Redshift cluster would be preferred because it is easy to scale and the work is done in parallel across the nodes, which suits a large workload like this use case. Since 4 GB of data is generated each month by the 100 pilot sensors, over 2 years that is around 96 GB. And since the number of sensors will increase to 100K, i.e. a thousandfold, 96 GB becomes approximately 96 TB. Hence option C is the right answer.
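The sizing estimate above can be written out as a short calculation:

```python
# Worked version of the sizing estimate: 100 pilot sensors generate 4 GB per
# month; production scales to 100,000 sensors with a 2-year retention window.

pilot_sensors = 100
pilot_gb_per_month = 4
months = 2 * 12

# Data for the pilot fleet over the full retention window.
pilot_total_gb = pilot_gb_per_month * months                # 96 GB

# The production fleet is 1000x larger, so volume scales by the same factor.
scale_factor = 100_000 // pilot_sensors                     # 1000
production_total_tb = pilot_total_gb * scale_factor / 1000  # GB -> TB
print(production_total_tb)                                  # 96.0 TB
```

That 96 TB matches the 6-node Redshift cluster in option C, while 3 TB of RDS storage in option D would fill up within the first month.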

46. Suppose you have an application where you have to render images and also do some general computing. From the following services which service will best fit your need?

  1. Classic Load Balancer
  2. Application Load Balancer
  3. Both of them
  4. None of these

Answer B.

Explanation: You would choose an Application Load Balancer, since it supports path-based routing: it can make routing decisions based on the URL, sending image-rendering requests to one set of instances and general-compute requests to another.
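The routing decision an ALB listener rule makes can be sketched as below; the path prefixes and target-group names are hypothetical:

```python
# Sketch of ALB path-based routing: each rule maps a URL path prefix to a
# target group. Patterns and target-group names are made up for illustration.

RULES = [
    ("/render/", "image-rendering-targets"),   # instances tuned for images
    ("/compute/", "general-compute-targets"),  # general-purpose instances
]
DEFAULT_TARGET = "general-compute-targets"     # default rule catches the rest

def route(path: str) -> str:
    """Pick a target group by URL path, like an ALB listener rule."""
    for prefix, target_group in RULES:
        if path.startswith(prefix):
            return target_group
    return DEFAULT_TARGET

print(route("/render/thumbnail/42"))   # image-rendering-targets
print(route("/compute/report"))        # general-compute-targets
```

A Classic Load Balancer has no equivalent of the RULES table: it forwards every request to the same backend pool regardless of path.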


47. What is the difference between Scalability and Elasticity?

Scalability is the ability of a system to increase its hardware resources to handle the increase in demand.

It can be done by increasing the hardware specifications or increasing the processing nodes.

Elasticity is the ability of a system to handle an increase in workload by adding hardware resources when demand increases (the same as scaling), but also to release those resources when they are no longer needed. This is particularly helpful in cloud environments, where a pay-per-use model is followed.
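The distinction can be made concrete with a toy capacity function: scalability is the growth as load rises, elasticity is the shrink back afterwards. The threshold and load figures are illustrative only:

```python
# Sketch contrasting scalability with elasticity: capacity is added as load
# rises (scaling out) and released when load falls (the elastic part).
# Per-instance capacity and load numbers are illustrative.

def desired_instances(requests_per_sec: int,
                      capacity_per_instance: int = 100,
                      minimum: int = 1) -> int:
    """Return how many instances are needed for the current load."""
    needed = -(-requests_per_sec // capacity_per_instance)  # ceiling division
    return max(needed, minimum)

fleet = []
for load in [50, 250, 900, 300, 40]:   # load rises, then falls again
    fleet.append(desired_instances(load))
print(fleet)   # [1, 3, 9, 3, 1] -- grows with demand, shrinks back after
```

A merely scalable system would reach 9 instances and stay there; an elastic one returns to 1 and stops paying for the other 8.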

  48. How will you change the instance type for instances which are running in your application tier and are using Auto Scaling? Where will you change it from the following areas?
    1. Auto Scaling policy configuration
    2. Auto Scaling group
    3. Auto Scaling tags configuration
    4. Auto Scaling launch configuration

Answer D.

Explanation: The Auto Scaling tags configuration is used to attach metadata to your instances. To change the instance type, you use the Auto Scaling launch configuration: since launch configurations are immutable, you create a new one with the new instance type and associate it with the group.
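The create-then-swap flow can be sketched with plain dicts standing in for the launch configuration and the Auto Scaling group; all names, AMI IDs, and instance types here are hypothetical:

```python
# Sketch of how an instance-type change rolls out: launch configurations are
# immutable, so a new one is created and the Auto Scaling group is pointed at
# it. Names, AMI IDs, and instance types are placeholders.

def change_instance_type(group: dict, configs: dict, new_type: str) -> dict:
    """Create a new launch configuration and attach it to the group."""
    old = configs[group["launch_configuration"]]
    new_name = f"{group['launch_configuration']}-{new_type}"
    configs[new_name] = {**old, "instance_type": new_type}  # copy, then change
    group["launch_configuration"] = new_name   # old config is left untouched
    return group

configs = {"web-lc-v1": {"ami": "ami-12345678", "instance_type": "t2.micro"}}
group = {"name": "web-asg", "launch_configuration": "web-lc-v1"}
change_instance_type(group, configs, "m5.large")
print(group["launch_configuration"])           # web-lc-v1-m5.large
```

Instances launched after the swap use the new type; existing instances keep running on the old type until they are replaced.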

  49. You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance?
  1. Create a load balancer, and register the Amazon EC2 instance with it
  2. Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin
  3. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action
  4. Create a launch configuration from the instance using the CreateLaunchConfigurationAction

Answer B.

Explanation: A CloudFront distribution caches the site's content at edge locations, so far fewer requests reach the EC2 origin, which directly reduces its load. Registering the single instance with a load balancer (option A) does not reduce its traffic, since every request still arrives at that one instance. Likewise, creating an Auto Scaling group or a launch configuration from the instance does not by itself add capacity or shed any load.

  50. When should I use a Classic Load Balancer and when should I use an Application Load Balancer?

A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while an Application Load Balancer is ideal for microservices or container-based architectures where there is a need to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.


For a detailed discussion on Auto Scaling and Load Balancers, please refer to our EC2 AWS blog.

  51. What does Connection draining do?
    1. Terminates instances which are not in use.
    2. Re-routes traffic from instances which are to be updated or failed a health check.
    3. Re-routes traffic from instances which have more workload to instances which have less workload.
    4. Drains all the connections from an instance, with one click.

Answer B.

Explanation: Connection draining is an ELB feature. When an instance fails a health check or has to be taken out of service, for example to be patched with a software update, the load balancer stops sending new requests to that instance and routes them to the other instances, while requests already in flight on the draining instance are allowed to complete.
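The behaviour can be sketched with a toy round-robin balancer; the instance IDs are made up:

```python
# Sketch of connection draining: when an instance is taken out of service,
# new requests go to the remaining instances while requests already in
# flight on the draining instance are left to finish.

class LoadBalancer:
    def __init__(self, instances):
        self.healthy = list(instances)
        self.draining = set()
        self.in_flight = {i: 0 for i in instances}
        self._next = 0

    def send_request(self) -> str:
        targets = [i for i in self.healthy if i not in self.draining]
        instance = targets[self._next % len(targets)]  # round robin
        self._next += 1
        self.in_flight[instance] += 1
        return instance

    def start_draining(self, instance: str):
        self.draining.add(instance)     # stop sending NEW requests only;
                                        # in-flight requests are not cut off

lb = LoadBalancer(["i-a", "i-b"])
lb.send_request(); lb.send_request()    # one request lands on each instance
lb.start_draining("i-a")
print(lb.send_request())                # i-b: new traffic avoids i-a
print(lb.in_flight["i-a"])              # 1: its in-flight request continues
```

Without draining, taking `i-a` out of service would also abort the request it was still serving.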

  52. When an instance is unhealthy, it is terminated and replaced with a new one. Which of the following services does that?
    1. Sticky Sessions
    2. Fault Tolerance
    3. Connection Draining
    4. Monitoring

Answer B.

Explanation: When ELB detects that an instance is unhealthy, it starts routing incoming traffic to the other healthy instances in the region. If all the instances in a region become unhealthy, and you have instances in another Availability Zone/region, traffic is directed to them. Once the original instances become healthy again, traffic is routed back to them.

  53. What are lifecycle hooks used for in Auto Scaling?
    1. They are used to do health checks on instances
    2. They are used to put an additional wait time to a scale in or scale out event.
    3. They are used to shorten the wait time to a scale in or scale out event
    4. None of these

Answer B.

Explanation: Lifecycle hooks are used to add a wait time before a lifecycle action, i.e., launching or terminating an instance, completes. This wait time can be used for anything from extracting log files before terminating an instance to installing the necessary software on an instance before launching it.
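A termination hook's sequence can be sketched as a simple event log; the instance ID and cleanup step are hypothetical, and the state names mirror the wait/proceed pattern described above:

```python
# Sketch of a termination lifecycle hook: the instance pauses in a wait
# state so cleanup (e.g. copying logs off the instance) can run before
# termination proceeds. Instance ID and cleanup work are placeholders.

def terminate_with_hook(instance: str, cleanup, log: list) -> list:
    """Run the hook's wait-state work, then complete the lifecycle action."""
    log.append(f"{instance}: Terminating:Wait")    # hook pauses the action
    cleanup(instance, log)                         # work done during the wait
    log.append(f"{instance}: CONTINUE")            # complete the action
    log.append(f"{instance}: Terminating:Proceed") # termination resumes
    return log

def copy_logs(instance, log):
    log.append(f"{instance}: logs copied to S3")   # illustrative cleanup step

events = terminate_with_hook("i-0abc", copy_logs, [])
print(events)
```

The key point the sketch shows is ordering: the cleanup runs strictly between the wait state and the completion of the terminate action.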

  54. A user has set up an Auto Scaling group. Due to some issue the group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition?
    1. Auto Scaling will keep trying to launch the instance for 72 hours
    2. Auto Scaling will suspend the scaling process
    3. Auto Scaling will start an instance in a separate region
    4. The Auto Scaling group will be terminated automatically

Answer B.

Explanation: If Auto Scaling repeatedly fails to launch instances for an extended period, it suspends the scaling processes for the group. More generally, Auto Scaling allows you to suspend and then resume one or more of the scaling processes in your group, which is useful when you want to investigate a configuration problem or other issue with your web application and make changes without triggering further scaling activity.

  55. You have an EC2 Security Group with several running EC2 instances. You changed the Security Group rules to allow inbound traffic on a new port and protocol, and then launched several new instances in the same Security Group. The new rules apply:
    1. Immediately to all instances in the security group.
    2. Immediately to the new instances only.
    3. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.
    4. To all instances, but it may take several minutes for old instances to see the changes.

Answer A.

Explanation: Any rule added to an EC2 Security Group applies immediately to all instances in the group, regardless of whether they were launched before or after the rule was added.

  56. To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)
    1. Route 53 Record Sets
    2. Elastic IP Addresses (EIP)
    3. EC2 Key Pairs
    4. Launch configurations
    5. Security Groups

Answer A,B.

Explanation: Route 53 record sets and Elastic IP addresses are treated here as common assets, so there is no need to replicate them; Route 53 in particular is a global service, so its record sets resolve across regions. EC2 key pairs, launch configurations, and security groups, by contrast, are created per region and must be recreated in the second region.

  57. A customer wants to capture all client connection information from his load balancer at an interval of 5 minutes. Which of the following options should he choose for his application?
    1. Enable AWS CloudTrail for the loadbalancer.
    2. Enable access logs on the load balancer.
    3. Install the Amazon CloudWatch Logs agent on the load balancer.
    4. Enable Amazon CloudWatch metrics on the load balancer.

Answer B.

Explanation: Elastic Load Balancing access logs capture detailed information about every request sent to the load balancer, including the client's IP address, request time, and latencies, and they can be published to Amazon S3 at 5-minute intervals. CloudTrail records API calls made against the load balancer, not client connections, and CloudWatch metrics are aggregate statistics rather than per-connection details.

  58. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the customer's requirement?

  1. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
  2. Enable server access logging for all required Amazon S3 buckets.
  3. Enable the Requester Pays option to track access via AWS Billing
  4. Enable Amazon S3 event notifications for Put and Post.

Answer B.

Explanation: S3 server access logging records detailed information about each request made to a bucket, including the requester, request time, action, and response status, which is exactly what internal security and access audits need. By default, AWS CloudTrail captures bucket-level API calls but not individual object accesses, so server access logging is the better fit here.

  59. Which of the following are true regarding AWS CloudTrail? (Choose 2 answers)
    1. CloudTrail is enabled globally
    2. CloudTrail is enabled on a per-region and service basis
    3. Logs can be delivered to a single Amazon S3 bucket for aggregation.
    4. CloudTrail is enabled for all available services within a region.

Answer B,C.

Explanation: CloudTrail is enabled on a per-region basis and is not enabled for every service, so option B is correct. The log files from multiple regions can also be delivered to a single Amazon S3 bucket for aggregation, so option C is correct as well.

  60. What happens if CloudTrail is turned on for my account but my Amazon S3 bucket is not configured with the correct policy?

CloudTrail files are delivered according to S3 bucket policies. If the bucket is not configured or is misconfigured, CloudTrail might not be able to deliver the log files.
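The required policy follows the documented two-statement shape: CloudTrail first checks the bucket ACL, then writes log files under AWSLogs/<account-id>/ with the bucket-owner-full-control ACL. A sketch of that policy as a Python dict, with a placeholder bucket name and account ID:

```python
# Sketch of the bucket policy CloudTrail needs before it can deliver log
# files. Bucket name and account ID are placeholders; the two statements
# follow the documented CloudTrail policy shape (ACL check + write).

def cloudtrail_bucket_policy(bucket: str, account_id: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # CloudTrail first verifies it can read the bucket's ACL
                "Sid": "AWSCloudTrailAclCheck",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {   # ...then writes log files under AWSLogs/<account-id>/
                "Sid": "AWSCloudTrailWrite",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/{account_id}/*",
                "Condition": {
                    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            },
        ],
    }

policy = cloudtrail_bucket_policy("my-trail-bucket", "123456789012")
print(policy["Statement"][1]["Action"])   # s3:PutObject
```

If either statement is missing or the condition is wrong, CloudTrail's delivery attempts fail and the log files never arrive in the bucket.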