Day 49 - INTERVIEW QUESTIONS ON AWS

Radheya Zunjur
8 min read · Aug 16, 2023


Here we come to the end of the AWS series, so let’s summarize our learnings as questions and answers. If you are preparing for an AWS interview, this list of AWS interview questions and answers is all you need to get through it.

INTERVIEW QUESTIONS:

  1. Name 5 AWS services you have used and what are their use cases?
  • Amazon EC2 (Elastic Compute Cloud): EC2 provides scalable computing capacity in the cloud. It’s commonly used to host web applications, run batch processing jobs, and handle various compute-intensive tasks.
  • Amazon S3 (Simple Storage Service): S3 offers scalable object storage for various types of data, such as backups, static website hosting, data archiving, and content distribution.
  • Amazon RDS (Relational Database Service): RDS makes it easy to set up, operate, and scale relational databases. It’s used for hosting databases like MySQL, PostgreSQL, Oracle, and SQL Server.
  • Amazon DynamoDB: DynamoDB is a managed NoSQL database service, suitable for applications that require low-latency, seamless scalability, and high availability, such as gaming and mobile applications.
  • Amazon SQS (Simple Queue Service): SQS provides a scalable, fully managed message queuing service for decoupling and scaling microservices, distributed systems, and serverless applications.

2. What are the tools used to send logs to the cloud environment?

Tools commonly used to send logs to a cloud environment include:

  • Amazon CloudWatch Logs: AWS service for ingesting, storing, and analyzing logs. Applications and AWS services can send logs directly to CloudWatch Logs.
  • AWS Lambda: Serverless compute service that can be used to process logs and send them to various destinations.
  • Logstash and Elasticsearch: Often used together in the ELK (Elasticsearch, Logstash, Kibana) stack for log collection, processing, and visualization.
  • Fluentd: An open-source data collector that can unify the data collection and consumption for better use and understanding of data.

3. What are IAM Roles? How do you create and manage them?

IAM (Identity and Access Management) roles in AWS provide a secure way to grant permissions to entities that you trust. These roles are not associated with a specific user, but rather with AWS resources, services, or applications. They are used for granting temporary permissions to services or resources, such as EC2 instances or Lambda functions.

To create and manage IAM roles:

  • Creating Roles: Go to the IAM console, navigate to “Roles,” and click “Create Role.” Choose the trusted entity type (e.g., AWS service or another AWS account), select permissions policies, and define role details.
  • Managing Roles: You can update trust policies and attach/detach permissions policies to roles. Roles can be assumed by entities like EC2 instances or Lambda functions, allowing them to access other AWS resources based on the permissions associated with the role.
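When you create a role, the trust policy is the JSON document that says who may assume it. Below is a minimal sketch in Python of a trust policy allowing EC2 to assume a role; the service principal shown is the standard one for EC2, but treat the overall shape as illustrative:

```python
import json

# Minimal trust policy: lets the EC2 service assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# This JSON is what you would pass to:
#   aws iam create-role --role-name my-role \
#       --assume-role-policy-document file://trust.json
print(json.dumps(trust_policy, indent=2))
```

Permissions policies (what the role can *do*) are attached separately; the trust policy only controls who can *become* the role.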

4. How to upgrade or downgrade a system with zero downtime?

Achieving zero downtime typically involves deploying the new version in parallel with the existing one and then seamlessly switching traffic to the upgraded version. Techniques like blue-green deployment, canary deployment, and rolling deployments can be used to achieve this. Load balancers, proper health checks, and monitoring play a crucial role in these strategies.
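One concrete way to do a blue-green or canary shift on AWS is a weighted forward action on an ALB listener, splitting traffic between the old ("blue") and new ("green") target groups. A sketch, with hypothetical target group ARNs:

```python
import json

# Hypothetical target group ARNs for the current (blue) and new (green) stacks.
BLUE_TG = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/blue/abc"
GREEN_TG = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/green/def"

def weighted_forward_action(green_weight: int) -> list:
    """Build an ALB listener default action splitting traffic between
    blue and green target groups (weights sum to 100)."""
    return [{
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": BLUE_TG, "Weight": 100 - green_weight},
                {"TargetGroupArn": GREEN_TG, "Weight": green_weight},
            ]
        },
    }]

# Canary: start with 10% on green, then shift to 100% once health checks pass.
# Passed to: aws elbv2 modify-listener --default-actions file://actions.json
print(json.dumps(weighted_forward_action(10), indent=2))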

5. What is infrastructure as code and how do you use it?

Infrastructure as Code involves managing and provisioning infrastructure using code and automation. Instead of manually configuring resources, you define your infrastructure in code, which can then be version-controlled, tested, and deployed consistently.

Tools for IaC in AWS include:

  • AWS CloudFormation: A service that enables you to define and provision AWS infrastructure using JSON or YAML templates.
  • Terraform: An open-source tool for building, changing, and versioning infrastructure in a safe and efficient manner.
  • AWS CDK (Cloud Development Kit): A software development framework to define cloud infrastructure in code and provision it through AWS CloudFormation.

6. What is a load balancer? Give scenarios of each kind of balancer based on your experience.

A load balancer is a network device or service that distributes incoming network traffic across multiple servers or resources to ensure efficient utilization, high availability, and improved reliability. The two most commonly used types in AWS are:

  • Application Load Balancer (ALB): ALBs operate at the application layer (Layer 7) of the OSI model and distribute traffic based on application content. They are ideal for routing HTTP/HTTPS traffic and provide advanced features like path-based routing and host-based routing. ALBs are used for distributing traffic to different application services or microservices.
  • Network Load Balancer (NLB): NLBs operate at the transport layer (Layer 4) and distribute traffic based on IP protocol data. They are used for TCP/UDP traffic and are suitable for scenarios requiring ultra-high performance and low latency. NLBs are often used for handling gaming, streaming, and other latency-sensitive workloads.
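The path-based routing an ALB offers is expressed as rule conditions. A small sketch of the condition document you would pass when creating a rule (path and names are illustrative):

```python
import json

# ALB rule condition: requests under /api/* match this rule and can be
# forwarded to a dedicated target group for the API service.
api_rule_conditions = [
    {"Field": "path-pattern", "Values": ["/api/*"]}
]

# Passed to: aws elbv2 create-rule --conditions file://conditions.json
# (together with a forward action and a priority).
print(json.dumps(api_rule_conditions, indent=2))
```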

7. What is CloudFormation and what is it used for?

AWS CloudFormation is a service that allows you to define and provision AWS infrastructure as code. It uses templates written in JSON or YAML to define the resources and their configurations in a repeatable and automated manner. CloudFormation enables you to create, update, and delete resources as a single unit, ensuring consistency and avoiding manual configuration errors.
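To make this concrete, here is a minimal CloudFormation template (JSON form) declaring a single S3 bucket, built as a Python dict so its structure is explicit. The logical ID `MyBucket` and the bucket name are illustrative:

```python
import json

# Minimal CloudFormation template: one S3 bucket resource.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example: one S3 bucket",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-bucket-12345"},
        }
    },
}

# Save as template.json and deploy with:
#   aws cloudformation deploy --template-file template.json --stack-name demo
print(json.dumps(template, indent=2))
```

Updating the stack with a changed template modifies only the affected resources; deleting the stack removes everything it created, which is the "single unit" behavior described above.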

8. Difference between AWS CloudFormation and AWS Elastic Beanstalk?

  • AWS CloudFormation: It’s a service for infrastructure automation. It helps in provisioning and managing a wide range of AWS resources and services. You define the infrastructure using templates, which can include networking, storage, security settings, and more.
  • AWS Elastic Beanstalk: It’s a platform-as-a-service (PaaS) offering that abstracts the underlying infrastructure. It’s designed for deploying and managing applications without worrying about the infrastructure details. Elastic Beanstalk supports various programming languages and frameworks, making it easy to deploy web applications.

9. What are the kinds of security attacks that can occur on the cloud? And how can we minimize them?

Various security attacks can occur in the cloud, including data breaches, DDoS attacks, insecure interfaces/APIs, insider threats, and more. To minimize these attacks:

  • Implement strong access controls using IAM.
  • Encrypt data at rest and in transit.
  • Regularly update and patch systems.
  • Use security groups and network ACLs to control traffic.
  • Implement monitoring, logging, and intrusion detection.
  • Use firewalls and WAFs for filtering malicious traffic.
  • Implement proper authentication and authorization mechanisms.
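As one example of "encrypt data in transit", a common S3 hardening step is a bucket policy that denies any request not made over TLS. A sketch, with a hypothetical bucket name:

```python
import json

BUCKET = "my-example-bucket"  # illustrative bucket name

# Bucket policy statement: deny all S3 actions on requests made without TLS.
deny_insecure = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

# Applied with: aws s3api put-bucket-policy --bucket my-example-bucket \
#                   --policy file://policy.json
print(json.dumps(deny_insecure, indent=2))
```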

10. Can we recover an EC2 instance when we have lost the key?

If you’ve lost the SSH key to access your EC2 instance, you won’t be able to directly regain access using that key. However, you have a few options:

  • Use an Existing Key: If you have other users with access to the instance using different keys, they might still be able to access it.
  • Launch Replacement Instance: Create a new EC2 instance and migrate your data. This is often the recommended approach to ensure security.
  • Replace Key Pair (Linux): If you can stop the instance, detach its root volume and attach it to another instance, add a new public key to the `~/.ssh/authorized_keys` file on that volume, then reattach the volume and start the original instance.

11. What is a gateway?

In a networking context, a gateway is a device or software that acts as an entry or exit point for traffic between different networks. It serves as a bridge between different network protocols or technologies, allowing data to flow between them. Gateways can also provide additional functionalities like network address translation (NAT), firewalling, and routing. Common examples include routers that connect local networks to the internet and API gateways that mediate communication between applications.

12. What is the difference between Amazon RDS, DynamoDB, and Redshift?

  • Amazon RDS (Relational Database Service): RDS is a managed relational database service. It supports various database engines like MySQL, PostgreSQL, Oracle, and SQL Server. It’s suitable for applications that require traditional relational databases, providing features like automated backups, automatic software patching, and scalability.
  • Amazon DynamoDB: DynamoDB is a managed NoSQL database service. It offers fast and predictable performance, seamless scalability, and built-in security features. It’s suitable for applications that require low-latency access to large amounts of data, like gaming or real-time applications.
  • Amazon Redshift: Redshift is a managed data warehousing service. It’s designed for analyzing large datasets using SQL queries. It’s optimized for analytics workloads, columnar storage, and parallel processing. Redshift is suitable for business intelligence, data warehousing, and reporting.

13. Do you prefer to host a website on S3? Why or why not?

Whether to host a website on Amazon S3 depends on the specific requirements and characteristics of the website. Here are reasons for both yes and no:

Yes, Host on S3:

  • Static Websites: If your website is primarily composed of static content (HTML, CSS, JavaScript), S3 can efficiently serve these files.
  • Scalability: S3 can handle high traffic loads without manual scaling efforts.
  • Cost-Effective: S3 offers cost-effective storage and data transfer pricing.

No, Don’t Host on S3:

  • Dynamic Content: If your website relies heavily on server-side processing, databases, or user-generated content, you might need a more dynamic hosting solution.
  • Server-Side Logic: S3 does not support server-side scripting, so complex server-side logic cannot be directly implemented.
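If the answer is yes, hosting a static site on S3 boils down to enabling website hosting and attaching a public-read bucket policy. A sketch of that policy, with a hypothetical bucket name:

```python
import json

BUCKET = "my-site-bucket"  # illustrative bucket name

# Public-read policy required for S3 static website hosting:
# anyone may GET objects in the bucket.
public_read = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

# Enable hosting with:
#   aws s3 website s3://my-site-bucket --index-document index.html
print(json.dumps(public_read, indent=2))
```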

14. What is the relation between an Availability Zone and a Region?

AWS Regions are separate geographical areas, such as us-west-1 (N. California) and ap-south-1 (Mumbai). Availability Zones are isolated locations within a Region, each with independent power and networking. Because AZs are isolated from one another’s failures, resources can be replicated across them for fault tolerance and high availability.

15. What is Autoscaling?

Auto Scaling is a feature that automatically provisions and launches new instances whenever there is demand, and terminates them when demand drops. It lets you increase or decrease resource capacity automatically in line with demand.
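A common way to configure this is a target-tracking scaling policy, which adjusts capacity to hold a metric near a target. A sketch of the configuration document (the 50% CPU target is illustrative):

```python
import json

# Target-tracking configuration: keep the Auto Scaling group's average
# CPU utilization around 50%; the group scales out/in automatically.
target_tracking = {
    "TargetValue": 50.0,
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
}

# Passed to: aws autoscaling put-scaling-policy \
#     --policy-type TargetTrackingScaling \
#     --target-tracking-configuration file://config.json ...
print(json.dumps(target_tracking, indent=2))
```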

16. Is there any alternative tool to log into the cloud environment other than the console?

The tools that can help you log into AWS resources are:

  • PuTTY
  • AWS CLI for Linux
  • AWS CLI for Windows
  • AWS CLI for Windows CMD
  • AWS SDK
  • Eclipse

17. Why do we make subnets?

Creating subnets means dividing a large network into smaller ones. Subnets are created for several reasons. For example, they help reduce congestion by ensuring that traffic destined for a subnet stays within that subnet. This allows traffic to be routed efficiently, which reduces the load on the network.
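Subnetting is just CIDR arithmetic, which Python’s standard `ipaddress` module can illustrate. Here a hypothetical VPC CIDR of 10.0.0.0/16 is carved into /24 subnets, e.g. one per Availability Zone:

```python
import ipaddress

# Carve a 10.0.0.0/16 VPC CIDR into /24 subnets (256 addresses each).
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))   # 256 possible /24 subnets
print(subnets[0])     # 10.0.0.0/24 — could serve AZ "a"
print(subnets[1])     # 10.0.1.0/24 — could serve AZ "b"
```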

18. What is the maximum number of S3 buckets you can create?

By default, you can create up to 100 S3 buckets per AWS account. This is a soft limit that can be raised by requesting a service quota increase.

19. How many total VPCs per account/region and subnets per VPC can you have?

By default, you can have 5 VPCs per account per Region and 200 subnets per VPC. Both are soft limits that can be increased through a service quota request.

20. What does an AMI include?

AMI stands for Amazon Machine Image. It includes the following:

  • One or more Amazon Elastic Block Store (Amazon EBS) snapshots — essentially templates for the instance’s root volume.
  • Launch permissions that control which AWS accounts can use the AMI to launch instances.
  • A block device mapping that specifies the volumes to attach to the instance when it’s launched.
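That last item, the block device mapping, is a small JSON document. A sketch of one mapping the root device to an 8 GiB gp3 EBS volume (the device name and sizes are illustrative):

```python
import json

# Block device mapping like the one an AMI carries: root device backed
# by an 8 GiB gp3 EBS volume, deleted when the instance terminates.
block_device_mapping = [{
    "DeviceName": "/dev/xvda",
    "Ebs": {
        "VolumeSize": 8,              # GiB
        "VolumeType": "gp3",
        "DeleteOnTermination": True,
    },
}]

# The same structure can override an AMI's defaults at launch:
#   aws ec2 run-instances --block-device-mappings file://mapping.json ...
print(json.dumps(block_device_mapping, indent=2))
```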

Written by Radheya Zunjur

Database Engineer At Harbinger | DevOps | Cloud Ops | Technical Writer
