Amazon AWS Certified Cloud Practitioner Exam Practice Flashcards
A company is planning to run a global marketing application in the AWS Cloud. The application will feature videos that can be viewed by users. The company must ensure that all users can view these videos with low latency.
Which AWS service should the company use to meet this requirement?
A. AWS Auto Scaling
B. Amazon Kinesis Video Streams
C. Elastic Load Balancing
D. Amazon CloudFront
To ensure that users can view videos with low latency in a global marketing application, the appropriate AWS service to use is:
D. Amazon CloudFront
Amazon CloudFront is a content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. It distributes content across multiple edge locations worldwide, allowing users to access the content from the nearest edge location, minimizing the time it takes to retrieve the videos and improving overall performance.
CloudFront caches content at edge locations, which reduces the load on the application’s origin server and ensures that users can view videos quickly and with minimal latency regardless of their geographical location.
In this scenario, using Amazon CloudFront would help the company provide a smooth video viewing experience for users by delivering the content from the nearest edge location, thereby minimizing latency and optimizing performance.
Which pillar of the AWS Well-Architected Framework refers to the ability of a system to recover from infrastructure or service disruptions and dynamically acquire computing resources to meet demand?
A. Security
B. Reliability
C. Performance efficiency
D. Cost optimization
The pillar of the AWS Well-Architected Framework that refers to the ability of a system to recover from infrastructure or service disruptions and dynamically acquire computing resources to meet demand is:
B. Reliability
The Reliability pillar focuses on designing systems that can recover from failures and disruptions quickly, efficiently, and gracefully. It encompasses aspects like fault tolerance, disaster recovery, and the ability to dynamically scale resources to handle varying workloads. This pillar aims to ensure that a system operates as expected, even in the face of failures, and can be quickly restored to normal operation.
Which of the following are benefits of migrating to the AWS Cloud? (Choose two.)
A. Operational resilience
B. Discounts for products on Amazon.com
C. Business agility
D. Business excellence
E. Increased staff retention
The benefits of migrating to the AWS Cloud are:
A. Operational resilience: AWS provides a robust and reliable infrastructure that enhances operational resilience by offering features such as data redundancy, automated backup, and disaster recovery solutions.
C. Business agility: Moving to the AWS Cloud allows businesses to quickly adapt and respond to changing market conditions, customer demands, and technological advancements. It enables rapid scaling of resources and the ability to experiment with new ideas and innovations.
Options B and D are not direct benefits of migrating to the AWS Cloud. Option B refers to discounts for products on Amazon.com, which is not a direct benefit of using AWS cloud services. Option D, “Business excellence,” is a broad term and not directly related to the specific benefits of migrating to the AWS Cloud.
Option E, “Increased staff retention,” is not a direct benefit of migrating to the AWS Cloud. Staff retention can be influenced by various factors, but it is not a specific benefit tied to AWS migration.
A company is planning to replace its physical on-premises compute servers with AWS serverless compute services. The company wants to be able to take advantage of advanced technologies quickly after the migration.
Which pillar of the AWS Well-Architected Framework does this plan represent?
A. Security
B. Performance efficiency
C. Operational excellence
D. Reliability
The plan to replace physical on-premises compute servers with AWS serverless compute services in order to take advantage of advanced technologies quickly after migration aligns with the following pillar of the AWS Well-Architected Framework:
B. Performance efficiency
The Performance Efficiency pillar is about using computing resources efficiently and continuing to do so as demand changes and technologies evolve. Two of its design principles, “democratize advanced technologies” (consume advanced capabilities as managed services from the cloud provider instead of building and operating them yourself) and “use serverless architectures,” describe this plan directly: by moving to serverless compute, the company can adopt advanced technologies quickly without managing physical servers.
A large company has multiple departments. Each department has its own AWS account. Each department has purchased Amazon EC2 Reserved Instances.
Some departments do not use all the Reserved Instances that they purchased, and other departments need more Reserved Instances than they purchased.
The company needs to manage the AWS accounts for all the departments so that the departments can share the Reserved Instances.
Which AWS service or tool should the company use to meet these requirements?
A. AWS Systems Manager
B. Cost Explorer
C. AWS Trusted Advisor
D. AWS Organizations
To manage the AWS accounts for multiple departments and enable them to share Reserved Instances, the appropriate AWS service to use is:
D. AWS Organizations
AWS Organizations is a service that allows you to consolidate multiple AWS accounts into an organization that you create and centrally manage. It helps you achieve policy-based management and cost allocation across these accounts.
Using AWS Organizations, the company can set up an organization structure to manage all the AWS accounts for the various departments. Within this structure, the company can implement consolidated billing and cost allocation, making it easier to track and manage the usage and costs associated with Reserved Instances across all accounts.
Additionally, when accounts are consolidated under one organization, Reserved Instance discount sharing is turned on by default across the consolidated billing family, so Reserved Instances that one department does not fully use can offset matching usage in another department's account. AWS Organizations also supports service control policies (SCPs) to govern permissions and keep resource usage in line with organizational policies.
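For illustration, here is a minimal sketch using the AWS SDK for Python (boto3); the account ID is a placeholder and not part of the question. It shows creating an organization with all features enabled, inviting a department account, and listing member accounts; once accounts share the consolidated bill this way, Reserved Instance discount sharing applies across the organization by default.

```python
import boto3

org = boto3.client("organizations")

# Create an organization in the current (management) account with all features
# enabled, which includes consolidated billing and service control policies.
org.create_organization(FeatureSet="ALL")

# Invite an existing department account into the organization (placeholder ID).
org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"}
)

# List the member accounts that now share the consolidated bill.
for account in org.list_accounts()["Accounts"]:
    print(account["Id"], account["Name"], account["Status"])
```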
Which component of the AWS global infrastructure is made up of one or more discrete data centers that have redundant power, networking, and connectivity?
A. AWS Region
B. Availability Zone
C. Edge location
D. AWS Outposts
The component of the AWS global infrastructure that is made up of one or more discrete data centers with redundant power, networking, and connectivity is:
B. Availability Zone
An Availability Zone (AZ) is a data center or a cluster of data centers in a specific geographic region. Each Availability Zone is designed to be isolated from failures in other Availability Zones and typically has redundant power, networking, and connectivity. This redundancy and isolation make them a fundamental building block for creating highly available and fault-tolerant applications in the AWS Cloud. Having applications and data distributed across multiple Availability Zones helps ensure high availability and resilience in case of failures.
Which duties are the responsibility of a company that is using AWS Lambda? (Choose two.)
A. Security inside of code
B. Selection of CPU resources
C. Patching of operating system
D. Writing and updating of code
E. Security of underlying infrastructure
The duties that are the responsibility of a company using AWS Lambda are:
A. Security inside of code: Ensuring security within the code itself, including handling access controls, encryption, and following secure coding practices, is the responsibility of the company using AWS Lambda.
D. Writing and updating of code: Developing, maintaining, and updating the code that runs within AWS Lambda functions is a primary responsibility of the company using AWS Lambda. This includes writing the initial code and making any necessary updates or improvements.
Explanation:
- B. Selection of CPU resources: AWS Lambda is a serverless compute service, and the management of CPU resources is abstracted away from the user. AWS Lambda automatically scales and allocates resources based on the workload, so the user does not need to select or manage CPU resources.
- C. Patching of operating system: AWS Lambda is a fully managed service, and Amazon is responsible for managing the underlying infrastructure, including patching the operating system. Users do not have to handle the patching of the operating system as it’s taken care of by AWS.
- E. Security of underlying infrastructure: AWS Lambda is a managed service, and Amazon is responsible for the security of the underlying infrastructure, including physical security, network security, and other infrastructure-level security measures. Users can rely on AWS to ensure the security of the underlying infrastructure.
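As a small illustration of the customer-side duties (writing and updating the code, and securing what happens inside it), here is a hypothetical Python Lambda handler; the input validation logic is a placeholder, not part of the question.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Customer-owned code: AWS runs it, but writing, updating, and
    securing this logic is the customer's responsibility."""
    # Security inside the code: validate the incoming event instead of
    # trusting it blindly (placeholder check).
    name = str(event.get("name", ""))[:64]
    if not name.isalnum():
        logger.warning("Rejected invalid input")
        return {"statusCode": 400, "body": json.dumps({"error": "invalid input"})}

    # Business logic lives here; AWS handles servers, scaling, and OS patching.
    return {"statusCode": 200, "body": json.dumps({"greeting": f"Hello, {name}"})}
```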
Which AWS services or features provide disaster recovery solutions for Amazon EC2 instances? (Choose two.)
A. EC2 Reserved Instances
B. EC2 Amazon Machine Images (AMIs)
C. Amazon Elastic Block Store (Amazon EBS) snapshots
D. AWS Shield
E. Amazon GuardDuty
The AWS services or features that provide disaster recovery solutions for Amazon EC2 instances are:
B. EC2 Amazon Machine Images (AMIs): An AMI captures an image of an EC2 instance, including the operating system, applications, and configuration. These images can be used to launch replacement EC2 instances in another Availability Zone or Region in case of a disaster.
C. Amazon Elastic Block Store (Amazon EBS) snapshots: Amazon EBS snapshots are point-in-time copies of your EBS volumes. They can be used to back up your data and configuration stored in EBS volumes, allowing you to restore the volume or create new volumes from the snapshots to recover from a disaster.
Explanation:
- A. EC2 Reserved Instances: EC2 Reserved Instances are cost-saving options that provide significant discounts on EC2 usage. However, they are not related to disaster recovery solutions.
- D. AWS Shield: AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. It is focused on protecting against DDoS attacks and is not specifically related to disaster recovery.
- E. Amazon GuardDuty: Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. While important for security, it is not directly related to disaster recovery.
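A minimal boto3 sketch of both backup mechanisms, assuming placeholder instance and volume IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Create an AMI from a running instance (placeholder ID); the image captures
# the OS, applications, and configuration and can launch a replacement instance.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="app-server-dr-backup",
    NoReboot=True,
)

# Create a point-in-time snapshot of an EBS data volume (placeholder ID).
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly DR snapshot",
)

print(image["ImageId"], snapshot["SnapshotId"])
```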
A company is migrating to the AWS Cloud instead of running its infrastructure on premises.
Which of the following are advantages of this migration? (Choose two.)
A. Elimination of the need to perform security auditing
B. Increased global reach and agility
C. Ability to deploy globally in minutes
D. Elimination of the cost of IT staff members
E. Redundancy by default for all compute services
The advantages of migrating to the AWS Cloud are:
B. Increased global reach and agility: AWS allows businesses to expand their reach globally by leveraging AWS’s extensive global infrastructure. Companies can quickly deploy resources in various AWS regions and availability zones to meet the needs of their customers around the world. This agility in deployment supports scaling and adapting to market demands effectively.
C. Ability to deploy globally in minutes: AWS provides the ability to deploy applications and resources globally in minutes. This is made possible due to AWS’s vast network of data centers worldwide. With just a few clicks, companies can provision resources in different regions, enabling them to serve customers from diverse geographical locations.
Explanation:
- A. Elimination of the need to perform security auditing: While AWS manages and maintains the security of the cloud infrastructure, security auditing and securing the applications and services that run on AWS are still the responsibility of the company. AWS follows a shared responsibility model where the customer is responsible for securing their applications and data in the cloud.
- D. Elimination of the cost of IT staff members: While moving to the cloud can lead to cost savings in terms of reducing the need for on-premises hardware and associated maintenance costs, it does not eliminate the need for IT staff. The skills and expertise of IT staff are still essential for managing and optimizing cloud resources effectively.
- E. Redundancy by default for all compute services: While AWS provides a highly available and redundant infrastructure, achieving redundancy for specific compute services depends on how they are configured by the user. AWS offers features like Availability Zones and fault-tolerant architecture, but it’s up to the user to design and configure their applications to make use of these features for achieving redundancy.
A user is comparing purchase options for an application that runs on Amazon EC2 and Amazon RDS. The application cannot sustain any interruption. The application experiences a predictable amount of usage, including some seasonal spikes that last only a few weeks at a time. It is not possible to modify the application.
Which purchase option meets these requirements MOST cost-effectively?
A. Review the AWS Marketplace and buy Partial Upfront Reserved Instances to cover the predicted and seasonal load.
B. Buy Reserved Instances for the predicted amount of usage throughout the year. Allow any seasonal usage to run on Spot Instances.
C. Buy Reserved Instances for the predicted amount of usage throughout the year. Allow any seasonal usage to run at an On-Demand rate.
D. Buy Reserved Instances to cover all potential usage that results from the seasonal usage.
Given that the application cannot sustain any interruption and has a predictable baseline with seasonal spikes that last only a few weeks at a time, the most cost-effective purchase option is:
C. Buy Reserved Instances for the predicted amount of usage throughout the year. Allow any seasonal usage to run at an On-Demand rate.
Explanation:
- Reserved Instances provide a significant discount for the steady, predictable usage that runs all year.
- On-Demand Instances cover the short seasonal spikes without any risk of interruption, and paying the On-Demand rate for a few weeks is cheaper than reserving that extra capacity for the entire year.
Option A (buying Partial Upfront Reserved Instances to cover both the baseline and the seasonal load) and Option D (buying Reserved Instances to cover all potential seasonal usage) mean paying for reserved capacity year-round that is only needed for a few weeks, which is not cost-effective.
Option B relies on Spot Instances for the seasonal load, but Spot Instances can be interrupted, and the application cannot sustain any interruption.
A company wants to review its monthly costs of using Amazon EC2 and Amazon RDS for the past year.
Which AWS service or tool provides this information?
A. AWS Trusted Advisor
B. Cost Explorer
C. Amazon Forecast
D. Amazon CloudWatch
The AWS service or tool that provides detailed cost and usage information, allowing you to review monthly costs for Amazon EC2 and Amazon RDS over the past year, is:
B. Cost Explorer
AWS Cost Explorer is a tool that provides comprehensive insights into your AWS spending and usage. It allows you to view and analyze your AWS costs and usage over a specified time period, including monthly costs for various services like Amazon EC2 and Amazon RDS. You can analyze costs by service, region, tags, usage types, and more. Cost Explorer also enables you to set custom date ranges to review historical costs, making it suitable for reviewing monthly costs over the past year.
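For example, the same data can be pulled programmatically through the Cost Explorer API with boto3; the date range and SERVICE dimension values below are assumptions for illustration and should be confirmed (for instance with get_dimension_values) in your own account.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2024-01-01"},  # placeholder year
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            # Assumed dimension values for EC2 and RDS; verify in your account.
            "Values": [
                "Amazon Elastic Compute Cloud - Compute",
                "Amazon Relational Database Service",
            ],
        }
    },
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for month in response["ResultsByTime"]:
    print(month["TimePeriod"]["Start"], month["Groups"])
```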
A company wants to migrate a critical application to AWS. The application has a short runtime. The application is invoked by changes in data or by shifts in system state. The company needs a compute solution that maximizes operational efficiency and minimizes the cost of running the application.
Which AWS solution should the company use to meet these requirements?
A. Amazon EC2 On-Demand Instances
B. AWS Lambda
C. Amazon EC2 Reserved Instances
D. Amazon EC2 Spot Instances
For a critical application with a short runtime that is triggered by changes in data or shifts in system state, and where the goal is to maximize operational efficiency and minimize costs, the most suitable AWS solution would be:
B. AWS Lambda
AWS Lambda is a serverless compute service that allows you to run code in response to events without provisioning or managing servers. It is designed for short-lived, event-driven applications. Lambda automatically scales based on the number of invocations, ensuring cost efficiency by charging only for the compute time consumed during the application’s execution.
Explanation:
- AWS Lambda is well-suited for short-lived applications and event-driven workloads, making it efficient for an application with a short runtime that is triggered by data changes or system state shifts.
- Lambda is a serverless service, eliminating the need to manage infrastructure, which maximizes operational efficiency and minimizes operational overhead and costs.
- With Lambda, you are billed based on the number of requests for your functions and the duration of code execution, making it cost-effective for sporadic invocations that respond to changes in data or system state.
Options A, C, and D involve EC2 instances, which are not optimal for maximizing operational efficiency and minimizing cost in this scenario, especially given the short-lived and event-driven nature of the application.
Which AWS service or feature allows users to connect with and deploy AWS services programmatically?
A. AWS Management Console
B. AWS Cloud9
C. AWS CodePipeline
D. AWS software development kits (SDKs)
The AWS service or feature that allows users to connect with and deploy AWS services programmatically is:
D. AWS software development kits (SDKs)
AWS SDKs are sets of tools and libraries provided by Amazon Web Services to help developers interact with AWS services in a programmatic way. These SDKs allow developers to integrate AWS services into their applications, manage resources, and perform operations on AWS services using their preferred programming languages. AWS SDKs support a wide range of programming languages, including Python, Java, JavaScript, .NET, Ruby, PHP, and more, making it easier for developers to access and utilize AWS services in their applications.
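As a simple illustration of programmatic access through an SDK, the boto3 snippet below lists S3 buckets and creates one; it assumes only that valid AWS credentials are configured, and the bucket name is a placeholder.

```python
import boto3

# The SDK turns AWS API operations into ordinary function calls.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# The same programmatic model applies to deploying resources, for example
# creating a new bucket (placeholder name; bucket names are globally unique).
s3.create_bucket(Bucket="example-sdk-demo-bucket-12345")
```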
A company plans to create a data lake that uses Amazon S3.
Which factor will have the MOST effect on cost?
A. The selection of S3 storage tiers
B. Charges to transfer existing data into Amazon S3
C. The addition of S3 bucket policies
D. S3 ingest fees for each request
The factor that will have the MOST effect on cost for creating a data lake using Amazon S3 is:
A. The selection of S3 storage tiers
Explanation:
- Amazon S3 offers different storage classes or tiers, each with its own pricing structure and capabilities. The selection of the appropriate storage tier based on your data access patterns and requirements will significantly impact the cost of storing data in S3.
- Standard storage is costlier than other storage classes like Glacier or Glacier Deep Archive. Depending on your data access frequency, you’ll need to choose the storage tier that balances cost and performance effectively.
Additional information for context:
- Charges to transfer existing data into Amazon S3 (Option B) may incur costs based on the volume of data being transferred into S3, but this is a one-time cost or occasional cost and may not have the most significant ongoing effect on cost compared to the storage tier choice.
- The addition of S3 bucket policies (Option C) does not directly impact the cost of S3 storage. Bucket policies are used to control access to your S3 bucket and do not have a direct cost associated with them.
- S3 ingest fees for each request (Option D) refer to PUT, COPY, POST, or LIST requests made to the S3 bucket. While these fees are associated with requests and can add up depending on usage, the storage tier selection generally has a more substantial effect on overall cost for a data lake setup.
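As a sketch of how the storage tier choice shows up in practice, the boto3 snippet below uploads an object directly to an infrequent-access class and adds a lifecycle rule that transitions data-lake objects to Glacier after 90 days; the bucket name, prefix, and day count are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-data-lake-bucket"  # placeholder

# Store an object in S3 Standard-Infrequent Access instead of S3 Standard.
s3.put_object(
    Bucket=bucket,
    Key="raw/2024/events.json",
    Body=b"{}",
    StorageClass="STANDARD_IA",
)

# Lifecycle rule: move objects under raw/ to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-raw-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```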
A company is launching an ecommerce application that must always be available. The application will run on Amazon EC2 instances continuously for the next 12 months.
What is the MOST cost-effective instance purchasing option that meets these requirements?
A. Spot Instances
B. Savings Plans
C. Dedicated Hosts
D. On-Demand Instances
For an ecommerce application that must always be available and will run on Amazon EC2 instances continuously for the next 12 months, the MOST cost-effective instance purchasing option that meets these requirements is:
B. Savings Plans
Explanation:
- Savings Plans provide significant cost savings on EC2 usage (as well as other AWS services) in exchange for a commitment to a consistent amount of usage (measured in $/hr) for a 1- or 3-year term.
- Savings Plans provide flexibility in terms of instance types and regions, allowing you to apply the cost savings across a wide range of EC2 usage, making it suitable for continuous usage over the next 12 months.
- While Reserved Instances can also offer cost savings, Savings Plans provide more flexibility and potentially even greater savings depending on usage patterns.
Option A, Spot Instances, might not be suitable for an application that needs to be available continuously, as Spot Instances can be interrupted and terminated based on spot market demand.
Option C, Dedicated Hosts, might be more expensive compared to Savings Plans for continuous usage.
Option D, On-Demand Instances, are a pay-as-you-go option without any upfront commitment, but they are typically more expensive in the long run compared to Reserved Instances or Savings Plans for continuous usage.
Which AWS service or feature can a company use to determine which business unit is using specific AWS resources?
A. Cost allocation tags
B. Key pairs
C. Amazon Inspector
D. AWS Trusted Advisor
The AWS service or feature that a company can use to determine which business unit is using specific AWS resources is:
A. Cost allocation tags
Cost allocation tags are metadata labels applied to AWS resources, allowing businesses to categorize and track resource usage and costs based on different criteria, such as business unit, project, team, or application. By assigning cost allocation tags to AWS resources, a company can analyze its AWS cost and usage reports to determine how specific resources are being utilized and allocate costs accurately to respective business units or projects. This helps in understanding resource utilization and optimizing costs based on organizational requirements and budget allocations.
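A minimal boto3 sketch of tagging (the tag key and instance ID are placeholders); note that a tag only appears in billing reports after it has been activated as a cost allocation tag in the Billing console.

```python
import boto3

ec2 = boto3.client("ec2")

# Tag resources with the owning business unit (placeholder instance ID).
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "BusinessUnit", "Value": "Marketing"}],
)
```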
A company wants to migrate its workloads to AWS, but it lacks expertise in AWS Cloud computing.
Which AWS service or feature will help the company with its migration?
A. AWS Trusted Advisor
B. AWS Consulting Partners
C. AWS Artifacts
D. AWS Managed Services
To help a company with migration to AWS, especially when it lacks expertise in AWS Cloud computing, the most suitable option among the provided choices is:
B. AWS Consulting Partners
AWS Consulting Partners are third-party companies and consultants that have expertise in AWS services and solutions. They can provide valuable guidance and assistance to organizations looking to migrate workloads to AWS. These partners offer a range of services, including architecture design, migration planning, and hands-on support to ensure a smooth and successful migration to the AWS Cloud. They can help assess the company’s specific requirements, design the architecture, and manage the migration process effectively, taking into account best practices and AWS expertise.
While other services like AWS Trusted Advisor, AWS Artifact, and AWS Managed Services have their roles in optimizing, auditing, and managing AWS resources, they do not directly provide the migration guidance and hands-on expertise needed in this scenario.
Which AWS service or tool should a company use to centrally request and track service limit increases?
A. AWS Config
B. Service Quotas
C. AWS Service Catalog
D. AWS Budgets
A company should use:
B. Service Quotas
Service Quotas (formerly known as AWS Service Limits) is the AWS service designed for centrally requesting and tracking service limit increases. It provides a way for customers to request higher limits on various AWS resources and services, such as the number of Amazon EC2 instances or the provisioned IOPS for Amazon RDS.
Service Quotas also offers the ability to track your current resource usage compared to the defined service limits, making it easier to manage and optimize your AWS resources effectively. It is the service to use when you need to manage and request increases in service limits across your AWS account.
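A hedged boto3 sketch of checking and requesting a quota increase; the quota code below is assumed to be the EC2 "Running On-Demand Standard instances" quota and should be confirmed with list_service_quotas before use.

```python
import boto3

sq = boto3.client("service-quotas")

# Assumed quota code for EC2 Running On-Demand Standard instances.
quota_code = "L-1216C47A"

current = sq.get_service_quota(ServiceCode="ec2", QuotaCode=quota_code)
print("Current limit:", current["Quota"]["Value"])

# Centrally request an increase, then track its status over time.
request = sq.request_service_quota_increase(
    ServiceCode="ec2", QuotaCode=quota_code, DesiredValue=256
)
print("Request status:", request["RequestedQuota"]["Status"])

history = sq.list_requested_service_quota_change_history(ServiceCode="ec2")
for item in history["RequestedQuotas"]:
    print(item["QuotaName"], item["Status"])
```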
Which documentation does AWS Artifact provide?
A. Amazon EC2 terms and conditions
B. AWS ISO certifications
C. A history of a company’s AWS spending
D. A list of previous-generation Amazon EC2 instance types
AWS Artifact provides:
B. AWS ISO certifications
AWS Artifact is a service that provides on-demand access to AWS compliance and security reports and other relevant documentation. This includes certifications like ISO, SOC, PCI, and various other audit reports and certifications that demonstrate AWS’s adherence to compliance and security standards. These documents can be used to verify AWS’s compliance with specific regulations and standards, which is important for customers who need to ensure the security and compliance of their workloads on AWS.
Which task requires using AWS account root user credentials?
A. Viewing billing information
B. Changing the AWS Support plan
C. Starting and stopping Amazon EC2 instances
D. Opening an AWS Support case
The task that requires using AWS account root user credentials is:
B. Changing the AWS Support plan
Changing (or canceling) the AWS Support plan is one of the account-level tasks that AWS reserves for the root user. Viewing billing information (Option A) can be delegated to IAM users once IAM access to the Billing console has been activated, and starting and stopping EC2 instances (Option C) and opening AWS Support cases (Option D) can be performed by IAM users or roles with the appropriate permissions. As a best practice, the root user should be used only for the small set of tasks that truly require it.
A company needs to simultaneously process hundreds of requests from different users.
Which combination of AWS services should the company use to build an operationally efficient solution?
A. Amazon Simple Queue Service (Amazon SQS) and AWS Lambda
B. AWS Data Pipeline and Amazon EC2
C. Amazon Kinesis and Amazon Athena
D. AWS Amplify and AWS AppSync
To build an operationally efficient solution for simultaneously processing hundreds of requests from different users, the most suitable combination of AWS services is:
A. Amazon Simple Queue Service (Amazon SQS) and AWS Lambda
Here’s why:
- Amazon SQS: Amazon SQS is a fully managed message queuing service that allows you to decouple and scale microservices, serverless applications, and distributed systems. It can handle high request loads and provides the ability to manage queues, ensure reliability, and provide scalability.
- AWS Lambda: AWS Lambda is a serverless compute service that allows you to run code in response to events. It can be triggered by events, such as messages arriving in an SQS queue, to process the requests efficiently without having to manage servers.
This combination is efficient because Amazon SQS can act as a buffer for the incoming requests, ensuring that requests are processed in an orderly and scalable manner. AWS Lambda can then be triggered by the messages in the queue, handling the requests as they arrive.
Option B (AWS Data Pipeline and Amazon EC2) is typically used for data processing and ETL (Extract, Transform, Load) tasks, and it may not be the best choice for handling user requests at scale.
Option C (Amazon Kinesis and Amazon Athena) is geared more towards real-time data stream processing and analytics, which may not align with the requirements of handling hundreds of user requests simultaneously.
Option D (AWS Amplify and AWS AppSync) is more focused on mobile and web application development, including creating APIs and user interfaces, and is less suitable for handling the simultaneous processing of user requests at scale.
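To illustrate the SQS-plus-Lambda pattern described above, here is a hedged Python sketch: a producer that buffers each incoming request in the queue, and a Lambda handler that an SQS event source mapping would invoke with batches of messages. The queue URL and processing logic are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/requests"  # placeholder

def enqueue_request(user_id: str, payload: dict) -> None:
    """Producer side: buffer each user request in the queue."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"user_id": user_id, "payload": payload}),
    )

def lambda_handler(event, context):
    """Consumer side: Lambda is invoked with a batch of SQS records."""
    for record in event["Records"]:
        request = json.loads(record["body"])
        # Process the request here (placeholder logic).
        print("Processing request for", request["user_id"])
    return {"processed": len(event["Records"])}
```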
What is the scope of a VPC within the AWS network?
A. A VPC can span all Availability Zones globally.
B. A VPC must span at least two subnets in each AWS Region.
C. A VPC must span at least two edge locations in each AWS Region.
D. A VPC can span all Availability Zones within an AWS Region.
The scope of a VPC (Virtual Private Cloud) within the AWS network is:
D. A VPC can span all Availability Zones within an AWS Region.
A VPC is a logically isolated section of the AWS Cloud where you can launch AWS resources, such as Amazon EC2 instances and RDS databases. It is contained within a specific AWS Region and can span multiple Availability Zones within that Region, providing redundancy and fault tolerance for your applications and services. Each VPC is specific to a single AWS Region and is not spread across multiple Regions.
Which of the following are components of an AWS Site-to-Site VPN connection? (Choose two.)
A. AWS Storage Gateway
B. Virtual private gateway
C. NAT gateway
D. Customer gateway
E. Internet gateway
The components of an AWS Site-to-Site VPN connection are:
B. Virtual private gateway: The virtual private gateway is the AWS-side endpoint of the VPN connection. It represents the AWS side of the VPN tunnel and connects to the customer gateway.
D. Customer gateway: The customer gateway is the customer-side endpoint of the VPN connection. It represents the customer’s on-premises device or network that connects to the virtual private gateway.
AWS Storage Gateway (Option A), NAT gateway (Option C), and Internet gateway (Option E) are not directly components of a Site-to-Site VPN connection. They serve other purposes within the AWS network infrastructure.
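A hedged boto3 sketch of the two components being wired together; the public IP, ASN, and VPC ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway: represents the on-premises router (placeholder IP and ASN).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.12", BgpAsn=65000
)["CustomerGateway"]

# Virtual private gateway: the AWS-side VPN endpoint, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0"
)

# The Site-to-Site VPN connection ties the two gateways together.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": False},
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```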
A company needs to establish a connection between two VPCs. The VPCs are located in two different AWS Regions. The company wants to use the existing infrastructure of the VPCs for this connection.
Which AWS service or feature can be used to establish this connection?
A. AWS Client VPN
B. VPC peering
C. AWS Direct Connect
D. VPC endpoints
To establish a connection between two VPCs located in two different AWS Regions and use the existing infrastructure of the VPCs for this connection, the most suitable option is:
B. VPC peering
VPC peering allows you to connect two VPCs and route traffic between them privately using the existing VPC infrastructure, without additional gateways, VPN connections, or physical hardware. Inter-Region VPC peering is supported, so the two VPCs can be peered even though they are in different AWS Regions.
AWS Client VPN (Option A) is used for remote access to VPC resources and is not typically used to connect two VPCs in different Regions.
AWS Direct Connect (Option C) is a service used for creating dedicated network connections between your on-premises data center and AWS. It’s not for connecting two VPCs in different Regions.
VPC endpoints (Option D) are used to access AWS services like S3 and DynamoDB from within a VPC without the need to traverse the public internet. They are not used to connect two VPCs in different Regions.
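A minimal sketch of inter-Region peering with boto3 (the VPC IDs, account ID, and Regions are placeholders): the requester creates the peering connection and the accepter approves it from the other Region.

```python
import boto3

# Requester side, e.g. in us-east-1 (placeholder IDs and Regions).
ec2_requester = boto3.client("ec2", region_name="us-east-1")
peering = ec2_requester.create_vpc_peering_connection(
    VpcId="vpc-1111aaaa",
    PeerVpcId="vpc-2222bbbb",
    PeerOwnerId="111122223333",   # same or a different account
    PeerRegion="eu-west-1",       # inter-Region peering
)["VpcPeeringConnection"]

# Accepter side, in the peer Region.
ec2_accepter = boto3.client("ec2", region_name="eu-west-1")
ec2_accepter.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)
# Route tables in both VPCs still need routes that point at the peering connection.
```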
According to the AWS shared responsibility model, what responsibility does a customer have when using Amazon RDS to host a database?
A. Manage connections to the database
B. Install Microsoft SQL Server
C. Design encryption-at-rest strategies
D. Apply minor database patches
According to the AWS shared responsibility model, when using Amazon RDS (Relational Database Service) to host a database, the customer's responsibility from this list is:
A. Manage connections to the database
The customer controls how applications connect to the database, including network access through security groups, database credentials, and connection handling in the application code. Because Amazon RDS is a managed service, AWS installs the database engine (for example, Microsoft SQL Server), applies minor database patches during the configured maintenance window, and provides the built-in encryption-at-rest capability that the customer simply enables. Options B, C, and D therefore describe work that is handled by, or built into, the RDS service rather than performed by the customer.
What are some advantages of using Amazon EC2 instances to host applications in the AWS Cloud instead of on premises? (Choose two.)
A. EC2 includes operating system patch management.
B. EC2 integrates with Amazon VPC, AWS CloudTrail, and AWS Identity and Access Management (IAM).
C. EC2 has a 100% service level agreement (SLA).
D. EC2 has a flexible, pay-as-you-go pricing model.
E. EC2 has automatic storage cost optimization.
The advantages of using Amazon EC2 instances to host applications in the AWS Cloud instead of on premises are:
B. EC2 integrates with Amazon VPC, AWS CloudTrail, and AWS Identity and Access Management (IAM): Amazon EC2 instances seamlessly integrate with AWS services like Amazon VPC (Virtual Private Cloud), AWS CloudTrail for audit and monitoring, and IAM for fine-grained access control. This enhances security and management capabilities in the AWS Cloud.
D. EC2 has a flexible, pay-as-you-go pricing model: Amazon EC2 offers a flexible and cost-effective pricing model that allows you to pay for the compute capacity you use on an hourly or per-second basis. This pricing model provides cost savings and scalability compared to investing in on-premises hardware.
Explanation:
A. EC2 includes operating system patch management: While AWS manages the underlying infrastructure and hardware, customers are responsible for managing the operating system, including patch management, on EC2 instances. This is not a service provided by AWS for EC2.
C. EC2 does not have a 100% service level agreement (SLA): Amazon EC2 has a service level agreement that provides availability targets, but it does not guarantee 100% uptime.
E. EC2 does not have automatic storage cost optimization: AWS provides various storage services with cost optimization features, but EC2 itself does not offer automatic storage cost optimization. It’s up to the user to configure and manage storage resources efficiently.
A user needs to determine whether an Amazon EC2 instance’s security groups were modified in the last month.
How can the user see if a change was made?
A. Use Amazon EC2 to see if the security group was changed.
B. Use AWS Identity and Access Management (IAM) to see which user or role changed the security group.
C. Use AWS CloudTrail to see if the security group was changed.
D. Use Amazon CloudWatch to see if the security group was changed.
To determine whether an Amazon EC2 instance’s security groups were modified in the last month and to see if a change was made, you should:
C. Use AWS CloudTrail to see if the security group was changed.
AWS CloudTrail is a service that provides a detailed history of AWS API calls made on your account, including changes to resources like security groups. You can use CloudTrail logs to track changes to your security groups and identify who made those changes.
Option A (Use Amazon EC2) won’t provide historical information about changes to security groups.
Option B (Use AWS Identity and Access Management) manages users, roles, and permissions, but it does not keep a history of configuration changes; identifying who changed the security group still requires the CloudTrail event history.
Option D (Use Amazon CloudWatch) is a monitoring service but doesn’t specifically track and record changes to security groups; it’s not designed for change tracking.
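A sketch of the corresponding API call with boto3; the security group ID is a placeholder, and the lookup covers the last 30 days (CloudTrail event history retains 90 days by default).

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

# Find management events that reference the security group (placeholder ID).
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "ResourceName", "AttributeValue": "sg-0123456789abcdef0"}
    ],
    StartTime=start,
    EndTime=end,
)

for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```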
Which AWS service will help protect applications running on AWS from DDoS attacks?
A. Amazon GuardDuty
B. AWS WAF
C. AWS Shield
D. Amazon Inspector
The AWS service that helps protect applications running on AWS from Distributed Denial of Service (DDoS) attacks is:
C. AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. It offers protection against common and most advanced DDoS attacks, helping to keep applications highly available and responsive during DDoS attacks. AWS Shield provides both standard and advanced protection tiers, with the latter offering additional protection for more complex and larger-scale attacks.
While AWS WAF (Web Application Firewall) (Option B) can help protect applications from web-based attacks, it’s not primarily focused on DDoS protection. AWS WAF is more for filtering and monitoring web traffic and can be used in conjunction with AWS Shield.
Options A (Amazon GuardDuty) and D (Amazon Inspector) are security services that focus on threat detection, vulnerability assessment, and security monitoring, but they are not DDoS protection services.
Which AWS service or feature acts as a firewall for Amazon EC2 instances?
A. Network ACL
B. Elastic network interface
C. Amazon VPC
D. Security group
The AWS service or feature that acts as a firewall for Amazon EC2 instances is:
D. Security group
A security group acts as a virtual firewall for Amazon EC2 instances to control inbound and outbound traffic. You can define inbound and outbound rules for your security group, specifying the allowed sources and destinations, and the permitted ports and protocols. Security groups provide an essential layer of security and access control for EC2 instances within an Amazon Virtual Private Cloud (VPC).
Option A (Network ACL) is a network-level control list that operates at the subnet level, controlling traffic in and out of the associated subnets.
Option B (Elastic network interface) is a logical networking component that represents a virtual network card and is not a firewall.
Option C (Amazon VPC) is the virtual network where your Amazon EC2 instances reside and can be configured to control network access, but it is not a firewall on its own. Firewalls, like security groups, are configured within the context of the VPC.
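A short boto3 sketch of adding an inbound rule to a security group (placeholder group ID): HTTPS is allowed from anywhere, and anything not explicitly allowed stays blocked.

```python
import boto3

ec2 = boto3.client("ec2")

# Security groups are allow-only: traffic is permitted only by explicit rules.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
        }
    ],
)
```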
How does the AWS Cloud pricing model differ from the traditional on-premises storage pricing model?
A. AWS resources do not incur costs
B. There are no infrastructure operating costs
C. There are no upfront cost commitments
D. There are no software licensing costs
The AWS Cloud pricing model differs from the traditional on-premises storage pricing model in several ways, but one of the key differences is:
C. There are no upfront cost commitments
In the traditional on-premises storage model, organizations often have significant upfront capital expenditures for hardware, software, and infrastructure. They typically need to purchase and maintain storage equipment, pay for software licenses, and invest in data center facilities. These upfront costs can be substantial.
In contrast, AWS provides a pay-as-you-go pricing model where customers only pay for the resources and services they consume, with no large upfront costs or long-term commitments. This model offers cost flexibility and scalability as businesses can adjust their resources based on demand without incurring large capital expenses.
Options A, B, and D are not accurate in the context of AWS pricing:
- AWS resources do incur costs based on usage.
- While AWS eliminates the need to manage physical infrastructure, AWS services have associated costs.
- AWS may include software licensing costs depending on the specific services and software being used.
A company has a single Amazon EC2 instance. The company wants to adopt a highly available architecture.
What can the company do to meet this requirement?
A. Scale vertically to a larger EC2 instance size.
B. Scale horizontally across multiple Availability Zones.
C. Purchase an EC2 Dedicated Instance.
D. Change the EC2 instance family to a compute optimized instance.
To achieve high availability with a single Amazon EC2 instance, the company should:
B. Scale horizontally across multiple Availability Zones.
By running additional EC2 instances across multiple Availability Zones (AZs), typically behind an Elastic Load Balancer, the company gains redundancy and fault tolerance. If one Availability Zone experiences an issue, the application can continue to operate from instances in another Availability Zone. This ensures high availability and helps to minimize downtime.
Option A (Scaling vertically to a larger EC2 instance size) doesn’t provide high availability on its own. While it can improve performance, it doesn’t protect against failures in a single Availability Zone.
Option C (Purchasing an EC2 Dedicated Instance) doesn’t address high availability; it relates to tenancy and doesn’t inherently make your instance highly available.
Option D (Changing the EC2 instance family) can improve performance but doesn’t directly address high availability.
A company’s on-premises application deployment cycle was 3-4 weeks. After migrating to the AWS Cloud, the company can deploy the application in 2-3 days.
Which benefit has this company experienced by moving to the AWS Cloud?
A. Elasticity
B. Flexibility
C. Agility
D. Resilience
The company that can now deploy its application in 2-3 days compared to 3-4 weeks in the on-premises environment has experienced the benefit of:
C. Agility
Moving to the AWS Cloud has improved the company’s agility by allowing for quicker and more efficient application deployment. AWS provides the flexibility to scale resources up or down as needed, which leads to faster development and deployment cycles, enabling the company to respond to changing business requirements more rapidly.
Option A (Elasticity) and Option B (Flexibility) are related benefits, but in this context, the improved deployment speed is more aligned with agility.
Option D (Resilience) relates to the ability to withstand and recover from failures, which may be a benefit of using AWS, but it’s not the primary benefit highlighted in this scenario.
Which of the following are included in AWS Enterprise Support? (Choose two.)
A. AWS technical account manager (TAM)
B. AWS partner-led support
C. AWS Professional Services
D. Support of third-party software integration to AWS
E. 5-minute response time for critical issues
AWS Enterprise Support includes the following:
A. AWS technical account manager (TAM): AWS Enterprise Support provides access to a dedicated Technical Account Manager (TAM) who can help with technical and architectural guidance, operational best practices, and AWS resource optimization.
D. Support of third-party software integration to AWS: AWS Enterprise Support includes support for third-party software that is integrated with AWS services.
Options B, C, and E are not included in AWS Enterprise Support:
B. AWS partner-led support typically involves AWS Partner Network (APN) consulting partners and is separate from AWS Enterprise Support.
C. AWS Professional Services are not included in AWS Enterprise Support and are typically separate consulting and professional services offered by AWS.
E. No AWS Support plan offers a 5-minute response time. AWS Enterprise Support provides a 15-minute response time for business-critical system down cases, and AWS Business Support provides a 1-hour response time for production system down cases.
A global media company uses AWS Organizations to manage multiple AWS accounts.
Which AWS service or feature can the company use to limit the access to AWS services for member accounts?
A. AWS Identity and Access Management (IAM)
B. Service control policies (SCPs)
C. Organizational units (OUs)
D. Access control lists (ACLs)
To limit access to AWS services for member accounts within AWS Organizations, the company can use:
B. Service control policies (SCPs)
Service control policies (SCPs) are used to set fine-grained permissions on what services and actions can be accessed by member accounts within AWS Organizations. SCPs are attached at the root, organizational unit (OU), or account level and allow you to define which AWS services can and cannot be used by accounts, helping to enforce security and compliance policies across the organization.
Options A (AWS Identity and Access Management), C (Organizational units), and D (Access control lists) play important roles in access control and security but are not used specifically to limit access to AWS services for member accounts within the context of AWS Organizations.
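A hedged sketch of creating and attaching an SCP with boto3; the policy content (deny leaving the organization and deny stopping CloudTrail) and the OU ID are illustrative placeholders only.

```python
import json
import boto3

org = boto3.client("organizations")

# Example guardrail: block member accounts from leaving the organization
# and from stopping CloudTrail logging (illustrative policy only).
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["organizations:LeaveOrganization", "cloudtrail:StopLogging"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="baseline-guardrails",
    Description="Deny actions that weaken central governance",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to an organizational unit (placeholder OU ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-exampleouid",
)
```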
A company wants to limit its employees’ AWS access to a portfolio of predefined AWS resources.
Which AWS solution should the company use to meet this requirement?
A. AWS Config
B. AWS software development kits (SDKs)
C. AWS Service Catalog
D. AWS AppSync
To limit its employees’ AWS access to a portfolio of predefined AWS resources, the company should use:
C. AWS Service Catalog
AWS Service Catalog allows organizations to create and manage catalogs of IT services, applications, and resources that are approved for use within the organization. By setting up a portfolio in AWS Service Catalog, the company can control and limit access to predefined AWS resources, ensuring that employees can only provision resources from the approved catalog. This helps maintain control, compliance, and standardization while providing a self-service mechanism for employees to request and access resources.
Options A (AWS Config), B (AWS software development kits), and D (AWS AppSync) are not designed for this specific use case of controlling and limiting employee access to predefined AWS resources. They serve different purposes within AWS.
An online company was running a workload on premises and was struggling to launch new products and features. After migrating the workload to AWS, the company can quickly launch products and features and can scale its infrastructure as required.
Which AWS Cloud value proposition does this scenario describe?
A. Business agility
B. High availability
C. Security
D. Centralized auditing
The scenario described, where the company can quickly launch new products and features and scale infrastructure as required after migrating to AWS, aligns with the AWS Cloud value proposition of:
A. Business agility
Business agility is a key benefit of AWS, enabling organizations to respond to changing business needs more rapidly, innovate, and launch new products or features with greater speed and flexibility. AWS’s cloud infrastructure and services provide the agility needed for businesses to adapt and scale as required, allowing for faster time-to-market and improved competitiveness.
Which of the following are advantages of the AWS Cloud? (Choose two.)
A. AWS management of user-owned infrastructure
B. Ability to quickly change required capacity
C. High economies of scale
D. Increased deployment time to market
E. Increased fixed expenses
The advantages of the AWS Cloud are:
B. Ability to quickly change required capacity: AWS offers the flexibility to easily scale resources up or down to meet changing requirements. This agility allows you to respond to fluctuations in demand and optimize costs.
C. High economies of scale: AWS operates on a massive scale, which allows them to achieve efficiencies and economies of scale. This often translates into cost savings for AWS customers.
Option A (AWS management of user-owned infrastructure) is not an advantage but rather a responsibility shift from the customer to AWS when moving to the cloud.
Option D (Increased deployment time to market) is not an advantage but rather a disadvantage. AWS typically reduces deployment time to market, so this is not a benefit of the AWS Cloud.
Option E (Increased fixed expenses) is not an advantage; cloud services often provide the advantage of reducing fixed expenses by replacing large capital expenditures with pay-as-you-go models.
AWS has the ability to achieve lower pay-as-you-go pricing by aggregating usage across hundreds of thousands of users.
This describes which advantage of the AWS Cloud?
A. Launch globally in minutes
B. Increase speed and agility
C. High economies of scale
D. No guessing about compute capacity
This statement describes the advantage of the AWS Cloud:
C. High economies of scale
AWS can achieve lower pay-as-you-go pricing by aggregating usage across a large and diverse customer base, allowing them to leverage their massive infrastructure efficiently and pass on cost savings to customers. This benefit is a result of the economies of scale that AWS can achieve, making cloud services cost-effective for a wide range of users.
A company has a database server that is always running. The company hosts the server on Amazon EC2 instances. The instance sizes are suitable for the workload. The workload will run for 1 year.
Which EC2 instance purchasing option will meet these requirements MOST cost-effectively?
A. Standard Reserved Instances
B. On-Demand Instances
C. Spot Instances
D. Convertible Reserved Instances
For a workload that will run for 1 year on Amazon EC2 instances and where the instance sizes are suitable, the MOST cost-effective EC2 instance purchasing option is:
A. Standard Reserved Instances
Standard Reserved Instances (RIs) provide a significant cost savings when compared to On-Demand Instances for a committed 1-year term. By committing to a 1-year term, you receive a discount on the hourly rate compared to On-Demand pricing while keeping the flexibility to choose the instance size and Availability Zone.
Option B (On-Demand Instances) may be more expensive for a 1-year continuous workload compared to Reserved Instances.
Option C (Spot Instances) offer lower pricing but are not suitable for workloads that require continuous and predictable availability.
Option D (Convertible Reserved Instances) also provide cost savings and allow you to change the instance family, operating system, or tenancy during the term, but they offer a smaller discount than Standard Reserved Instances for the same commitment. Because the instance sizes are already suitable and the workload is fixed for 1 year, Standard Reserved Instances are the more cost-effective choice.
A company is developing a mobile app that needs a high-performance NoSQL database.
Which AWS services could the company use for this database? (Choose two.)
A. Amazon Aurora
B. Amazon RDS
C. Amazon Redshift
D. Amazon DocumentDB (with MongoDB compatibility)
E. Amazon DynamoDB
For a high-performance NoSQL database for a mobile app, the company could use the following AWS services:
D. Amazon DocumentDB (with MongoDB compatibility): Amazon DocumentDB is a managed NoSQL database service compatible with MongoDB, making it a good choice for NoSQL database needs, especially if the application already uses MongoDB.
E. Amazon DynamoDB: Amazon DynamoDB is a fully managed NoSQL database service designed for high-performance and scalability. It’s a popular choice for mobile and web applications that require low-latency access to data and need to scale easily as the application grows.
Options A (Amazon Aurora) and B (Amazon RDS) are relational database services, not NoSQL databases, and may not be the best fit for a high-performance NoSQL database use case.
Option C (Amazon Redshift) is a data warehousing service, which is not typically used as a high-performance NoSQL database for a mobile app.
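A minimal boto3 sketch of DynamoDB as the NoSQL store for the mobile app; the table name and key schema are placeholders, and the table is assumed to already exist.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("UserProfiles")  # placeholder, assumed to exist

# Write a user profile item (schemaless apart from the key attributes).
table.put_item(
    Item={"user_id": "u-123", "display_name": "Ana", "level": 7}
)

# Fast key-value read by partition key.
response = table.get_item(Key={"user_id": "u-123"})
print(response.get("Item"))
```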
Which tasks are the responsibility of AWS, according to the AWS shared responsibility model? (Choose two.)
A. Patch the Amazon EC2 guest operating system.
B. Upgrade the firmware of the network infrastructure.
C. Apply password rotation for IAM users.
D. Maintain the physical security of edge locations.
E. Maintain least privilege access to the root user account.
The tasks that are the responsibility of AWS according to the AWS shared responsibility model are:
B. Upgrade the firmware of the network infrastructure.
D. Maintain the physical security of edge locations.
AWS is responsible for security “of” the cloud: the physical facilities (including edge locations), the hardware, and the network infrastructure, including its firmware. The customer is responsible for security “in” the cloud, which includes patching the guest operating system on Amazon EC2 instances (Option A), rotating IAM user passwords (Option C), and maintaining least privilege access, including for the root user (Option E).
Which of the following are features of network ACLs as they are used in the AWS Cloud? (Choose two.)
A. They are stateless.
B. They are stateful.
C. They evaluate all rules before allowing traffic.
D. They process rules in order, starting with the lowest numbered rule, when deciding whether to allow traffic.
E. They operate at the instance level.
The features of network ACLs (Access Control Lists) as they are used in the AWS Cloud are:
A. They are stateless: Network ACLs are stateless, which means that rules are not automatically applied in both directions. You need to create rules for both inbound and outbound traffic if necessary.
D. They process rules in order, starting with the lowest numbered rule, when deciding whether to allow traffic: Network ACLs evaluate rules in order from lowest to highest rule number, and the first matching rule determines the action taken for the traffic.
Network ACLs operate at the subnet level, not the instance level, and control traffic entering and leaving the subnet.
Option B (They are stateful) is incorrect. Network ACLs in AWS are stateless and do not keep track of the state of connections like stateful firewalls do. Stateful behavior is typically associated with security groups in AWS.
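As an illustration of the stateless, rule-number-ordered behavior, here is a boto3 sketch with a placeholder network ACL ID: because NACLs are stateless, an inbound allow rule needs a matching outbound rule for the response traffic, typically on the ephemeral port range.

```python
import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"  # placeholder

# Inbound rule 100: allow HTTPS into the subnet. Rules are evaluated in
# ascending rule-number order; the first match wins.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=100,
    Protocol="6",           # TCP
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)

# Outbound rule 100: NACLs are stateless, so return traffic on the
# ephemeral port range must be allowed explicitly.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)
```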
A company has designed its AWS Cloud infrastructure to run its workloads effectively. The company also has protocols in place to continuously improve supporting processes.
Which pillar of the AWS Well-Architected Framework does this scenario represent?
A. Security
B. Performance efficiency
C. Cost optimization
D. Operational excellence
The scenario described, where a company has protocols in place to continuously improve supporting processes and effectively run its AWS Cloud workloads, aligns with the following pillar of the AWS Well-Architected Framework:
D. Operational excellence
Operational excellence focuses on the efficient and continuous improvement of processes and procedures, allowing organizations to run and monitor systems effectively, and respond to events and changes with minimal effort.
While security (Option A), performance efficiency (Option B), and cost optimization (Option C) are all important aspects of a well-architected infrastructure, the scenario emphasizes operational excellence as the primary focus.
Which AWS service or feature can be used to create a private connection between an on-premises workload and an AWS Cloud workload?
A. Amazon Route 53
B. Amazon Macie
C. AWS Direct Connect
D. AWS PrivateLink
The AWS service or feature that can be used to create a private connection between an on-premises workload and an AWS Cloud workload is:
C. AWS Direct Connect
AWS Direct Connect is a network service that provides dedicated and private network connections between on-premises data centers and AWS. It allows for a secure and high-bandwidth connection between your on-premises infrastructure and AWS, enabling you to extend your network and access AWS resources without going over the public internet.
Option A (Amazon Route 53) is a scalable domain name system (DNS) web service and is not used for creating private network connections.
Option B (Amazon Macie) is a service for discovering, classifying, and protecting sensitive data, but it’s not used for creating network connections.
Option D (AWS PrivateLink) is a service that provides private network connections to AWS services, but it primarily focuses on accessing AWS services privately from within the AWS network rather than connecting to on-premises workloads.
A company needs to graphically visualize AWS billing and usage over time. The company also needs information about its AWS monthly costs.
Which AWS Billing and Cost Management tool provides this data in a graphical format?
A. AWS Bills
B. Cost Explorer
C. AWS Cost and Usage Report
D. AWS Budgets
The AWS Billing and Cost Management tool that provides AWS billing and usage data in a graphical format is:
B. Cost Explorer
AWS Cost Explorer is a graphical tool that allows you to visualize your AWS billing and usage data over time. It provides various charts, graphs, and reports to help you understand your monthly costs and usage patterns. You can use it to explore and analyze your AWS spending, view usage trends, and set up cost and usage budgets.
Option A (AWS Bills) typically provides itemized lists of charges but does not offer the graphical visualization of cost and usage data.
Option C (AWS Cost and Usage Report) provides detailed data in CSV files but does not offer graphical visualization.
Option D (AWS Budgets) allows you to set up cost and usage budgets and receive alerts but does not provide graphical visualization of data.
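Beyond the console charts, the same monthly cost data that Cost Explorer visualizes can also be pulled programmatically. The following Python (boto3) sketch is illustrative only; the date range is a placeholder, and it assumes Cost Explorer has been enabled for the account.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API

# Monthly unblended cost for a hypothetical three-month window, grouped by service.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in response["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    for group in period["Groups"]:
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print("  ", group["Keys"][0], amount)
```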
A company wants to run production workloads on AWS. The company needs concierge service, a designated AWS technical account manager (TAM), and technical support that is available 24 hours a day, 7 days a week.
Which AWS Support plan will meet these requirements?
A. AWS Basic Support
B. AWS Enterprise Support
C. AWS Business Support
D. AWS Developer Support
To meet the requirements of having a designated AWS technical account manager (TAM) and 24/7 technical support, the appropriate AWS Support plan is:
B. AWS Enterprise Support
AWS Enterprise Support offers the concierge service of a designated technical account manager (TAM) and provides 24/7 technical support, making it the right choice for production workloads with these specific requirements.
Option C (AWS Business Support) includes 24/7 technical support but does not provide a designated TAM or concierge service. Options A (AWS Basic Support) and D (AWS Developer Support) provide neither a designated TAM nor 24/7 access to technical support engineers.
Which architecture design principle describes the need to isolate failures between dependent components in the AWS Cloud?
A. Use a monolithic design.
B. Design for automation.
C. Design for single points of failure.
D. Loosely couple components.
The architecture design principle that describes the need to isolate failures between dependent components in the AWS Cloud is:
D. Loosely couple components.
In a well-architected system, components should be loosely coupled, meaning that they interact with each other through well-defined interfaces and APIs. This design approach helps to isolate failures in one component from affecting other components, promoting resilience and fault tolerance.
Options A (Use a monolithic design) and C (Design for single points of failure) are not best practices in AWS architecture design. Monolithic designs are typically less fault-tolerant, and designing for single points of failure is not recommended. AWS architecture aims to minimize single points of failure.
Option B (Design for automation) is an important principle but not the one that specifically addresses isolating failures between dependent components.
Which AWS services are managed database services? (Choose two.)
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon S3
C. Amazon RDS
D. Amazon Elastic File System (Amazon EFS)
E. Amazon DynamoDB
The managed database services provided by AWS are:
C. Amazon RDS: Amazon Relational Database Service (RDS) is a managed relational database service that supports multiple database engines such as MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB.
E. Amazon DynamoDB: Amazon DynamoDB is a managed NoSQL database service that provides fast and flexible database capabilities for applications requiring seamless scalability.
Options A (Amazon Elastic Block Store - EBS) and D (Amazon Elastic File System - EFS) are storage services, not managed database services.
Option B (Amazon S3) is a scalable object storage service but not a managed database service.
A company is using the AWS Free Tier for several AWS services for an application.
What will happen if the Free Tier usage period expires or if the application use exceeds the Free Tier usage limits?
A. The company will be charged the standard pay-as-you-go service rates for the usage that exceeds the Free Tier usage.
B. AWS Support will contact the company to set up standard service charges.
C. The company will be charged for the services it consumed during the Free Tier period, plus additional charges for service consumption after the Free Tier period.
D. The company’s AWS account will be frozen and can be restarted after a payment plan is established.
If the Free Tier usage period expires or if the application use exceeds the Free Tier usage limits, the following will typically occur:
A. The company will be charged the standard pay-as-you-go service rates for the usage that exceeds the Free Tier usage.
In other words, the company will start incurring standard charges for the services that go beyond the Free Tier limits. AWS will charge for the usage that exceeds the Free Tier allowance, but the account does not get frozen, and there is no need to set up a payment plan. AWS charges are based on actual usage beyond the Free Tier.
A company recently deployed an Amazon RDS instance in its VPC. The company needs to implement a stateful firewall to limit traffic to the private corporate network.
Which AWS service or feature should the company use to limit network traffic directly to its RDS instance?
A. Network ACLs
B. Security groups
C. AWS WAF
D. Amazon GuardDuty
To limit network traffic directly to an Amazon RDS instance, the company should use:
B. Security groups
Security groups act as stateful firewalls at the instance level and control inbound and outbound traffic to and from your Amazon RDS instances. You can define rules that allow or deny specific traffic based on source, destination, and port. Security groups are the primary means to control access to RDS instances within a Virtual Private Cloud (VPC).
Option A (Network ACLs) refers to network-level access control lists that filter traffic at the subnet level; they are stateless and less granular, so they do not meet the requirement for a stateful firewall applied directly to the RDS instance.
Options C (AWS WAF) and D (Amazon GuardDuty) are services related to web application firewall and threat detection, respectively, but they are not used to directly control network traffic to RDS instances in a VPC.
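As a hedged illustration of how such a rule looks in practice, the boto3 sketch below adds an inbound rule to a security group so that only the corporate network can reach the database port. The security group ID, the CIDR range, and the MySQL port 3306 are assumptions for the example.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound MySQL traffic to the RDS instance only from the corporate network.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: security group attached to the RDS instance
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,  # assumed MySQL/Aurora port; adjust for other engines
            "ToPort": 3306,
            "IpRanges": [
                {"CidrIp": "10.0.0.0/16", "Description": "Private corporate network"}
            ],
        }
    ],
)
```

Because security groups are stateful, return traffic to the corporate network is allowed automatically; no matching outbound rule is needed for the replies.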
Which AWS service uses machine learning to help discover, monitor, and protect sensitive data that is stored in Amazon S3 buckets?
A. AWS Shield
B. Amazon Macie
C. AWS Network Firewall
D. Amazon Cognito
The AWS service that uses machine learning to help discover, monitor, and protect sensitive data stored in Amazon S3 buckets is:
B. Amazon Macie
Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to automatically discover and classify sensitive data, monitor data access, and provide alerts and actionable recommendations for securing data in Amazon S3 buckets.
Option A (AWS Shield) is a service that provides DDoS protection, not sensitive data discovery and protection.
Option C (AWS Network Firewall) is a service for filtering network traffic and is not related to sensitive data protection.
Option D (Amazon Cognito) is a service for user authentication and authorization, not related to data discovery and protection in S3 buckets.
A company wants to improve the overall availability and performance of its applications that are hosted on AWS.
Which AWS service should the company use?
A. Amazon Connect
B. Amazon Lightsail
C. AWS Global Accelerator
D. AWS Storage Gateway
To improve the overall availability and performance of applications hosted on AWS, the company should use:
C. AWS Global Accelerator
AWS Global Accelerator is a service that helps you improve the availability and performance of applications by utilizing static IP addresses and routing traffic over the AWS global network to the optimal AWS endpoint based on health, geography, and routing policies. It provides a highly available and scalable solution for improving application performance and resiliency.
Option A (Amazon Connect) is a cloud contact center service, and Option B (Amazon Lightsail) is a simplified virtual private server offering; neither is designed to improve the global availability and performance of applications.
Option D (AWS Storage Gateway) is a service for connecting on-premises environments to cloud storage and is not directly related to application availability and performance improvement.
Which AWS service or feature identifies whether an Amazon S3 bucket or an IAM role has been shared with an external entity?
A. AWS Service Catalog
B. AWS Systems Manager
C. AWS IAM Access Analyzer
D. AWS Organizations
The AWS service or feature that identifies whether an Amazon S3 bucket or an IAM role has been shared with an external entity is:
C. AWS IAM Access Analyzer
AWS IAM Access Analyzer is a service that examines your policies to help you identify and review access that has been granted to external entities (accounts outside your AWS organization) for your S3 buckets, IAM roles, and other resources. It helps you identify unintended or potentially risky access and security misconfigurations.
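For illustration, the boto3 sketch below creates an account-level analyzer and lists its findings; the analyzer name is a placeholder, and the results naturally depend on the resource policies that exist in the account.

```python
import boto3

access_analyzer = boto3.client("accessanalyzer")

# Create an analyzer that scans resource policies in this account.
analyzer = access_analyzer.create_analyzer(
    analyzerName="example-analyzer",  # placeholder name
    type="ACCOUNT",
)

# Each finding describes a resource (S3 bucket, IAM role, etc.) whose policy
# grants access to an entity outside the account or organization.
findings = access_analyzer.list_findings(analyzerArn=analyzer["arn"])
for finding in findings.get("findings", []):
    print(finding.get("resourceType"), finding.get("resource"), finding["status"])
```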
A company does not want to rely on elaborate forecasting to determine its usage of compute resources. Instead, the company wants to pay only for the resources that it uses. The company also needs the ability to increase or decrease its resource usage to meet business requirements.
Which pillar of the AWS Well-Architected Framework aligns with these requirements?
A. Operational excellence
B. Security
C. Reliability
D. Cost optimization
The requirements described align with the following pillar of the AWS Well-Architected Framework:
D. Cost optimization
This pillar emphasizes the need to pay only for the resources that you use and to have the ability to increase or decrease resource usage to meet business requirements without relying on elaborate forecasting. Cost optimization focuses on maximizing the value of your cloud resources while minimizing unnecessary expenses, making it an important consideration for cost-effective and efficient resource management.
A company wants to launch its workload on AWS and requires the system to automatically recover from failure.
Which pillar of the AWS Well-Architected Framework includes this requirement?
A. Cost optimization
B. Operational excellence
C. Performance efficiency
D. Reliability
The requirement for a system to automatically recover from failure aligns with the following pillar of the AWS Well-Architected Framework:
D. Reliability
Reliability focuses on designing systems that can recover from failures and maintain a high level of availability and fault tolerance. This includes the ability to automatically recover from failures to ensure that the system continues to operate smoothly even when issues or outages occur.
A large enterprise with multiple VPCs in several AWS Regions around the world needs to connect and centrally manage network connectivity between its VPCs.
Which AWS service or feature meets these requirements?
A. AWS Direct Connect
B. AWS Transit Gateway
C. AWS Site-to-Site VPN
D. VPC endpoints
To connect and centrally manage network connectivity between multiple VPCs in different AWS Regions, the AWS service that meets these requirements is:
B. AWS Transit Gateway
AWS Transit Gateway is a service that simplifies the network architecture by allowing you to connect multiple VPCs, whether they are in the same Region or different Regions, and route traffic between them. It provides centralized management and simplifies network connectivity, making it an ideal solution for large enterprises with multi-VPC and multi-Region deployments.
Options A (AWS Direct Connect) and C (AWS Site-to-Site VPN) are used for connectivity to on-premises networks but are not specific to interconnecting VPCs in different Regions.
Option D (VPC endpoints) is used for direct connectivity to AWS services from within a VPC but is not designed for interconnecting multiple VPCs across Regions.
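As a rough sketch of the building blocks (not a complete multi-Region design), the boto3 calls below create a transit gateway and attach one VPC to it. The VPC and subnet IDs are placeholders, and connecting Regions would additionally require transit gateway peering attachments between the hubs in each Region.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the central hub.
tgw = ec2.create_transit_gateway(Description="Central hub for VPC-to-VPC routing")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC to the hub (one subnet per Availability Zone is typical).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",           # placeholder VPC ID
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet ID
)
```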
Which AWS service supports the creation of visual reports from AWS Cost and Usage Report data?
A. Amazon Athena
B. Amazon QuickSight
C. Amazon CloudWatch
D. AWS Organizations
The AWS service that supports the creation of visual reports from AWS Cost and Usage Report data is:
B. Amazon QuickSight
Amazon QuickSight is a business analytics service that allows you to create interactive and visual reports and dashboards from a variety of data sources, including AWS Cost and Usage Reports. It enables you to analyze and visualize your AWS cost and usage data to gain insights and make data-driven decisions.
Options A (Amazon Athena), C (Amazon CloudWatch), and D (AWS Organizations) are not typically used for creating visual reports from AWS Cost and Usage Report data. Amazon Athena is more focused on querying and analyzing data, while Amazon CloudWatch and AWS Organizations have different purposes.
Which AWS service should be used to monitor Amazon EC2 instances for CPU and network utilization?
A. Amazon Inspector
B. AWS CloudTrail
C. Amazon CloudWatch
D. AWS Config
The AWS service that should be used to monitor Amazon EC2 instances for CPU and network utilization is:
C. Amazon CloudWatch
Amazon CloudWatch is a monitoring and observability service that allows you to collect and track metrics, collect and monitor log files, and set alarms. It provides insights into the performance and operational health of AWS resources, including Amazon EC2 instances. You can use CloudWatch to monitor CPU and network utilization, among other metrics, for your EC2 instances.
Option A (Amazon Inspector) is a security assessment service and is not used for general performance monitoring.
Option B (AWS CloudTrail) is used for logging API activity and is not focused on monitoring resource performance.
Option D (AWS Config) is used for assessing and auditing the configuration of AWS resources and is not a monitoring service.
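As an optional illustration, the boto3 sketch below retrieves average CPU utilization for a single instance over the last day; the instance ID is a placeholder, and the same call with MetricName set to NetworkIn or NetworkOut returns network utilization.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

# Average CPU utilization in 5-minute periods for one EC2 instance.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```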
A company is preparing to launch a new web store that is expected to receive high traffic for an upcoming event. The web store runs only on AWS, and the company has an AWS Enterprise Support plan.
Which AWS resource will provide guidance about how the company should scale its architecture and operational support during the event?
A. AWS Abuse team
B. The designated AWS technical account manager (TAM)
C. AWS infrastructure event management
D. AWS Professional Services
The AWS resource that will provide guidance about how the company should scale its architecture and receive operational support during the event, especially with an AWS Enterprise Support plan, is:
C. AWS infrastructure event management
AWS Infrastructure Event Management (IEM) is included with AWS Enterprise Support. IEM provides architecture and scaling guidance and operational support during the preparation and execution of planned events such as product launches, marketing events, and expected traffic spikes. In this scenario, IEM engages AWS experts to review the web store's readiness, recommend scaling actions, and provide real-time operational support during the event. The designated technical account manager (TAM) helps coordinate the engagement, but the program specifically designed for planned, high-traffic events is infrastructure event management.
A user wants to deploy a service to the AWS Cloud by using infrastructure-as-code (IaC) principles.
Which AWS service can be used to meet this requirement?
A. AWS Systems Manager
B. AWS CloudFormation
C. AWS CodeCommit
D. AWS Config
To deploy a service to the AWS Cloud using infrastructure-as-code (IaC) principles, you should use:
B. AWS CloudFormation
AWS CloudFormation is a service that allows you to define and provision AWS infrastructure as code. You can create templates that describe the AWS resources needed for your application and then launch those resources as a stack. It enables you to automate the deployment of infrastructure, making it a suitable choice for IaC practices.
Options A (AWS Systems Manager), C (AWS CodeCommit), and D (AWS Config) are useful services for various purposes, but they are not specifically designed for infrastructure provisioning and deployment as AWS CloudFormation is.
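To make the infrastructure-as-code idea concrete, here is a minimal, illustrative sketch: a template describing a single S3 bucket, deployed as a stack with boto3. The stack name is a placeholder, and real templates would describe many more resources and parameters.

```python
import boto3

# A tiny CloudFormation template, declared as code.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(StackName="example-stack", TemplateBody=template)

# Block until CloudFormation finishes provisioning the resources in the template.
cloudformation.get_waiter("stack_create_complete").wait(StackName="example-stack")
```

Because the template is version-controllable text, the same stack can be recreated, updated, or deleted repeatedly, which is the core of the IaC approach.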
A company that has multiple business units wants to centrally manage and govern its AWS Cloud environments. The company wants to automate the creation of AWS accounts, apply service control policies (SCPs), and simplify billing processes.
Which AWS service or tool should the company use to meet these requirements?
A. AWS Organizations
B. Cost Explorer
C. AWS Budgets
D. AWS Trusted Advisor
To centrally manage and govern AWS Cloud environments, automate the creation of AWS accounts, apply service control policies (SCPs), and simplify billing processes, the company should use:
A. AWS Organizations
AWS Organizations is a service that allows you to consolidate multiple AWS accounts into an organization, create new AWS accounts, and apply service control policies (SCPs) to centrally manage the permissions and resources across those accounts. It also simplifies billing processes by providing consolidated billing for the accounts within the organization.
Options B (Cost Explorer), C (AWS Budgets), and D (AWS Trusted Advisor) are useful services, but they do not provide the centralized account management and governance capabilities that AWS Organizations offers.
Which IT controls do AWS and the customer share, according to the AWS shared responsibility model? (Choose two.)
A. Physical and environmental controls
B. Patch management
C. Cloud awareness and training
D. Zone security
E. Application data encryption
In the AWS shared responsibility model, the following IT controls are shared between AWS and the customer:
B. Patch management: AWS is responsible for patching and fixing flaws within the underlying infrastructure, while customers are responsible for patching their guest operating systems and applications running on AWS services.
C. Cloud awareness and training: AWS trains its own employees, while customers must train their own employees on using the cloud securely, making awareness and training a shared control.
Options A (Physical and environmental controls), D (Zone security), and E (Application data encryption) are not shared controls. Physical and environmental controls are inherited from AWS and remain AWS's responsibility, while zone security and application data encryption are customer-specific controls that the customer implements and manages.
A company is launching an application in the AWS Cloud. The application will use Amazon S3 storage. A large team of researchers will have shared access to the data. The company must be able to recover data that is accidentally overwritten or deleted.
Which S3 feature should the company turn on to meet this requirement?
A. Server access logging
B. S3 Versioning
C. S3 Lifecycle rules
D. Encryption in transit and at rest
To ensure the company can recover data that is accidentally overwritten or deleted while allowing a large team of researchers to have shared access to the data in Amazon S3, the company should turn on:
B. S3 Versioning
Amazon S3 Versioning allows you to preserve, retrieve, and restore every version of every object stored in a bucket. This feature is useful for data recovery and maintaining a historical record of changes made to objects. In case of accidental deletions or overwrites, you can retrieve a previous version of the object.
Options A (Server access logging), C (S3 Lifecycle rules), and D (Encryption in transit and at rest) are not primarily focused on data recovery for accidental deletions or overwrites. Server access logging records access to objects, Lifecycle rules are used for managing object storage, and encryption addresses security rather than versioning and data recovery.
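For illustration only (the bucket name and prefix are placeholders), the boto3 sketch below turns on versioning and then lists stored versions, which is how an earlier copy of an overwritten or deleted object would be located for recovery.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-research-data"  # placeholder bucket name

# Enable versioning so every overwrite or delete keeps the previous version.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# List versions (and delete markers) under a prefix to find a copy to restore.
versions = s3.list_object_versions(Bucket=bucket, Prefix="datasets/")
for version in versions.get("Versions", []):
    print(version["Key"], version["VersionId"], version["IsLatest"])
```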
A manufacturing company has a critical application that runs at a remote site that has a slow internet connection. The company wants to migrate the workload to AWS. The application is sensitive to latency and interruptions in connectivity. The company wants a solution that can host this application with minimum latency.
Which AWS service or feature should the company use to meet these requirements?
A. Availability Zones
B. AWS Local Zones
C. AWS Wavelength
D. AWS Outposts
To host a critical application with minimum latency at a remote site that has a slow internet connection and is sensitive to interruptions in connectivity, the company should use:
D. AWS Outposts
AWS Outposts brings AWS infrastructure, services, APIs, and tools to the company's own site. Because the application runs on AWS hardware installed locally, it can continue to operate with minimal latency and does not depend on the slow internet connection for every request, which addresses the sensitivity to latency and connectivity interruptions.
Options A (Availability Zones) and B (AWS Local Zones) still require reliable connectivity from the remote site to AWS infrastructure, and Option C (AWS Wavelength) embeds AWS compute in telecommunications providers' 5G networks for mobile edge use cases. None of these keeps the workload on premises at the remote site the way AWS Outposts does.
A company wants to migrate its applications from its on-premises data center to a VPC in the AWS Cloud. These applications will need to access on-premises resources.
Which actions will meet these requirements? (Choose two.)
A. Use AWS Service Catalog to identify a list of on-premises resources that can be migrated.
B. Create a VPN connection between an on-premises device and a virtual private gateway in the VPC.
C. Use an Amazon CloudFront distribution and configure it to accelerate content delivery close to the on-premises resources.
D. Set up an AWS Direct Connect connection between the on-premises data center and AWS.
E. Use Amazon CloudFront to restrict access to static web content provided through the on-premises web servers.
To migrate applications from an on-premises data center to a VPC in the AWS Cloud while ensuring access to on-premises resources, the following actions can meet these requirements:
B. Create a VPN connection between an on-premises device and a virtual private gateway in the VPC.
- A VPN connection allows secure communication between the VPC and on-premises resources over the public internet.
D. Set up an AWS Direct Connect connection between the on-premises data center and AWS.
- AWS Direct Connect provides a dedicated network connection that ensures high bandwidth, low-latency access between the on-premises data center and AWS.
Option A (Use AWS Service Catalog) is a service for organizing, governing, and provisioning cloud resources but does not directly address the requirement for connectivity to on-premises resources.
Option C (Use an Amazon CloudFront distribution) is a content delivery network service and does not directly provide connectivity to on-premises resources.
Option E (Use Amazon CloudFront to restrict access to static web content) pertains to web content delivery and access control and is not related to connecting on-premises resources to a VPC.
A company wants to use the AWS Cloud to provide secure access to desktop applications that are running in a fully managed environment.
Which AWS service should the company use to meet this requirement?
A. Amazon S3
B. Amazon AppStream 2.0
C. AWS AppSync
D. AWS Outposts
To provide secure access to desktop applications running in a fully managed environment in the AWS Cloud, the company should use:
B. Amazon AppStream 2.0
Amazon AppStream 2.0 is a service that enables you to securely stream desktop applications to users’ web browsers. It allows for secure access to desktop applications from a fully managed environment, with control over user access and resources. This is particularly useful for scenarios where you need to deliver desktop applications securely to remote users.
Options A (Amazon S3), C (AWS AppSync), and D (AWS Outposts) have different use cases and are not designed for providing secure access to desktop applications in a fully managed environment as AppStream 2.0 does.
A company wants to implement threat detection on its AWS infrastructure. However, the company does not want to deploy additional software.
Which AWS service should the company use to meet these requirements?
A. Amazon VPC
B. Amazon EC2
C. Amazon GuardDuty
D. AWS Direct Connect
To implement threat detection on AWS infrastructure without deploying additional software, the company should use:
C. Amazon GuardDuty
Amazon GuardDuty is a fully managed threat detection service that continuously monitors for malicious activity and unauthorized behavior within your AWS environment. It uses machine learning, anomaly detection, and integrated threat intelligence to identify threats without requiring additional software deployment. GuardDuty can help you protect your AWS infrastructure by detecting potential security threats.
Options A (Amazon VPC), B (Amazon EC2), and D (AWS Direct Connect) are not specifically designed for threat detection; they are infrastructure and networking services within AWS. GuardDuty is the service tailored for threat detection in AWS environments.
Which AWS service uses edge locations?
A. Amazon Aurora
B. AWS Global Accelerator
C. Amazon Connect
D. AWS Outposts
The AWS service that uses edge locations is:
B. AWS Global Accelerator
AWS Global Accelerator is a service that uses edge locations to route traffic over the AWS global network to the optimal AWS endpoint based on your application’s requirements and the health of the endpoints. Edge locations are a critical part of the AWS global network infrastructure, and Global Accelerator leverages them for optimized content delivery and application performance.
Options A (Amazon Aurora), C (Amazon Connect), and D (AWS Outposts) are separate services and do not directly use edge locations for traffic optimization in the same way as AWS Global Accelerator.
A company needs to install an application in a Docker container.
Which AWS service eliminates the need to provision and manage the container hosts?
A. AWS Fargate
B. Amazon FSx for Windows File Server
C. Amazon Elastic Container Service (Amazon ECS)
D. Amazon EC2
To eliminate the need to provision and manage container hosts when running Docker containers, the company should use:
A. AWS Fargate
AWS Fargate is a serverless compute engine for containers. It allows you to run containers without having to manage the underlying infrastructure. With Fargate, you only need to define your application’s requirements, and AWS takes care of provisioning, scaling, and managing the containers for you.
Options B (Amazon FSx for Windows File Server), C (Amazon Elastic Container Service), and D (Amazon EC2) do not provide the same level of serverless container management as AWS Fargate. Fargate is designed to simplify the deployment and management of containers by abstracting the host infrastructure.
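The boto3 sketch below shows the Fargate launch type in an ECS run_task call; the cluster name, task definition, subnet, and security group are placeholders, and the task definition itself would have been registered separately with the container image to run.

```python
import boto3

ecs = boto3.client("ecs")

# Run one copy of a containerized task on Fargate; no container hosts to manage.
ecs.run_task(
    cluster="example-cluster",            # placeholder ECS cluster
    launchType="FARGATE",
    taskDefinition="example-task:1",      # placeholder task definition and revision
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder subnet
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder security group
            "assignPublicIp": "ENABLED",
        }
    },
)
```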
Which AWS service or feature checks access policies and offers actionable recommendations to help users set secure and functional policies?
A. AWS Systems Manager
B. AWS IAM Access Analyzer
C. AWS Trusted Advisor
D. Amazon GuardDuty
The AWS service or feature that checks access policies and offers actionable recommendations to help users set secure and functional policies is:
B. AWS IAM Access Analyzer
AWS IAM Access Analyzer is a service that examines your resource policies to identify and provide recommendations on policies that may allow unintended access or that have other security issues. It helps you ensure that your policies are both secure and function as intended, and it provides actionable recommendations to improve your access policies. This is especially useful for maintaining a secure IAM (Identity and Access Management) configuration.
Options A (AWS Systems Manager), C (AWS Trusted Advisor), and D (Amazon GuardDuty) have different use cases and do not specifically focus on analyzing and providing recommendations for IAM access policies.
A company has a fleet of cargo ships. The cargo ships have sensors that collect data at sea, where there is intermittent or no internet connectivity. The company needs to collect, format, and process the data at sea and move the data to AWS later.
Which AWS service should the company use to meet these requirements?
A. AWS IoT Core
B. Amazon Lightsail
C. AWS Storage Gateway
D. AWS Snowball Edge
To collect, format, and process data at sea where there is intermittent or no internet connectivity and then move the data to AWS later, the company should use:
D. AWS Snowball Edge
AWS Snowball Edge is a ruggedized device with on-board compute and storage capabilities. It is designed for use cases where you need to collect and process data in remote or disconnected environments and then transfer that data to AWS when connectivity is available. It can be used to transport data from edge locations, including at sea, to an AWS region.
Options A (AWS IoT Core), B (Amazon Lightsail), and C (AWS Storage Gateway) have different use cases and do not provide the same capabilities as AWS Snowball Edge for collecting and transporting data from remote, disconnected environments.
A retail company needs to build a highly available architecture for a new ecommerce platform. The company is using only AWS services that replicate data across multiple Availability Zones.
Which AWS services should the company use to meet this requirement? (Choose two.)
A. Amazon EC2
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon Aurora
D. Amazon DynamoDB
E. Amazon Redshift
To build a highly available architecture for a new ecommerce platform using only AWS services that replicate data across multiple Availability Zones, the company should use the following services:
C. Amazon Aurora
- Amazon Aurora is a fully managed, highly available, and scalable relational database service that replicates data across multiple Availability Zones for high availability and durability.
D. Amazon DynamoDB
- Amazon DynamoDB is a managed NoSQL database service that automatically replicates data across multiple Availability Zones within an AWS region to ensure high availability and fault tolerance.
Options A (Amazon EC2), B (Amazon Elastic Block Store), and E (Amazon Redshift) do not inherently replicate data across multiple Availability Zones; this capability depends on the specific configuration and choices made when using these services. For high availability and data replication across Availability Zones, Aurora and DynamoDB are suitable choices.
Which characteristic of the AWS Cloud helps users eliminate underutilized CPU capacity?
A. Agility
B. Elasticity
C. Reliability
D. Durability
The characteristic of the AWS Cloud that helps users eliminate underutilized CPU capacity is:
B. Elasticity
Elasticity in the AWS Cloud allows users to automatically and dynamically scale their resources up or down based on demand. This means that users can easily adjust their resource allocation to match the workload’s requirements, eliminating the need for over-provisioning and reducing underutilized CPU capacity. It helps optimize resource utilization and cost-effectiveness by ensuring that you only pay for the resources you actually use.
Service control policies (SCPs) manage permissions for which of the following?
A. Availability Zones
B. AWS Regions
C. AWS Organizations
D. Edge locations
Service control policies (SCPs) manage permissions for:
C. AWS Organizations
SCPs are used in AWS Organizations to set fine-grained permissions for member accounts within the organization. They allow you to control what services and actions are allowed or denied for accounts and organizational units (OUs) in the AWS organization. SCPs help you manage and restrict access to AWS services and actions at the organizational level, providing centralized control over permissions.
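As a hedged example of what an SCP can express, the sketch below creates a policy that denies API calls outside two approved Regions (exempting global services such as IAM and STS) and attaches it to an organizational unit. The Region list and the OU ID are assumptions for illustration.

```python
import boto3
import json

organizations = boto3.client("organizations")

# Example SCP: deny actions in Regions other than the approved list.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*"],  # global services exempted
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
            },
        }
    ],
}

policy = organizations.create_policy(
    Name="deny-unapproved-regions",
    Description="Restrict member accounts to approved Regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to an organizational unit (placeholder OU ID).
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",
)
```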
Which AWS service can be used to encrypt data at rest?
A. Amazon GuardDuty
B. AWS Shield
C. AWS Security Hub
D. AWS Key Management Service (AWS KMS)
The AWS service that can be used to encrypt data at rest is:
D. AWS Key Management Service (AWS KMS)
AWS Key Management Service (KMS) is a fully managed encryption service that allows you to create and control encryption keys used to encrypt and decrypt your data at rest. It provides robust security controls and options for data encryption, making it an essential component for securing data in AWS. While options A (Amazon GuardDuty), B (AWS Shield), and C (AWS Security Hub) are valuable services for security and threat detection, they do not provide data encryption at rest like AWS KMS does.
Which characteristics are advantages of using the AWS Cloud? (Choose two.)
A. A 100% service level agreement (SLA) for all AWS services
B. Compute capacity that is adjusted on demand
C. Availability of AWS Support for code development
D. Enhanced security
E. Increases in cost and complexity
The advantages of using the AWS Cloud include:
B. Compute capacity that is adjusted on demand:
- AWS provides the ability to dynamically scale your compute resources up or down based on demand. This elasticity allows you to efficiently allocate resources when needed and reduce them during periods of lower demand.
D. Enhanced security:
- AWS offers a wide range of security services, features, and best practices to help secure your workloads. AWS has a shared responsibility model, and it provides the tools and services necessary to help you secure your applications and data in the cloud.
Options A (A 100% service level agreement), C (Availability of AWS Support for code development), and E (Increases in cost and complexity) do not accurately represent typical characteristics of using the AWS Cloud. While AWS offers SLAs for its services, they vary by service and are not uniformly 100%. AWS Support is available for various purposes, not just code development. The goal of using AWS is often to reduce complexity and cost, not increase it.
A user is storing objects in Amazon S3. The user needs to restrict access to the objects to meet compliance obligations.
What should the user do to meet this requirement?
A. Use AWS Secrets Manager.
B. Tag the objects in the S3 bucket.
C. Use security groups.
D. Use network ACLs.
To restrict access to objects stored in Amazon S3 and meet compliance obligations, the user should:
B. Tag the objects in the S3 bucket.
Tagging objects assigns metadata to the objects, and that metadata can drive fine-grained access control. By writing S3 bucket policies or AWS Identity and Access Management (IAM) policies that use tag-based condition keys, you can control which users can access which objects and record the compliance classification of each object.
Options A (Use AWS Secrets Manager), C (Use security groups), and D (Use network ACLs) do not directly address the need to restrict access to objects in Amazon S3 based on compliance requirements. Tagging is a common method to implement object-level access control in S3.
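A short, illustrative sketch of this pattern follows: an object is tagged with a classification value, and an IAM policy (shown here only as a printed JSON document) uses the s3:ExistingObjectTag condition key to scope access to objects carrying that tag. The bucket, key, and tag values are placeholders.

```python
import boto3
import json

s3 = boto3.client("s3")

# Tag an object with its compliance classification.
s3.put_object_tagging(
    Bucket="example-compliance-bucket",           # placeholder bucket
    Key="reports/2024/q1.csv",                    # placeholder key
    Tagging={"TagSet": [{"Key": "classification", "Value": "restricted"}]},
)

# An IAM policy that allows reads only on objects tagged as restricted.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-compliance-bucket/*",
            "Condition": {
                "StringEquals": {"s3:ExistingObjectTag/classification": "restricted"}
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```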
A company wants to convert video files and audio files from their source format into a format that will play on smartphones, tablets, and web browsers.
Which AWS service will meet these requirements?
A. Amazon Elastic Transcoder
B. Amazon Comprehend
C. AWS Glue
D. Amazon Rekognition
To convert video and audio files from their source format into a format that will play on smartphones, tablets, and web browsers, you should use:
A. Amazon Elastic Transcoder
Amazon Elastic Transcoder is a fully managed media transcoding service that allows you to convert media files from their source format into different formats suitable for various devices and playback scenarios, including smartphones, tablets, and web browsers. It’s designed to make it easy to transcode and optimize media for delivery over the internet and across different platforms. This service is ideal for tasks like video and audio format conversion, resizing, and more.
Which of the following are benefits of Amazon EC2 Auto Scaling? (Choose two.)
A. Improved health and availability of applications
B. Reduced network latency
C. Optimized performance and costs
D. Automated snapshots of data
E. Cross-Region Replication
The benefits of Amazon EC2 Auto Scaling include:
A. Improved health and availability of applications:
- Auto Scaling helps maintain the desired number of instances and replaces unhealthy instances, which improves the overall health and availability of applications. It helps ensure that your application can handle changes in traffic and maintain responsiveness.
C. Optimized performance and costs:
- Auto Scaling can automatically adjust the number of instances in response to changing workloads. This optimization helps control costs by scaling out when traffic is high and scaling in when demand decreases.
Options B (Reduced network latency), D (Automated snapshots of data), and E (Cross-Region Replication) are not typically associated with Amazon EC2 Auto Scaling. Auto Scaling primarily focuses on managing the number of EC2 instances based on workload changes to ensure application availability and cost optimization.
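As an optional illustration of the performance and cost point, the boto3 sketch below attaches a target tracking policy that keeps average CPU near 50 percent by adding or removing instances; the Auto Scaling group name and target value are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group out when average CPU rises above the target and in when it falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="example-web-asg",     # placeholder group name
    PolicyName="keep-cpu-near-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                    # assumed target utilization
    },
)
```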
A company has several departments. Each department has its own AWS accounts for its applications. The company wants all AWS costs on a single invoice to simplify payment, but the company wants to know the costs that each department is incurring.
Which AWS tool or feature will provide this functionality?
A. AWS Cost and Usage Reports
B. Consolidated billing
C. Savings Plans
D. AWS Budgets
To achieve the goal of having all AWS costs on a single invoice while still being able to track costs incurred by each department, the company should use:
B. Consolidated billing
AWS Consolidated Billing allows an organization to consolidate multiple AWS accounts (belonging to different departments, teams, or business units) into a single paying account. This provides the benefit of a single invoice for all the accounts. However, it also maintains individual billing and cost tracking for each of the linked AWS accounts. Each linked account can continue to access and manage its own resources and services, and the payer account can see the costs associated with each linked account. This way, you can simplify payment while still tracking costs by department or account.
Option A (AWS Cost and Usage Reports) is a reporting tool that can provide detailed usage and cost data, but it does not offer the ability to consolidate billing. Options C (Savings Plans) and D (AWS Budgets) are related to cost optimization and budgeting but do not address the consolidation of billing and cost tracking for multiple AWS accounts.
A company runs its workloads on premises. The company wants to forecast the cost of running a large application on AWS.
Which AWS service or tool can the company use to obtain this information?
A. AWS Pricing Calculator
B. AWS Budgets
C. AWS Trusted Advisor
D. Cost Explorer
To forecast the cost of running a large application on AWS, the company can use:
A. AWS Pricing Calculator
The AWS Pricing Calculator is a web-based service provided by AWS that allows you to estimate and calculate the cost of running various AWS services based on your specific usage and requirements. You can input details about the services you plan to use, such as the type and number of instances, storage, data transfer, and more. The calculator provides you with cost estimates and helps you plan your AWS budget.
Option B (AWS Budgets) is used for creating and managing custom budgets to track your AWS cost and usage over time, but it is not specifically for forecasting costs. Options C (AWS Trusted Advisor) and D (Cost Explorer) are more focused on providing recommendations and insights into cost optimization and usage, rather than forecasting costs for a specific workload.
A company wants to eliminate the need to guess infrastructure capacity before deployments. The company also wants to spend its budget on cloud resources only as the company uses the resources.
Which advantage of the AWS Cloud matches the company’s requirements?
A. Reliability
B. Global reach
C. Economies of scale
D. Pay-as-you-go pricing
The advantage of the AWS Cloud that matches the company’s requirements is:
D. Pay-as-you-go pricing
Pay-as-you-go pricing in the AWS Cloud means that you only pay for the cloud resources you consume, without the need to make upfront commitments or long-term contracts. This eliminates the need to guess infrastructure capacity before deployments and allows you to align your expenses with your actual resource usage. It provides flexibility and cost efficiency by scaling resources up or down as needed, so you can make the most of your budget while avoiding over-provisioning.
Option A (Reliability) relates to the availability and durability of AWS services but is not directly related to cost or pricing. Option B (Global reach) is about AWS’s presence in multiple regions and does not specifically address pricing. Option C (Economies of scale) is a factor that contributes to the cost-effectiveness of AWS, but it doesn’t directly address the elimination of guessing infrastructure capacity and pay-as-you-go pricing.
Which AWS service supports a hybrid architecture that gives users the ability to extend AWS infrastructure, AWS services, APIs, and tools to data centers, co-location environments, or on-premises facilities?
A. AWS Snowmobile
B. AWS Local Zones
C. AWS Outposts
D. AWS Fargate
The AWS service that supports a hybrid architecture, enabling users to extend AWS infrastructure, AWS services, APIs, and tools to data centers, co-location environments, or on-premises facilities, is:
C. AWS Outposts
AWS Outposts is a service that allows you to run AWS infrastructure and services on-premises. It extends the AWS cloud to your own data centers or co-location facilities, creating a seamless hybrid architecture that integrates your on-premises environment with AWS. This enables you to leverage the benefits of AWS services locally while still having the ability to connect and manage your on-premises resources.
Options A (AWS Snowmobile), B (AWS Local Zones), and D (AWS Fargate) are AWS services with different purposes and do not primarily focus on hybrid architectures or extending on-premises environments.
A company has a physical tape library to store data backups. The tape library is running out of space. The company needs to extend the tape library’s capacity to the AWS Cloud.
Which AWS service should the company use to meet this requirement?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon S3
C. Amazon Elastic File System (Amazon EFS)
D. AWS Storage Gateway
To extend the capacity of a physical tape library to the AWS Cloud, the company should use:
D. AWS Storage Gateway
AWS Storage Gateway is a service that enables hybrid cloud storage between on-premises environments and AWS. It allows you to seamlessly integrate your on-premises applications and storage with the AWS Cloud. One of the modes of AWS Storage Gateway is the “Tape Gateway” mode, which helps you extend your existing backup infrastructure to the cloud. This mode allows you to store backups in Amazon S3 and Amazon S3 Glacier, providing an efficient and cost-effective way to extend your backup capabilities.
Options A (Amazon EBS), B (Amazon S3), and C (Amazon EFS) are AWS storage services, but they do not directly address the need to extend a physical tape library’s capacity to the AWS Cloud in the same manner as AWS Storage Gateway’s Tape Gateway mode.
An online retail company has seasonal sales spikes several times a year, primarily around holidays. Demand is lower at other times. The company finds it difficult to predict the increasing infrastructure demand for each season.
Which advantages of moving to the AWS Cloud would MOST benefit the company? (Choose two.)
A. Global footprint
B. Elasticity
C. AWS service quotas
D. AWS shared responsibility model
E. Pay-as-you-go pricing
The advantages of moving to the AWS Cloud that would MOST benefit the online retail company with seasonal sales spikes are:
B. Elasticity: Elasticity in the AWS Cloud allows the company to dynamically adjust its infrastructure resources to meet varying demand. During seasonal sales spikes, the company can easily scale up its resources to handle the increased workload and scale down during periods of lower demand. This elasticity ensures that the company can efficiently manage its infrastructure without over-provisioning or incurring unnecessary costs.
E. Pay-as-you-go pricing: Pay-as-you-go pricing in the AWS Cloud means the company only pays for the resources it consumes, allowing cost optimization during periods of lower demand. The company can avoid upfront capital expenses and adapt its spending to align with business needs, especially during seasonal peaks.
Options A (Global footprint), C (AWS service quotas), and D (AWS shared responsibility model) are valuable aspects of the AWS Cloud, but they are not as directly related to addressing the company’s challenges with seasonal sales spikes and infrastructure demand management as the elasticity and pay-as-you-go pricing advantages.
Which AWS service can be used to turn text into lifelike speech?
A. Amazon Polly
B. Amazon Kendra
C. Amazon Rekognition
D. Amazon Connect
A. Amazon Polly
Amazon Polly is the AWS service that can be used to turn text into lifelike speech. It provides text-to-speech capabilities, allowing you to convert text input into natural-sounding speech in various languages and voices.
Which AWS service or tool can be used to capture information about inbound and outbound traffic in an Amazon VPC?
A. VPC Flow Logs
B. Amazon Inspector
C. VPC endpoint services
D. NAT gateway
A. VPC Flow Logs
Amazon VPC Flow Logs is the AWS service or tool that can be used to capture information about inbound and outbound traffic in an Amazon Virtual Private Cloud (VPC). VPC Flow Logs provide detailed network traffic information, which can be useful for monitoring, troubleshooting, and analyzing network activity within a VPC.
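For illustration, the boto3 call below enables flow logs for a VPC and sends the records to CloudWatch Logs; the VPC ID, log group name, and IAM role ARN are placeholders, and logs can alternatively be delivered to Amazon S3.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture accepted and rejected traffic for the whole VPC.
ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],      # placeholder VPC ID
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="example-vpc-flow-logs",       # placeholder log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/example-flow-logs-role",
)
```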
A company wants to ensure that two Amazon EC2 instances are in separate data centers with minimal communication latency between the data centers.
How can the company meet this requirement?
A. Place the EC2 instances in two separate AWS Regions connected with a VPC peering connection.
B. Place the EC2 instances in two separate Availability Zones within the same AWS Region.
C. Place one EC2 instance on premises and the other in an AWS Region. Then connect them by using an AWS VPN connection.
D. Place both EC2 instances in a placement group for dedicated bandwidth.
B. Place the EC2 instances in two separate Availability Zones within the same AWS Region.
To ensure that two Amazon EC2 instances are in separate data centers with minimal communication latency between the data centers, you should place the EC2 instances in two separate Availability Zones within the same AWS Region. Availability Zones are designed to provide physically separated and isolated data centers with low-latency connections, making them an ideal choice for high availability and minimal latency between instances.
In which situations should a company create an IAM user instead of an IAM role? (Choose two.)
A. When an application that runs on Amazon EC2 instances requires access to other AWS services
B. When the company creates AWS access credentials for individuals
C. When the company creates an application that runs on a mobile phone that makes requests to AWS
D. When the company needs to add users to IAM groups
E. When users are authenticated in the corporate network and want to be able to use AWS without having to sign in a second time
B. When the company creates AWS access credentials for individuals
D. When the company needs to add users to IAM groups
IAM users are intended for identities that need long-term credentials and direct management, for example:
B. When the company creates AWS access credentials for individuals, such as employees or administrators, IAM users provide named identities with passwords or access keys and controlled permissions.
D. When the company needs to add users to IAM groups, IAM users are required, because only IAM users (not roles) can be members of IAM groups, and groups simplify permission management for sets of people.
IAM roles should be used in the remaining scenarios: applications running on Amazon EC2 instances (Option A) and mobile applications (Option C) should obtain temporary credentials through roles rather than storing long-term access keys, and users who are already authenticated in the corporate network (Option E) should use identity federation with roles for single sign-on.
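To show the role-based pattern end to end, here is a hedged boto3 sketch that creates a role EC2 can assume, grants it read access to S3 and DynamoDB through AWS managed policies, and associates it with an instance via an instance profile. The role, profile, and instance identifiers are placeholders.

```python
import boto3
import json

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy letting EC2 assume the role on behalf of the application.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(RoleName="example-app-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="example-app-role",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")
iam.attach_role_policy(RoleName="example-app-role",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess")

# Wrap the role in an instance profile and attach it to the instance.
iam.create_instance_profile(InstanceProfileName="example-app-profile")
iam.add_role_to_instance_profile(InstanceProfileName="example-app-profile",
                                 RoleName="example-app-role")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "example-app-profile"},
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
)
```

Code running on that instance can then create AWS SDK clients without any stored keys; the SDK picks up temporary credentials from the instance metadata automatically.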
Which AWS services should a company use to read and write data that changes frequently? (Choose two.)
A. Amazon S3 Glacier
B. Amazon RDS
C. AWS Snowball
D. Amazon Redshift
E. Amazon Elastic File System (Amazon EFS)
B. Amazon RDS
E. Amazon Elastic File System (Amazon EFS)
Amazon RDS (Relational Database Service) is a managed relational database service that allows you to create, read, and write data to a database that can handle frequent data changes and transactions.
Amazon Elastic File System (Amazon EFS) is a managed file storage service that’s designed to work with multiple EC2 instances, making it suitable for reading and writing data that changes frequently and needs to be shared across multiple compute resources.
Which AWS service is used to provide encryption for Amazon EBS?
A. AWS Certificate Manager
B. AWS Systems Manager
C. AWS KMS
D. AWS Config
C. AWS KMS (Key Management Service)
AWS Key Management Service (KMS) is used to provide encryption for Amazon EBS (Elastic Block Store) volumes. It allows you to create and manage encryption keys that can be used to protect data at rest.
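For illustration, the boto3 call below creates an encrypted gp3 volume with a customer managed key; the key alias, size, and Availability Zone are placeholders, and omitting KmsKeyId would fall back to the account's default EBS encryption key.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an encrypted EBS volume protected by a KMS key.
ec2.create_volume(
    AvailabilityZone="us-east-1a",        # placeholder Availability Zone
    Size=100,                             # GiB, placeholder size
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/example-ebs-key",     # placeholder customer managed key alias
)
```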
Which AWS services make use of global edge locations? (Choose two.)
A. AWS Fargate
B. Amazon CloudFront
C. AWS Global Accelerator
D. AWS Wavelength
E. Amazon VPC
B. Amazon CloudFront
C. AWS Global Accelerator
Amazon CloudFront and AWS Global Accelerator both make use of AWS’s global edge locations to deliver content and improve the availability and performance of web applications. These edge locations are distributed worldwide to cache and serve content closer to end-users.
A company is operating several factories where it builds products. The company needs the ability to process data, store data, and run applications with local system interdependencies that require low latency.
Which AWS service should the company use to meet these requirements?
A. AWS IoT Greengrass
B. AWS Lambda
C. AWS Outposts
D. AWS Snowball Edge
C. AWS Outposts
AWS Outposts is designed to extend AWS infrastructure to on-premises locations, including factories. It allows you to run AWS services on-premises with the same APIs and control plane used in the AWS Cloud. This would provide the ability to process data, store data, and run applications with low latency while maintaining a local system’s interdependencies.
Which of the following is a recommended design principle for AWS Cloud architecture?
A. Design tightly coupled components.
B. Build a single application component that can handle all the application functionality.
C. Make large changes on fewer iterations to reduce chances of failure.
D. Avoid monolithic architecture by segmenting workloads.
D. Avoid monolithic architecture by segmenting workloads.
Segmenting workloads into smaller, loosely coupled components is a recommended design principle for AWS Cloud architecture. Monolithic architectures can be less flexible and more challenging to scale and manage, while segmenting workloads into smaller components promotes agility, scalability, and easier maintenance.
A company is designing its AWS workloads so that components can be updated regularly and so that changes can be made in small, reversible increments.
Which pillar of the AWS Well-Architected Framework does this design support?
A. Security
B. Performance efficiency
C. Operational excellence
D. Reliability
C. Operational excellence
Designing workloads to be updated regularly, and making changes in small, reversible increments aligns with the pillar of operational excellence. It encourages best practices for operational processes, such as automation, frequent updates, and the ability to quickly adapt to changes, ensuring that systems are running efficiently and reliably.
Which of the following acts as an instance-level firewall to control inbound and outbound access?
A. Network access control list
B. Security groups
C. AWS Trusted Advisor
D. Virtual private gateways
B. Security groups
Security groups act as an instance-level firewall to control inbound and outbound access to Amazon EC2 instances. They allow you to specify rules that control the traffic to and from your instances. Network access control lists (NACLs) operate at the subnet level, and they provide additional network-level controls for inbound and outbound traffic. Virtual private gateways are used in VPN connections, and AWS Trusted Advisor is a service that helps optimize your AWS infrastructure but is not used for firewall rules.
A company has a workload that will run continuously for 1 year. The workload cannot tolerate service interruptions.
Which Amazon EC2 purchasing option will be MOST cost-effective?
A. All Upfront Reserved Instances
B. Partial Upfront Reserved Instances
C. Dedicated Instances
D. On-Demand Instances
A. All Upfront Reserved Instances
For a workload that runs continuously for one year, a Reserved Instance is far more cost-effective than On-Demand pricing, and among the Reserved Instance payment options, All Upfront provides the largest discount because the entire commitment is paid at the start. Partial Upfront Reserved Instances also save money, but at a slightly smaller discount than All Upfront. Dedicated Instances address isolation and compliance needs rather than cost savings, and On-Demand Instances would be the most expensive way to run a steady one-year workload.
Which AWS service helps protect against DDoS attacks?
A. AWS Shield
B. Amazon Inspector
C. Amazon GuardDuty
D. Amazon Detective
A. AWS Shield
AWS Shield is the service designed to protect against Distributed Denial of Service (DDoS) attacks. It provides protection for AWS resources and applications against various types of DDoS attacks, helping to ensure the availability of your applications and data.
Using AWS Config to record, audit, and evaluate changes to AWS resources to enable traceability is an example of which AWS Well-Architected Framework pillar?
A. Security
B. Operational excellence
C. Performance efficiency
D. Cost optimization
A. Security
Using AWS Config to record, audit, and evaluate changes to AWS resources is primarily focused on the “Security” pillar of the AWS Well-Architected Framework. It helps to enhance security by providing visibility into changes made to your AWS resources and helps you maintain compliance and traceability.
Which AWS tool or feature acts as a VPC firewall at the subnet level?
A. Security group
B. Network ACL
C. Traffic Mirroring
D. Internet gateway
B. Network ACL
Network Access Control Lists (Network ACLs) in AWS act as a VPC firewall at the subnet level. They are used to control inbound and outbound traffic at the subnet level, allowing you to define rules to permit or deny traffic based on IP addresses, port ranges, and protocols. Security groups, on the other hand, are used to control traffic at the instance level.
Which AWS service can be used to decouple applications?
A. AWS Config
B. Amazon Simple Queue Service (Amazon SQS)
C. AWS Batch
D. Amazon Simple Email Service (Amazon SES)
The AWS service that can be used to decouple applications is Amazon Simple Queue Service (Amazon SQS).
Amazon SQS is a fully managed message queuing service that enables decoupling of application components or microservices. It allows different parts of an application to communicate by sending messages to a central queue, which can then be consumed by other components when they are ready. This decoupling can help improve system scalability, reliability, and flexibility.
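The decoupling pattern looks roughly like the boto3 sketch below: a producer drops messages onto a queue and a consumer processes them independently, deleting each message when it is done. The queue name and message body are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

# Create (or look up) the queue that sits between the two components.
queue_url = sqs.create_queue(QueueName="example-orders-queue")["QueueUrl"]

# Producer side: enqueue work without knowing anything about the consumer.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 1234}')

# Consumer side: poll for work, process it, then delete the message.
response = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=1,
                               WaitTimeSeconds=10)
for message in response.get("Messages", []):
    print("processing", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```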
Which disaster recovery option is the LEAST expensive?
A. Warm standby
B. Multisite
C. Backup and restore
D. Pilot light
The "Backup and restore" disaster recovery option is typically the least expensive among the provided options.
With backup and restore, data and configurations are backed up (for example, to Amazon S3) and no standby infrastructure runs until a disaster occurs, at which point the environment is rebuilt and the data is restored. Because almost nothing is running during normal operation, it carries the lowest ongoing cost, although it also has the longest recovery time.
Here's a brief overview of the other options:
- Pilot light keeps only core components, such as databases, running in a minimal standby state so the rest of the environment can be launched quickly. It costs more than backup and restore because some resources are always on.
- Warm standby involves having a scaled-down but fully functional copy of the environment running at all times. It is more expensive than pilot light because more resources are actively running.
- Multisite (active/active) runs full production capacity in multiple locations. It offers the best availability and the fastest recovery, but it is the most expensive option due to duplicated infrastructure and continuous data replication.
In general, the disaster recovery strategies rank from least to most expensive as backup and restore, pilot light, warm standby, and multisite, with recovery time decreasing as cost increases.
Which type of AWS storage is ephemeral and is deleted when an Amazon EC2 instance is stopped or terminated?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon EC2 instance store
C. Amazon Elastic File System (Amazon EFS)
D. Amazon S3
The type of AWS storage that is ephemeral and is deleted when an Amazon EC2 instance is stopped or terminated is Amazon EC2 instance store (option B).
EC2 instance store is local, temporary storage that is physically attached to the host machine of an EC2 instance. It is often used for temporary storage of data that doesn’t need to be persistent, such as caching, temporary files, or scratch data. However, data stored on EC2 instance store volumes is lost when the instance is stopped or terminated, as it is tied to the lifecycle of the instance itself.
In contrast, Amazon Elastic Block Store (EBS) provides persistent block storage that can be detached from an EC2 instance and attached to another, making it suitable for storing data that needs to persist beyond the lifecycle of an instance. Amazon Elastic File System (EFS) is a network-attached file storage service, and Amazon S3 is an object storage service, both of which are designed for durable, highly available, and scalable storage.
Which of the following is a characteristic of the AWS account root user?
A. The root user is the only user that can be configured with multi-factor authentication (MFA).
B. The root user is the only user that can access the AWS Management Console.
C. The root user is the first sign-in identity that is available when an AWS account is created.
D. The root user has a password that cannot be changed.
The characteristic of the AWS account root user is:
C. The root user is the first sign-in identity that is available when an AWS account is created.
The root user is created by default when an AWS account is set up. It has the highest level of privilege in the AWS account and can perform any action in the account. It is important to secure the root user’s credentials, as it is the most powerful and should be used only for initial setup and emergency situations. It’s recommended to configure multi-factor authentication (MFA) for the root user for added security. Other users and roles should be created for day-to-day operations to follow best security practices.
A company hosts an application on an Amazon EC2 instance. The EC2 instance needs to access several AWS resources, including Amazon S3 and Amazon DynamoDB.
What is the MOST operationally efficient solution to delegate permissions?
A. Create an IAM role with the required permissions. Attach the role to the EC2 instance.
B. Create an IAM user and use its access key and secret access key in the application.
C. Create an IAM user and use its access key and secret access key to create a CLI profile in the EC2 instance
D. Create an IAM role with the required permissions. Attach the role to the administrative IAM user.
The MOST operationally efficient solution to delegate permissions for an Amazon EC2 instance to access AWS resources like Amazon S3 and Amazon DynamoDB is:
A. Create an IAM role with the required permissions and attach the role to the EC2 instance.
By using IAM roles and attaching them to the EC2 instance, you can provide secure and temporary credentials to the EC2 instance without needing to manage access keys directly. This approach is more secure and operationally efficient compared to using IAM users’ access keys in your EC2 instance. Using IAM roles for EC2 instances is the recommended best practice for delegating permissions and ensuring the security of your AWS resources.
Options B and C suggest using IAM users’ access keys, which is not the best practice for EC2 instances, as it requires manual management of access keys and can be less secure.
Option D is not operationally efficient and is not the best practice, as it suggests attaching a role to an administrative IAM user, which doesn’t directly address the requirement of granting permissions to an EC2 instance.
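A rough boto3 sketch of option A is shown below; the role name, managed policies, and instance ID are placeholders chosen for illustration.

import json
import boto3

iam = boto3.client('iam')
ec2 = boto3.client('ec2')

# Trust policy that lets EC2 assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(RoleName='app-instance-role',
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Attach AWS managed policies for S3 and DynamoDB read access (example permissions only).
iam.attach_role_policy(RoleName='app-instance-role',
                       PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess')
iam.attach_role_policy(RoleName='app-instance-role',
                       PolicyArn='arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess')

# An instance profile is the container that attaches the role to an EC2 instance.
iam.create_instance_profile(InstanceProfileName='app-instance-profile')
iam.add_role_to_instance_profile(InstanceProfileName='app-instance-profile',
                                 RoleName='app-instance-role')
ec2.associate_iam_instance_profile(
    IamInstanceProfile={'Name': 'app-instance-profile'},
    InstanceId='i-0123456789abcdef0')  # placeholder instance ID

Once the profile is associated, the application on the instance receives temporary credentials automatically, with no access keys to store or rotate.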
Which of the following is a component of the AWS Global Infrastructure?
A. Amazon Alexa
B. AWS Regions
C. Amazon Lightsail
D. AWS Organizations
B. AWS Regions is a component of the AWS Global Infrastructure.
AWS Regions are physical locations around the world where AWS has data centers. Each AWS Region is a separate geographic area and is entirely isolated from the other AWS Regions to provide fault tolerance. AWS customers can choose the AWS Region where they want to run their applications and services, allowing them to be close to their end-users or meet data residency requirements.
Options A, C, and D are not components of the AWS Global Infrastructure. Amazon Alexa is a voice-controlled virtual assistant, Amazon Lightsail is a simplified virtual server offering, and AWS Organizations is a service for managing multiple AWS accounts.
What is the purpose of having an internet gateway within a VPC?
A. To create a VPN connection to the VPC
B. To allow communication between the VPC and the internet
C. To impose bandwidth constraints on internet traffic
D. To load balance traffic from the internet across Amazon EC2 instances
B. To allow communication between the VPC and the internet is the purpose of having an internet gateway within a VPC.
An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It serves as a gateway for traffic to and from the internet, enabling resources within your VPC to access the internet or be accessed from the internet.
Options A, C, and D do not accurately describe the primary purpose of an internet gateway within a VPC. An internet gateway does not create VPN connections, impose bandwidth constraints, or load balance traffic from the internet across Amazon EC2 instances.
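For illustration, the boto3 sketch below creates and attaches an internet gateway and adds the default route that sends internet-bound traffic to it; the VPC and route table IDs are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Create an internet gateway and attach it to the VPC (placeholder IDs).
igw_id = ec2.create_internet_gateway()['InternetGateway']['InternetGatewayId']
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId='vpc-0123456789abcdef0')

# Route all internet-bound traffic from a public subnet's route table to the internet gateway.
ec2.create_route(RouteTableId='rtb-0123456789abcdef0',
                 DestinationCidrBlock='0.0.0.0/0',
                 GatewayId=igw_id)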
Which AWS service allows users to download security and compliance reports about the AWS infrastructure on demand?
A. Amazon GuardDuty
B. AWS Security Hub
C. AWS Artifact
D. AWS Shield
C. AWS Artifact allows users to download security and compliance reports about the AWS infrastructure on demand. AWS Artifact provides access to various compliance reports, including SOC reports, PCI DSS reports, and more. Users can access and download these reports to help demonstrate and validate their security and compliance efforts in the AWS Cloud.
A pharmaceutical company operates its infrastructure in a single AWS Region. The company has thousands of VPCs in various AWS accounts that it wants to interconnect.
Which AWS service or feature should the company use to help simplify management and reduce operational costs?
A. VPC endpoint
B. AWS Direct Connect
C. AWS Transit Gateway
D. VPC peering
C. AWS Transit Gateway should be used by the pharmaceutical company to help simplify management and reduce operational costs when interconnecting thousands of VPCs across various AWS accounts within a single AWS Region. AWS Transit Gateway acts as a hub for connecting multiple VPCs and VPN connections, making it easier to manage and scale the network connectivity within your AWS environment. It provides a centralized and simplified way to route traffic and enforce policies across interconnected VPCs. This is particularly useful in scenarios with a large number of VPCs that need to communicate efficiently.
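As a simplified, single-account sketch (a real multi-account deployment would also share the transit gateway through AWS Resource Access Manager), the boto3 calls below create a transit gateway and attach one VPC to it; all IDs are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Create a transit gateway to act as the regional network hub.
tgw_id = ec2.create_transit_gateway(
    Description='hub for interconnecting VPCs')['TransitGateway']['TransitGatewayId']

# Attach a VPC to the transit gateway (repeat per VPC; in practice, wait until the
# transit gateway reaches the 'available' state before creating attachments).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId='vpc-0123456789abcdef0',
    SubnetIds=['subnet-0123456789abcdef0'])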
A company is planning an infrastructure deployment to the AWS Cloud. Before the deployment, the company wants a cost estimate for running the infrastructure.
Which AWS service or feature can provide this information?
A. Cost Explorer
B. AWS Trusted Advisor
C. AWS Cost and Usage Report
D. AWS Pricing Calculator
D. AWS Pricing Calculator can provide a cost estimate for running infrastructure in the AWS Cloud. The AWS Pricing Calculator is a web-based tool that allows you to estimate your monthly bill based on your expected usage of AWS services and resources. It helps you understand the cost implications of your architectural decisions and allows you to plan and budget for your AWS infrastructure before deployment. You can input details about the services and resources you plan to use, and it will provide you with a cost estimate based on that information.
Which AWS service or tool helps to centrally manage billing and allow controlled access to resources across AWS accounts?
A. AWS Identity and Access Management (IAM)
B. AWS Organizations
C. Cost Explorer
D. AWS Budgets
B. AWS Organizations helps to centrally manage billing and allows controlled access to resources across AWS accounts. AWS Organizations is a service that allows you to centrally manage multiple AWS accounts within your organization. It helps you consolidate billing, apply policies across accounts, and simplify the management of your AWS resources. By using AWS Organizations, you can create a hierarchical structure of accounts and apply Service Control Policies (SCPs) to control the actions that accounts and their users can perform within the organization. This allows for better resource governance and cost management across multiple AWS accounts.
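As an illustrative sketch, the boto3 calls below create and attach a simple service control policy; the policy content and OU ID are placeholders, and SCPs must already be enabled in the organization.

import json
import boto3

org = boto3.client('organizations')

# Example SCP that prevents member accounts from leaving the organization.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "organizations:LeaveOrganization",
        "Resource": "*"
    }]
}

policy_id = org.create_policy(
    Name='deny-leave-organization',
    Description='Prevent member accounts from leaving the organization',
    Type='SERVICE_CONTROL_POLICY',
    Content=json.dumps(scp))['Policy']['PolicySummary']['Id']

# Attach the SCP to an organizational unit (placeholder OU ID).
org.attach_policy(PolicyId=policy_id, TargetId='ou-xxxx-xxxxxxxx')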
Which of the following are Amazon Virtual Private Cloud (Amazon VPC) resources?
A. Objects; access control lists (ACLs)
B. Subnets; internet gateways
C. Access policies; buckets
D. Groups; roles
B. Subnets; internet gateways are Amazon Virtual Private Cloud (Amazon VPC) resources.
Amazon VPC resources include subnets, route tables, network ACLs, security groups, internet gateways, and more. These resources are used to configure and control your VPC’s network environment within AWS. Subnets are logical divisions of an IP address range within your VPC, and internet gateways enable outbound and inbound traffic between your VPC and the internet.
A company needs to identify the last time that a specific user accessed the AWS Management Console.
Which AWS service will provide this information?
A. Amazon Cognito
B. AWS CloudTrail
C. Amazon Inspector
D. Amazon GuardDuty
B. AWS CloudTrail provides the capability to track user access to the AWS Management Console and other AWS services. You can use CloudTrail logs to identify when specific users accessed the AWS Management Console and what actions they performed, including the last time they accessed it. CloudTrail records API calls, including console sign-in events, which can be used to audit user activity in AWS.
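A minimal boto3 sketch of this lookup is shown below; the user name is a placeholder, and the CloudTrail event history covers roughly the last 90 days.

import boto3

cloudtrail = boto3.client('cloudtrail')

# Look up recent console sign-in events and print the most recent one for a given user.
events = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'ConsoleLogin'}],
    MaxResults=50)['Events']

for event in events:                      # events are returned newest first
    if event.get('Username') == 'alice':  # placeholder user name
        print(event['EventTime'])
        break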
A company launched an Amazon EC2 instance with the latest Amazon Linux 2 Amazon Machine Image (AMI).
Which actions can a system administrator take to connect to the EC2 instance? (Choose two.)
A. Use Amazon EC2 Instance Connect.
B. Use a Remote Desktop Protocol (RDP) connection.
C. Use AWS Batch
D. Use AWS Systems Manager Session Manager.
E. Use Amazon Connect
To connect to an Amazon EC2 instance launched with the latest Amazon Linux 2 AMI, you can use the following methods:
A. Use Amazon EC2 Instance Connect: This allows you to connect directly from the AWS Management Console or using the AWS CLI without requiring a separate key pair.
D. Use AWS Systems Manager Session Manager: AWS Systems Manager provides a secure and auditable way to access EC2 instances. Session Manager is a part of AWS Systems Manager that allows you to connect to and manage your instances through a web-based shell or the AWS CLI.
Remote Desktop Protocol (RDP) connections (option B) are typically used for Windows instances, and AWS Batch (option C) is used for batch processing, not for interactive instance access. Amazon Connect (option E) is a service for setting up cloud-based contact centers and is not used for connecting to EC2 instances.
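As a rough sketch of both supported options, the boto3 calls below push a one-time SSH key with EC2 Instance Connect and open a Session Manager session; the instance ID, OS user, and key file are placeholders, and an interactive Session Manager shell also requires the SSM Agent on the instance plus the Session Manager plugin on the client.

import boto3

# EC2 Instance Connect: push a one-time SSH public key to the instance.
eic = boto3.client('ec2-instance-connect')
eic.send_ssh_public_key(
    InstanceId='i-0123456789abcdef0',
    InstanceOSUser='ec2-user',
    SSHPublicKey=open('my_key.pub').read())  # placeholder public key file

# Session Manager: start a session (typically driven via `aws ssm start-session` in practice).
ssm = boto3.client('ssm')
session = ssm.start_session(Target='i-0123456789abcdef0')
print(session['SessionId'])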
A company wants to perform sentiment analysis on customer service email messages that it receives. The company wants to identify whether the customer service engagement was positive or negative.
Which AWS service should the company use to perform this analysis?
A. Amazon Textract
B. Amazon Translate
C. Amazon Comprehend
D. Amazon Rekognition
To perform sentiment analysis on customer service email messages and identify whether the customer service engagement was positive or negative, the company should use Amazon Comprehend (option C). Amazon Comprehend is a natural language processing (NLP) service that includes sentiment analysis as one of its features. It can analyze text to determine the sentiment, such as positive, negative, or neutral, in the given content.
Options like Amazon Textract (option A) are used for text extraction from documents, Amazon Translate (option B) is used for language translation, and Amazon Rekognition (option D) is used for image and video analysis. These services are not designed for sentiment analysis.
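For illustration, a minimal boto3 call for this kind of sentiment analysis might look like the following; the email text is a placeholder.

import boto3

comprehend = boto3.client('comprehend')

# Detect the overall sentiment of a customer email.
result = comprehend.detect_sentiment(
    Text='Thank you, the support agent resolved my issue quickly!',
    LanguageCode='en')

print(result['Sentiment'])        # e.g. POSITIVE, NEGATIVE, NEUTRAL, or MIXED
print(result['SentimentScore'])   # confidence scores for each sentiment class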
What is the total amount of storage offered by Amazon S3?
A. 100MB
B. 5 GB
C. 5 TB
D. Unlimited
Amazon S3 (Simple Storage Service) does not have a fixed total amount of storage. It is a scalable object storage service that lets you store as much data as you need; an individual object can be up to 5 TB in size, but the total volume of data and the number of objects you can store are unlimited. Therefore, the correct answer is D. Unlimited.
A company is migrating to Amazon S3. The company needs to transfer 60 TB of data from an on-premises data center to AWS within 10 days.
Which AWS service should the company use to accomplish this migration?
A. Amazon S3 Glacier
B. AWS Database Migration Service (AWS DMS)
C. AWS Snowball
D. AWS Direct Connect
For a large-scale data migration like this, where you need to transfer a significant amount of data to Amazon S3 within a specified time frame, AWS Snowball is the most suitable service.
AWS Snowball is a physical data transfer service that allows you to securely move large amounts of data into and out of AWS. It provides a physical storage device that you can ship to your data center for the initial data transfer. Once the data is loaded onto the Snowball device, you can ship it back to AWS, where the data is then imported into an S3 bucket.
In this case, you can use AWS Snowball to transfer the 60 TB of data from your on-premises data center to AWS within the 10-day timeframe. Snowball offers a secure and efficient way to handle large data transfer tasks.
What type of database is Amazon DynamoDB?
A. In-memory
B. Relational
C. Key-value
D. Graph
The correct answer is C. Key-value. Amazon DynamoDB is a NoSQL database service provided by AWS that is categorized as a key-value and document database. While it primarily stores data in a key-value format, it also supports more complex structures such as documents, lists, and maps. DynamoDB is designed for high availability, scalability, and low-latency performance.
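As a small sketch of the key-value access pattern, assuming a hypothetical table named Customers with partition key customer_id:

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Customers')  # placeholder table

# Write and read an item by its key -- the key-value access pattern.
table.put_item(Item={'customer_id': '42', 'name': 'Ana', 'tier': 'gold'})
item = table.get_item(Key={'customer_id': '42'})['Item']
print(item['name'])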
A large organization has a single AWS account.
What are the advantages of reconfiguring the single account into multiple AWS accounts? (Choose two.)
A. It allows for administrative isolation between different workloads.
B. Discounts can be applied on a quarterly basis by submitting cases in the AWS Management Console.
C. Transitioning objects from Amazon S3 to Amazon S3 Glacier in separate AWS accounts will be less expensive.
D. Having multiple accounts reduces the risks associated with malicious activity targeted at a single account.
E. Amazon QuickSight offers access to a cost tool that provides application-specific recommendations for environments running in multiple accounts.
Reconfiguring a single AWS account into multiple AWS accounts provides several advantages, but not all the options you mentioned are correct:
A. It allows for administrative isolation between different workloads.
- Advantage: By having separate AWS accounts for different workloads or teams, you can enforce stricter access control and isolate administrative responsibilities.
D. Having multiple accounts reduces the risks associated with malicious activity targeted at a single account.
- Advantage: Isolating workloads into separate accounts helps limit the blast radius in case of security breaches or malicious activity in one account, reducing the overall risk.
Options B, C, and E are not valid advantages of having multiple AWS accounts:
B. Discounts can be applied on a quarterly basis by submitting cases in the AWS Management Console.
- Discounts and pricing depend on various factors but are not typically tied to the number of AWS accounts.
C. Transitioning objects from Amazon S3 to Amazon S3 Glacier in separate AWS accounts will be less expensive.
- This statement doesn’t accurately describe the cost model for transitioning objects between Amazon S3 and Amazon S3 Glacier.
E. Amazon QuickSight offers access to a cost tool that provides application-specific recommendations for environments running in multiple accounts.
- While AWS provides various cost management tools, this specific statement doesn’t accurately represent the functionality of Amazon QuickSight or AWS cost management tools.
A retail company has recently migrated its website to AWS. The company wants to ensure that it is protected from SQL injection attacks. The website uses an Application Load Balancer to distribute traffic to multiple Amazon EC2 instances.
Which AWS service or feature can be used to create a custom rule that blocks SQL injection attacks?
A. Security groups
B. AWS WAF
C. Network ACLs
D. AWS Shield
To protect against SQL injection attacks in this scenario, you should use AWS WAF.
B. AWS WAF is a web application firewall that lets you create custom rules and web ACLs (web access control lists) to protect your web applications from common exploits, including SQL injection attacks. You can associate an AWS WAF web ACL with your Application Load Balancer (ALB) so that incoming web requests are inspected and malicious traffic is blocked based on the rules you define. This helps protect your application from common security threats.
So, the correct answer is option B: AWS WAF.
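For illustration only, a boto3 (wafv2) sketch that creates a web ACL with a SQL injection match rule and associates it with an Application Load Balancer; the names, metric names, and ALB ARN are placeholders, and in practice many teams use the AWS managed SQL injection rule group instead of a hand-built rule.

import boto3

wafv2 = boto3.client('wafv2')

# Create a web ACL with one rule that blocks requests whose body matches SQL injection patterns.
acl = wafv2.create_web_acl(
    Name='block-sqli',
    Scope='REGIONAL',  # REGIONAL scope is used for resources such as ALBs
    DefaultAction={'Allow': {}},
    VisibilityConfig={'SampledRequestsEnabled': True,
                      'CloudWatchMetricsEnabled': True,
                      'MetricName': 'block-sqli'},
    Rules=[{
        'Name': 'sqli-rule',
        'Priority': 0,
        'Statement': {'SqliMatchStatement': {
            'FieldToMatch': {'Body': {}},
            'TextTransformations': [{'Priority': 0, 'Type': 'URL_DECODE'}]}},
        'Action': {'Block': {}},
        'VisibilityConfig': {'SampledRequestsEnabled': True,
                             'CloudWatchMetricsEnabled': True,
                             'MetricName': 'sqli-rule'}}])

# Associate the web ACL with the Application Load Balancer (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl['Summary']['ARN'],
    ResourceArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123')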