AWS Certified Solutions Architect Associate Practice Test 2 - Results (Bonzo) Flashcards
For data privacy, a healthcare company has been asked to comply with the Health Insurance Portability and Accountability Act (HIPAA). The company stores all its backups on an Amazon S3 bucket. It is required that data stored on the S3 bucket must be encrypted.
What is the best option to do this? (Select TWO.)
A. Before sending the data to Amazon S3 over HTTPS, encrypt the data locally first using your own encryption keys
B. Enable Server Side Encryption on an S3 bucket to make use of AES-128 encryption
C. Store the data in encrypted EBS snapshots
D. Store the data on EBS volumes with encryption enabled instead of using Amazon S3
E. Enable Server Side Encryption on an S3 bucket to make use of AES-256 encryption
A. Before sending the data to Amazon S3 over HTTPS, encrypt the data locally first using your own encryption keys
E. Enable Server Side Encryption on an S3 bucket to make use of AES-256 encryption
Explanation:
Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. For example, if you share your objects using a pre-signed URL, that URL works the same way for both encrypted and unencrypted objects.
You have three mutually exclusive options depending on how you choose to manage the encryption keys:
Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS) Use Server-Side Encryption with Customer-Provided Keys (SSE-C)
The options that say: Before sending the data to Amazon S3 over HTTPS, encrypt the data locally first using your own encryption keys and Enable Server-Side Encryption on an S3 bucket to make use of AES-256 encryption are correct because these options are using client-side encryption and Amazon S3-Managed Keys (SSE-S3) respectively. Client-side encryption is the act of encrypting data before sending it to Amazon S3 while SSE-S3 uses AES-256 encryption.
Storing the data on EBS volumes with encryption enabled instead of using Amazon S3 and storing the data in encrypted EBS snapshots are incorrect because both options use EBS encryption and not S3.
Enabling Server-Side Encryption on an S3 bucket to make use of AES-128 encryption is incorrect as S3 doesn’t provide AES-128 encryption, only AES-256.
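As an illustration only, the following boto3 sketch shows how SSE-S3 (AES-256) can be enabled as a bucket default and requested on an individual upload over HTTPS; the bucket name and object key are hypothetical. For client-side encryption, the data would be encrypted locally with your own keys before calling put_object.

```python
import boto3

s3 = boto3.client("s3")

# Enforce SSE-S3 (AES-256) as the bucket default (hypothetical bucket name)
s3.put_bucket_encryption(
    Bucket="example-hipaa-backups",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Or request SSE-S3 explicitly on a single upload
s3.put_object(
    Bucket="example-hipaa-backups",
    Key="backups/2023-01-01.tar.gz",
    Body=open("backup.tar.gz", "rb"),
    ServerSideEncryption="AES256",
)
```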
An accounting application uses an RDS database configured with Multi-AZ deployments to improve availability. What would happen to RDS if the primary database instance fails?
A. The IP Address of the primary DB instance is switched to the standby DB instance
B. The canonical name record (CNAME) is switched from the primary to the standby instance
C. The primary database instance will reboot
D. A new database instance is created in the standby Availability Zone
B. The canonical name record (CNAME) is switched from the primary to the standby instance
Explanation:
In Amazon RDS, failover is automatically handled so that you can resume database operations as quickly as possible without administrative intervention in the event that your primary database instance goes down. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary.
The option that says: The IP address of the primary DB instance is switched to the standby DB instance is incorrect since IP addresses are per subnet, and subnets cannot span multiple AZs.
The option that says: The primary database instance will reboot is incorrect because rebooting the failed primary instance does not restore availability; Amazon RDS instead fails over to the standby instance.
The option that says: A new database instance is created in the standby Availability Zone is incorrect since with multi-AZ enabled, you already have a standby database in another AZ.
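As a rough illustration (not part of the original explanation), the CNAME flip can be observed by resolving the DB instance endpoint before and after a forced failover; the instance identifier below is hypothetical.

```python
import socket
import boto3

rds = boto3.client("rds")

endpoint = rds.describe_db_instances(DBInstanceIdentifier="accounting-db")[
    "DBInstances"][0]["Endpoint"]["Address"]
print("Endpoint resolves to:", socket.gethostbyname(endpoint))

# Simulate a primary failure; Multi-AZ promotes the standby to primary
rds.reboot_db_instance(DBInstanceIdentifier="accounting-db", ForceFailover=True)

# After failover completes, the same endpoint (CNAME) resolves to the new primary
print("Endpoint resolves to:", socket.gethostbyname(endpoint))
```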
A Solutions Architect works for a multinational gaming company that develops video games for PS4, Xbox One, and Nintendo Switch consoles, plus a number of mobile games for Android and iOS. Due to the wide range of their products and services, the architect proposed that they use API Gateway.
What are the key features of API Gateway that the architect can tell the client? (Select TWO.)
A. Enables you to build RESTful APIs and WebSocket APIs that are optimized for serverless workloads
B. Provides you with static anycast IP addresses that serve as a fixed entry point to your applications hosted in one or more AWS regions
C. You pay only for the API calls you receive and the amount of data transferred out
D. Enables you to run applications requiring high levels of inter-node communications at scale on AWS through its custom built OS bypass hardware interface
E. It automatically provides a query language for your APIs similar to GraphQL
A. Enables you to build RESTful APIs and WebSocket APIs that are optimized for serverless workloads
C. You pay only for the API calls you receive and the amount of data transferred out
Explanation:
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any web application. Since it can use AWS Lambda, you can run your APIs without servers.
Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out.
Hence, the correct answers are:
- Enables you to build RESTful APIs and WebSocket APIs that are optimized for serverless workloads
- You pay only for the API calls you receive and the amount of data transferred out.
The option that says: It automatically provides a query language for your APIs similar to GraphQL is incorrect because this is not provided by API Gateway.
The option that says: Provides you with static anycast IP addresses that serve as a fixed entry point to your applications hosted in one or more AWS Regions is incorrect because this is a capability of AWS Global Accelerator and not API Gateway.
The option that says: Enables you to run applications requiring high levels of inter-node communications at scale on AWS through its custom-built operating system (OS) bypass hardware interface is incorrect because this is a capability of Elastic Fabric Adapter and not API Gateway.
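For illustration, the sketch below (hypothetical API names) creates the two API types highlighted in the correct answers using boto3; API Gateway then bills only for the calls received and the data transferred out.

```python
import boto3

apigw = boto3.client("apigatewayv2")

# RESTful (HTTP) API suited to serverless backends such as AWS Lambda
http_api = apigw.create_api(Name="games-rest-api", ProtocolType="HTTP")

# WebSocket API for real-time, two-way client communication
ws_api = apigw.create_api(
    Name="games-websocket-api",
    ProtocolType="WEBSOCKET",
    RouteSelectionExpression="$request.body.action",
)

print(http_api["ApiEndpoint"], ws_api["ApiEndpoint"])
```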
A company has a requirement to move an 80 TB data warehouse to the cloud. It would take 2 months to transfer the data given its current bandwidth allocation.
Which is the most cost-effective service that would allow the company to quickly upload its data into AWS?
A. AWS S3 Multipart Upload
B. AWS Snowball Edge
C. AWS Direct Connect
D. AWS Snowmobile
B. AWS Snowball Edge
Explanation:
AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.
Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality.
Snowball Edge devices have three options for device configurations – storage optimized, compute optimized, and with GPU.
Hence, the correct answer is: AWS Snowball Edge.
AWS Snowmobile is incorrect because this is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. It is not suitable for transferring a relatively small amount of data, like the 80 TB in this scenario. You can transfer up to 100 PB per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. A more cost-effective solution here is to order a Snowball Edge device instead.
AWS Direct Connect is incorrect because it is primarily used to establish a dedicated network connection from your premises network to AWS. This is not suitable for one-time data transfer tasks, like what is depicted in the scenario.
Amazon S3 Multipart Upload is incorrect because this feature simply enables you to upload large objects in multiple parts. It still uses the company's existing Internet connection, which means that the transfer will still take a long time due to the current bandwidth allocation.
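A back-of-the-envelope calculation (assuming, for example, a ~120 Mbps effective uplink, which roughly matches the 2-month estimate in the scenario) shows why shipping the data on a Snowball Edge device beats the online transfer:

```python
# Rough transfer-time estimate for 80 TB over a constrained link
data_bits = 80 * 10**12 * 8          # 80 TB expressed in bits
bandwidth_bps = 120 * 10**6          # assumed ~120 Mbps effective throughput

seconds = data_bits / bandwidth_bps
print(f"Online transfer: ~{seconds / 86400:.0f} days")   # ~62 days, about 2 months

# A Snowball Edge job (ship, copy, return, ingest) typically completes in days to
# about a week, which is why it is the faster and more cost-effective option here.
```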
A company plans to migrate all of their applications to AWS. The Solutions Architect suggested storing all the data on EBS volumes. The Chief Technical Officer is worried that EBS volumes are not appropriate for the existing workloads due to compliance requirements, downtime scenarios, and IOPS performance.
Which of the following are valid points in proving that EBS is the best service to use for migration? (Select TWO.)
A. EBS volumes can be attached to any EC2 Instance in any Availability Zone
B. An EBS volume is off-instance storage that can persist independently from the life of an instance
C. Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to Amazon RDS, where it is stored redundantly in multiple Availability Zones
D. When you create an EBS volume in an Availability Zone, it is automatically replicated on a separate AWS region to prevent data loss due to a failure of any single hardware component
E. EBS Volumes support live configuration changes while in production which means that you can modify the volume type, volume size and IOPS capacity without service interruptions
B. An EBS volume is off-instance storage that can persist independently from the life of an instance
E. EBS Volumes support live configuration changes while in production which means that you can modify the volume type, volume size and IOPS capacity without service interruptions
Explanation:
An Amazon EBS volume is a durable, block-level storage device that you can attach to a single EC2 instance. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application. You can also use them for throughput-intensive applications that perform continuous disk scans. EBS volumes persist independently from the running life of an EC2 instance.
Here is a list of important information about EBS Volumes:
- When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to a failure of any single hardware component.
- An EBS volume can only be attached to one EC2 instance at a time.
- After you create a volume, you can attach it to any EC2 instance in the same Availability Zone.
- An EBS volume is off-instance storage that can persist independently from the life of an instance. You can specify not to terminate the EBS volume when you terminate the EC2 instance during instance creation.
- EBS volumes support live configuration changes while in production which means that you can modify the volume type, volume size, and IOPS capacity without service interruptions.
- Amazon EBS encryption uses 256-bit Advanced Encryption Standard algorithms (AES-256).
- EBS Volumes offer 99.999% SLA.
The option that says: When you create an EBS volume in an Availability Zone, it is automatically replicated on a separate AWS region to prevent data loss due to a failure of any single hardware component is incorrect because when you create an EBS volume in an Availability Zone, it is automatically replicated within that zone only, and not on a separate AWS region, to prevent data loss due to a failure of any single hardware component.
The option that says: EBS volumes can be attached to any EC2 Instance in any Availability Zone is incorrect as EBS volumes can only be attached to an EC2 instance in the same Availability Zone.
The option that says: Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to Amazon RDS, where it is stored redundantly in multiple Availability Zones is almost correct. But instead of storing the volume to Amazon RDS, the EBS Volume snapshots are actually sent to Amazon S3.
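A short boto3 sketch (illustrative values only) of the two points in the correct answers: creating an encrypted, off-instance volume and then modifying its size, type, and IOPS live through Elastic Volumes, without service interruption.

```python
import boto3

ec2 = boto3.client("ec2")

# Off-instance block storage that persists independently of any EC2 instance
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,          # AES-256 encryption at rest
)

# Live configuration change: grow the volume and raise IOPS with no downtime
ec2.modify_volume(VolumeId=volume["VolumeId"], Size=200, VolumeType="gp3", Iops=6000)
```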
A media company has two VPCs: VPC-1 and VPC-2, with a peering connection between them. VPC-1 only contains private subnets while VPC-2 only contains public subnets. The company uses a single AWS Direct Connect connection and a virtual interface to connect their on-premises network with VPC-1.
Which of the following options increase the fault tolerance of the connection to VPC-1? (Select TWO.)
A. Establish a hardware VPN over the Internet between VPC-1 and the on-premises network
B. Establish a hardware VPN over the Internet between VPC-2 and the on-premises network
C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2
D. Establish another AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1
E. Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2
A. Establish a hardware VPN over the Internet between VPC-1 and the on-premises network
D. Establish another AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1
Explanation:
In this scenario, you have two VPCs which have peering connections with each other. Note that a VPC peering connection does not support edge to edge routing. This means that if either VPC in a peering relationship has one of the following connections, you cannot extend the peering relationship to that connection:
- A VPN connection or an AWS Direct Connect connection to a corporate network
- An Internet connection through an Internet gateway
- An Internet connection in a private subnet through a NAT device
- A gateway VPC endpoint to an AWS service; for example, an endpoint to Amazon S3.
- (IPv6) A ClassicLink connection. You can enable IPv4 communication between a linked EC2-Classic instance and instances in a VPC on the other side of a VPC peering connection. However, IPv6 is not supported in EC2-Classic, so you cannot extend this connection for IPv6 communication.
For example, if VPC A and VPC B are peered, and VPC A has any of these connections, then instances in VPC B cannot use the connection to access resources on the other side of the connection. Similarly, resources on the other side of a connection cannot use the connection to access VPC B.
Hence, this means that you cannot use VPC-2 to extend the peering relationship that exists between VPC-1 and the on-premises network. For example, traffic from the corporate network can’t directly access VPC-1 by using the VPN connection or the AWS Direct Connect connection to VPC-2, which is why the following options are incorrect:
- Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
- Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.
- Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
You can do the following to provide a highly available, fault-tolerant network connection:
- Establish a hardware VPN over the Internet between the VPC and the on-premises network.
- Establish another AWS Direct Connect connection and private virtual interface in the same AWS region.
A company is running a multi-tier web application farm in a virtual private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage the fleet of Amazon EC2 instances running in both the public and private subnets. The Solutions Architect has added a bastion host with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC.
Which of the following bastion host deployment options will meet this requirement?
A. Deploy a Windows Bastion host on the corporate network that has RDP access to all EC2 instances in the VPC
B. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow RDP access to bastion only from the corporate IP addresses
C. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow SSH access to the bastion from anywhere
D. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and restrict RDP access to the bastion from only the corporate public IP addresses
B. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow RDP access to bastion only from the corporate IP addresses
Explanation:
The correct answer is to deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow RDP access to bastion only from the corporate IP addresses.
A bastion host is a special purpose computer on a network specifically designed and configured to withstand attacks. If you have a bastion host in AWS, it is basically just an EC2 instance. It should be in a public subnet with either a public or Elastic IP address with sufficient RDP or SSH access defined in the security group. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private subnets.
Deploying a Windows Bastion host on the corporate network that has RDP access to all EC2 instances in the VPC is incorrect since you do not deploy the Bastion host to your corporate network. It should be in the public subnet of a VPC.
Deploying a Windows Bastion host with an Elastic IP address in the private subnet and restricting RDP access to the bastion from only the corporate public IP addresses is incorrect since it should be deployed in a public subnet, not a private subnet.
Deploying a Windows Bastion host with an Elastic IP address in the public subnet and allowing SSH access to the bastion from anywhere is incorrect. Since it is a Windows bastion, you should allow RDP access and not SSH as this is mainly used for Linux-based systems.
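The security group rule described in the correct answer might look like the following boto3 sketch; the security group ID and corporate CIDR block are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow RDP (TCP 3389) to the bastion host only from the corporate public IP range
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",          # bastion host security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Corporate office"}],
    }],
)
```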
A company plans to conduct a network security audit. The web application is hosted on an Auto Scaling group of EC2 Instances with an Application Load Balancer in front to evenly distribute the incoming traffic. A Solutions Architect has been tasked to enhance the security posture of the company’s cloud infrastructure and minimize the impact of DDoS attacks on its resources.
Which of the following is the most effective solution that should be implemented?
A. Configure Amazon CloudFront distribution and set a Network Load Balancer as the origin. Use Amazon GuardDuty to block suspicious hosts based on its security findings. Set up a custom AWS Lambda function that processes the security logs and invokes Amazon SNS for notification
B. Configure Amazon CloudFront distribution and set Application Load Balancer as the origin. Create a rate-based web ACL rule using AWS WAF and associate it with Amazon CloudFront
C. Configure Amazon CloudFront Distribution and set a Network Load Balancer as the origin. Use VPC Flow Logs to monitor abnormal traffic patterns. Set up a custom AWS Lambda function that processes the flow logs and invokes Amazon SNS for notification
D. Configure Amazon CloudFront distribution and set an Application Load Balancer as the origin. Create a security group rule and deny all the suspicious addresses. Use Amazon SNS for notification
B. Configure Amazon CloudFront distribution and set Application Load Balancer as the origin. Create a rate-based web ACL rule using AWS WAF and associate it with Amazon CloudFront
Explanation:
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define. You can deploy AWS WAF on Amazon CloudFront as part of your CDN solution, the Application Load Balancer that fronts your web servers or origin servers running on EC2, or Amazon API Gateway for your APIs.
To detect and mitigate DDoS attacks, you can use AWS WAF in addition to AWS Shield. AWS WAF is a web application firewall that helps detect and mitigate web application layer DDoS attacks by inspecting traffic inline. Application layer DDoS attacks use well-formed but malicious requests to evade mitigation and consume application resources. You can define custom security rules that contain a set of conditions, rules, and actions to block attacking traffic. After you define web ACLs, you can apply them to CloudFront distributions, and web ACLs are evaluated in the priority order you specified when you configured them.
By using AWS WAF, you can configure web access control lists (Web ACLs) on your CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. Each Web ACL consists of rules that you can configure to string match or regex match one or more request attributes, such as the URI, query-string, HTTP method, or header key. In addition, by using AWS WAF’s rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define. Requests from offending client IP addresses will receive 403 Forbidden error responses and will remain blocked until request rates drop below the threshold. This is useful for mitigating HTTP flood attacks that are disguised as regular web traffic.
It is recommended that you add web ACLs with rate-based rules as part of your AWS Shield Advanced protection. These rules can alert you to sudden spikes in traffic that might indicate a potential DDoS event. A rate-based rule counts the requests that arrive from any individual address in any five-minute period. If the number of requests exceeds the limit that you define, the rule can trigger an action such as sending you a notification.
Hence, the correct answer is: Configure Amazon CloudFront distribution and set Application Load Balancer as the origin. Create a rate-based web ACL rule using AWS WAF and associate it with Amazon CloudFront.
The option that says: Configure Amazon CloudFront distribution and set a Network Load Balancer as the origin. Use VPC Flow Logs to monitor abnormal traffic patterns. Set up a custom AWS Lambda function that processes the flow logs and invokes Amazon SNS for notification is incorrect because this option only allows you to monitor the traffic that is reaching your instance. You can’t use VPC Flow Logs to mitigate DDoS attacks.
The option that says: Configure Amazon CloudFront distribution and set an Application Load Balancer as the origin. Create a security group rule and deny all the suspicious addresses. Use Amazon SNS for notification is incorrect. To deny suspicious addresses, you must manually insert the IP addresses of these hosts. This is a manual task which is not a sustainable solution. Take note that attackers generate large volumes of packets or requests to overwhelm the target system. Using a security group in this scenario won’t help you mitigate DDoS attacks.
The option that says: Configure Amazon CloudFront distribution and set a Network Load Balancer as the origin. Use Amazon GuardDuty to block suspicious hosts based on its security findings. Set up a custom AWS Lambda function that processes the security logs and invokes Amazon SNS for notification is incorrect because Amazon GuardDuty is just a threat detection service. You should use AWS WAF and create your own AWS WAF rate-based rules for mitigating HTTP flood attacks that are disguised as regular web traffic.
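A rate-based web ACL for a CloudFront distribution can be sketched with boto3 as shown below; the names and the request limit are illustrative, and CLOUDFRONT-scoped web ACLs must be created from the us-east-1 Region. The resulting web ACL would then be associated with the CloudFront distribution.

```python
import boto3

# CLOUDFRONT-scoped web ACLs are managed from us-east-1
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="ddos-protection-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 1,
        # Block any client IP that exceeds 2,000 requests in a 5-minute window
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitPerIP",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "DdosProtectionAcl",
    },
)
```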
A company hosts its web application on a set of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The application has an embedded NoSQL database. As the application receives more traffic, the application becomes overloaded mainly due to database requests. The management wants to ensure that the database is eventually consistent and highly available.
Which of the following options can meet the company requirements with the least operational overhead?
A. Configure the Auto Scaling group to spread the Amazon EC2 instances across three availability zones. Use the AWS Database Migration Service (DMS) with a replication server and an ongoing replication task to migrate the embedded NoSQL database to Amazon DynamoDB
B. Change the ALB with a Network Load Balancer to handle more traffic. Use the AWS Database Migration Service (DMS) to migrate the embedded NoSQL database to Amazon DynamoDB
C. Configure the Auto Scaling groups to spread the Amazon EC2 instances across three availability zones. Configure replication of the NoSQL database on the set of Amazon EC2 instances to spread the database load
D. Change the ALB with a Network Load Balancer (NLB) to handle more traffic and integrate AWS Global Accelerator to ensure high availability. Configure replication of the NoSQL database on the set of Amazon EC2 Instances to spread the database load
A. Configure the Auto Scaling group to spread the Amazon EC2 instances across three availability zones. Use the AWS Database Migration Service (DMS) with a replication server and an ongoing replication task to migrate the embedded NoSQL database to Amazon DynamoDB
Explanation:
AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. You can use AWS DMS to migrate your data into the AWS Cloud or between combinations of cloud and on-premises setups.
With AWS DMS, you can perform one-time migrations, and you can replicate ongoing changes to keep sources and targets in sync. If you want to migrate to a different database engine, you can use the AWS Schema Conversion Tool (AWS SCT) to translate your database schema to the new platform. You then use AWS DMS to migrate the data. Because AWS DMS is a part of the AWS Cloud, you get the cost efficiency, speed to market, security, and flexibility that AWS services offer.
You can use AWS DMS to migrate data to an Amazon DynamoDB table. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. AWS DMS supports using a relational database or MongoDB as a source.
Therefore, the correct answer is: Configure the Auto Scaling group to spread the Amazon EC2 instances across three Availability Zones. Use the AWS Database Migration Service (DMS) with a replication server and an ongoing replication task to migrate the embedded NoSQL database to Amazon DynamoDB. Using an Auto Scaling group of EC2 instances and migrating the embedded database to Amazon DynamoDB will ensure that both the application and database are highly available with low operational overhead.
The option that says: Change the ALB with a Network Load Balancer (NLB) to handle more traffic and integrate AWS Global Accelerator to ensure high availability. Configure replication of the NoSQL database on the set of Amazon EC2 instances to spread the database load is incorrect. It is not recommended to run a production system with an embedded database on EC2 instances. A better option is to migrate the database to a managed AWS service such as Amazon DynamoDB, so you won’t have to manually maintain, patch, provision and scale your database yourself. In addition, using an AWS Global Accelerator is not warranted since the architecture is only hosted in a single AWS region and not in multiple regions.
The option that says: Change the ALB with a Network Load Balancer (NLB) to handle more traffic. Use the AWS Database Migration Service (DMS) to migrate the embedded NoSQL database to Amazon DynamoDB is incorrect. The scenario did not require handling millions of requests per second or very low latency to justify the use of an NLB. The ALB should be able to scale and handle the increasing traffic.
The option that says: Configure the Auto Scaling group to spread the Amazon EC2 instances across three Availability Zones. Configure replication of the NoSQL database on the set of Amazon EC2 instances to spread the database load is incorrect. This may be possible, but it entails an operational overhead of manually configuring the embedded database to replicate and scale with the EC2 instances. It would be better to migrate the database to a managed AWS database service such as Amazon DynamoDB.
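A minimal boto3 sketch of the DMS piece, assuming the replication instance and the source/target endpoints (the embedded NoSQL source and a DynamoDB target) have already been created; all ARNs are placeholders.

```python
import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="nosql-to-dynamodb",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",       # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:DYNAMODB",     # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:REPLICATION",  # placeholder
    MigrationType="full-load-and-cdc",   # one-time load plus ongoing replication
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection", "rule-id": "1", "rule-name": "1",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```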
A company has developed public APIs hosted in Amazon EC2 instances behind an Elastic Load Balancer. The APIs will be used by various clients from their respective on-premises data centers. A Solutions Architect received a report that the web service clients can only access trusted IP addresses whitelisted on their firewalls.
What should you do to accomplish the above requirement?
A. Create an Alias Record in Route 53 which maps to the DNS name of the load balancer
B. Associate an Elastic IP address to an Application Load Balancer
C. Associate an Elastic IP address to a Network Load Balancer
D. Create a CloudFront distribution whose origin points to the private IP addresses of your web servers
C. Associate an Elastic IP address to a Network Load Balancer
Explanation:
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the default rule’s target group. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.
Based on the given scenario, web service clients can only access trusted IP addresses. To resolve this requirement, you can use the Bring Your Own IP (BYOIP) feature to use the trusted IPs as Elastic IP addresses (EIP) to a Network Load Balancer (NLB). This way, there’s no need to re-establish the whitelists with new IP addresses.
Hence, the correct answer is: Associate an Elastic IP address to a Network Load Balancer.
The option that says: Associate an Elastic IP address to an Application Load Balancer is incorrect because you can’t assign an Elastic IP address to an Application Load Balancer. The alternative method you can do is assign an Elastic IP address to a Network Load Balancer in front of the Application Load Balancer.
The option that says: Create a CloudFront distribution whose origin points to the private IP addresses of your web servers is incorrect because web service clients can only access trusted IP addresses. The fastest way to resolve this requirement is to attach an Elastic IP address to a Network Load Balancer.
The option that says: Create an Alias Record in Route 53 which maps to the DNS name of the load balancer is incorrect. This approach still won't allow the clients to access the application because the load balancer's DNS name resolves to IP addresses that change over time, which cannot be whitelisted as trusted IP addresses on their firewalls.
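Elastic IP addresses are attached at NLB creation time through subnet mappings; a hedged boto3 sketch with placeholder subnet and allocation IDs is shown below.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Each subnet mapping pins a static Elastic IP (which could be a BYOIP address)
# to the Network Load Balancer node in that Availability Zone.
elbv2.create_load_balancer(
    Name="public-api-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-0aaa1111", "AllocationId": "eipalloc-0aaa1111"},  # placeholders
        {"SubnetId": "subnet-0bbb2222", "AllocationId": "eipalloc-0bbb2222"},
    ],
)
```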
Both historical records and frequently accessed data are stored on an on-premises storage system. The amount of current data is growing at an exponential rate. As the storage’s capacity is nearing its limit, the company’s Solutions Architect has decided to move the historical records to AWS to free up space for the active data.
Which of the following architectures deliver the best solution in terms of cost and operational management?
A. Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data
B. Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
C. Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days
D. Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days
B. Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
Explanation:
AWS DataSync makes it simple and fast to move large amounts of data online between on-premises storage and Amazon S3, Amazon Elastic File System (Amazon EFS), or Amazon FSx for Windows File Server. Manual tasks related to data transfers can slow down migrations and burden IT operations. DataSync eliminates or automatically handles many of these tasks, including scripting copy jobs, scheduling, and monitoring transfers, validating data, and optimizing network utilization. The DataSync software agent connects to your Network File System (NFS), Server Message Block (SMB) storage, and your self-managed object storage, so you don’t have to modify your applications.
DataSync can transfer hundreds of terabytes and millions of files at speeds up to 10 times faster than open-source tools, over the Internet or AWS Direct Connect links. You can use DataSync to migrate active data sets or archives to AWS, transfer data to the cloud for timely analysis and processing, or replicate data to AWS for business continuity. Getting started with DataSync is easy: deploy the DataSync agent, connect it to your file system, select your AWS storage resources, and start moving data between them. You pay only for the data you move.
Since the problem is mainly about moving historical records from on-premises to AWS, using AWS DataSync is a more suitable solution. You can use DataSync to move cold data from expensive on-premises storage systems directly to durable and secure long-term storage, such as Amazon S3 Glacier or Amazon S3 Glacier Deep Archive.
Hence, the correct answer is the option that says: Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
The following options are both incorrect:
- Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
- Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
Although you can copy data from on-premises to AWS with Storage Gateway, it is not suitable for transferring large sets of data to AWS. Storage Gateway is mainly used in providing low-latency access to data by caching frequently accessed data on-premises while storing archive data securely and durably in Amazon cloud storage services. Storage Gateway optimizes data transfer to AWS by sending only changed data and compressing data.
The option that says: Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days is incorrect because, with AWS DataSync, you can transfer data from on-premises directly to Amazon S3 Glacier Deep Archive. You don’t have to configure the S3 lifecycle policy and wait for 30 days to move the data to Glacier Deep Archive.
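The destination side of the DataSync setup could look like this boto3 sketch (bucket, IAM role, and source location ARNs are placeholders); note that the S3 location is created directly with the Deep Archive storage class, so no lifecycle rule is needed.

```python
import boto3

datasync = boto3.client("datasync")

# S3 destination that writes objects straight into Glacier Deep Archive
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-historical-records",                            # placeholder
    S3StorageClass="DEEP_ARCHIVE",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/DataSyncS3Role"},  # placeholder
)

# Task that copies from the on-premises (NFS/SMB) location registered earlier
datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-onprem",  # placeholder
    DestinationLocationArn=destination["LocationArn"],
    Name="archive-historical-records",
)
```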
A company has a top priority requirement to monitor a few database metrics and then afterward, send email notifications to the Operations team in case there is an issue. Which AWS services can accomplish this requirement? (Select TWO.)
A. Amazon EC2 Instance with a running Berkley Internet Name Domain (BIND) Server
B. Amazon CloudWatch
C. Amazon Simple Queue Service (SQS)
D. Amazon Simple Email Service
E. Amazon Simple Notification Service (SNS)
B. Amazon CloudWatch
E. Amazon Simple Notification Service (SNS)
Explanation:
Amazon CloudWatch and Amazon Simple Notification Service (SNS) are correct. In this requirement, you can use Amazon CloudWatch to monitor the database and then Amazon SNS to send the emails to the Operations team. Take note that you should use SNS rather than SES (Simple Email Service) because SNS is the service that CloudWatch alarms integrate with directly for notifications.
CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS, and on-premises servers.
SNS is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.
Amazon Simple Email Service is incorrect. SES is a cloud-based email sending service designed to send notifications and transactional emails.
Amazon Simple Queue Service (SQS) is incorrect. SQS is a fully-managed message queuing service. It does not monitor applications nor send email notifications, unlike SES.
Amazon EC2 Instance with a running Berkeley Internet Name Domain (BIND) Server is incorrect because BIND is primarily used as a Domain Name System (DNS) web service. This is only applicable if you have a private hosted zone in your AWS account. It does not monitor applications nor send email notifications.
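The CloudWatch-to-SNS wiring could be sketched as follows; the topic name, metric, threshold, instance identifier, and email address are all hypothetical.

```python
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Topic the Operations team subscribes to by email
topic_arn = sns.create_topic(Name="db-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops-team@example.com")

# Alarm on a database metric; the alarm action publishes to the SNS topic
cloudwatch.put_metric_alarm(
    AlarmName="rds-high-cpu",
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-db"}],   # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)
```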
A company has a static corporate website hosted in a standard S3 bucket and a new web domain name that was registered using Route 53. You are instructed by your manager to integrate these two services in order to successfully launch their corporate website.
What are the prerequisites when routing traffic using Amazon Route 53 to a website that is hosted in an Amazon S3 Bucket? (Select TWO.)
A. A registered domain name
B. The S3 bucket must be in the same region as the hosted zone
C. The Cross Origin Resource Sharing (CORS) option should be enabled in the S3 bucket
D. The record set must be of type MX
E. The S3 bucket name must be the same as the domain name
A. A registered domain name
E. The S3 bucket name must be the same as the domain name
Explanation:
Here are the prerequisites for routing traffic to a website that is hosted in an Amazon S3 Bucket:
- An S3 bucket that is configured to host a static website. The bucket must have the same name as your domain or subdomain. For example, if you want to use the subdomain portal.tutorialsdojo.com, the name of the bucket must be portal.tutorialsdojo.com.
- A registered domain name. You can use Route 53 as your domain registrar, or you can use a different registrar.
- Route 53 as the DNS service for the domain. If you register your domain name by using Route 53, we automatically configure Route 53 as the DNS service for the domain.
The option that says: The record set must be of type “MX” is incorrect since an MX record specifies the mail server responsible for accepting email messages on behalf of a domain name. This is not what is being asked by the question.
The option that says: The S3 bucket must be in the same region as the hosted zone is incorrect. There is no constraint that the S3 bucket must be in the same region as the hosted zone in order for the Route 53 service to route traffic into it.
The option that says: The Cross-Origin Resource Sharing (CORS) option should be enabled in the S3 bucket is incorrect because you only need to enable Cross-Origin Resource Sharing (CORS) when your client web application on one domain interacts with the resources in a different domain.
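A hedged boto3 sketch of the alias record, assuming the bucket portal.tutorialsdojo.com is already configured for static website hosting; the alias target shown is the S3 website endpoint hosted zone for us-east-1, which differs by Region, and the domain's hosted zone ID is a placeholder.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",              # your domain's hosted zone (placeholder)
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "portal.tutorialsdojo.com",   # must match the bucket name
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z3AQBSTGFYJSTF",                  # S3 website zone, us-east-1
                "DNSName": "s3-website-us-east-1.amazonaws.com",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```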
An organization needs to control the access for several S3 buckets. They plan to use a gateway endpoint to allow access to trusted buckets.
Which of the following could help you achieve this requirement?
A. Generate a bucket policy for trusted VPCs
B. Generate an endpoint policy for trusted VPCs
C. Generate a bucket policy for trusted S3 buckets
D. Generate an endpoint policy for trusted S3 buckets
D. Generate an endpoint policy for trusted S3 buckets
Explanation:
A Gateway endpoint is a type of VPC endpoint that provides reliable connectivity to Amazon S3 and DynamoDB without requiring an internet gateway or a NAT device for your VPC. Instances in your VPC do not require public IP addresses to communicate with resources in the service.
When you create a Gateway endpoint, you can attach an endpoint policy that controls access to the service to which you are connecting. You can modify the endpoint policy attached to your endpoint and add or remove the route tables used by the endpoint. An endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies). It is a separate policy for controlling access from the endpoint to the specified service.
We can use either a bucket policy or an endpoint policy to allow traffic to trusted S3 buckets, so the options that mention 'trusted S3 buckets' are the possible answers in this scenario. However, it would take a lot of time to configure a bucket policy for each S3 bucket instead of using a single endpoint policy. Therefore, you should use an endpoint policy to control the traffic to the trusted Amazon S3 buckets.
Hence, the correct answer is: Generate an endpoint policy for trusted S3 buckets.
The option that says: Generate a bucket policy for trusted S3 buckets is incorrect. Although this is a valid solution, it takes a lot of time to set up a bucket policy for each and every S3 bucket. This can be simplified by whitelisting access to trusted S3 buckets in a single S3 endpoint policy.
The option that says: Generate a bucket policy for trusted VPCs is incorrect because you are generating a policy for trusted VPCs. Remember that the scenario only requires you to allow the traffic for trusted S3 buckets, not to the VPCs.
The option that says: Generate an endpoint policy for trusted VPCs is incorrect because it only allows access to trusted VPCs, and not to trusted Amazon S3 buckets.
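The endpoint policy might be attached like this (illustrative endpoint ID and bucket names); the single policy document restricts the gateway endpoint to the trusted buckets only.

```python
import json
import boto3

ec2 = boto3.client("ec2")

# Restrict the S3 gateway endpoint to the trusted buckets only
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::trusted-bucket-1", "arn:aws:s3:::trusted-bucket-1/*",   # placeholders
            "arn:aws:s3:::trusted-bucket-2", "arn:aws:s3:::trusted-bucket-2/*",
        ],
    }],
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",        # the S3 gateway endpoint (placeholder)
    PolicyDocument=json.dumps(policy),
)
```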
A media company is setting up an ECS batch architecture for its image processing application. It will be hosted in an Amazon ECS Cluster with two ECS tasks that will handle image uploads from the users and image processing. The first ECS task will process the user requests, store the image in an S3 input bucket, and push a message to a queue. The second task reads from the queue, parses the message containing the object name, and then downloads the object. Once the image is processed and transformed, it will upload the objects to the S3 output bucket. To complete the architecture, the Solutions Architect must create a queue and the necessary IAM permissions for the ECS tasks.
Which of the following should the Architect do next?
A. Launch a new Amazon AppStream 2.0 queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and AppStream 2.0 queue. Declare the IAM role (taskRoleArn) in the task definition
B. Launch a new Amazon Kinesis Data Firehose and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and Kinesis Data Firehose. Specify the ARN of the IAM role in the (taskDefinitionArn) field of the task definition
C. Launch a new Amazon SQS queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and SQS queue. Declare the IAM role (taskRoleArn) in the task definition
D. Launch a new Amazon MQ queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and Amazon MQ queue. Set the (EnableTaskIAMRole) option to true in the task definition
C. Launch a new Amazon SQS queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and SQS queue. Declare the IAM role (taskRoleArn) in the task definition
Explanation:
Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived and embarrassingly parallel. You can package your batch processing application into a Docker image so that you can deploy it anywhere, such as in an Amazon ECS task.
Amazon ECS supports batch jobs. You can use Amazon ECS Run Task action to run one or more tasks once. The Run Task action starts the ECS task on an instance that meets the task’s requirements including CPU, memory, and ports.
For example, you can set up an ECS Batch architecture for an image processing application. You can set up an AWS CloudFormation template that creates an Amazon S3 bucket, an Amazon SQS queue, an Amazon CloudWatch alarm, an ECS cluster, and an ECS task definition. Objects uploaded to the input S3 bucket trigger an event that sends object details to the SQS queue. The ECS task deploys a Docker container that reads from that queue, parses the message containing the object name and then downloads the object. Once transformed it will upload the objects to the S3 output bucket.
By using the SQS queue as the location for all object details, you can take advantage of its scalability and reliability as the queue will automatically scale based on the incoming messages and message retention can be configured. The ECS Cluster will then be able to scale services up or down based on the number of messages in the queue.
You have to create an IAM Role that the ECS task assumes in order to get access to the S3 buckets and SQS queue. Note that the permissions of the IAM role don’t specify the S3 bucket ARN for the incoming bucket. This is to avoid a circular dependency issue in the CloudFormation template. You should always make sure to assign the least amount of privileges needed to an IAM role.
Hence, the correct answer is: Launch a new Amazon SQS queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and SQS queue. Declare the IAM Role (taskRoleArn) in the task definition.
The option that says: Launch a new Amazon AppStream 2.0 queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and AppStream 2.0 queue. Declare the IAM Role (taskRoleArn) in the task definition is incorrect because Amazon AppStream 2.0 is a fully managed application streaming service and can’t be used as a queue. You have to use Amazon SQS instead.
The option that says: Launch a new Amazon Kinesis Data Firehose and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and Kinesis Data Firehose. Specify the ARN of the IAM Role in the (taskDefinitionArn) field of the task definition is incorrect because Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data. Although it can stream data to an S3 bucket, it is not suitable to be used as a queue for a batch application in this scenario. In addition, the ARN of the IAM Role should be declared in the taskRoleArn and not in the taskDefinitionArn field.
The option that says: Launch a new Amazon MQ queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and Amazon MQ queue. Set the (EnableTaskIAMRole) option to true in the task definition is incorrect because Amazon MQ is primarily used as a managed message broker service and not a queue. The EnableTaskIAMRole option is only applicable for Windows-based ECS Tasks that require extra configuration.
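Declaring the task role in the task definition could look like this abbreviated boto3 sketch; the container image, role ARN, and queue URL are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="image-processor",
    # Role the containers assume at runtime to reach the SQS queue and S3 buckets
    taskRoleArn="arn:aws:iam::123456789012:role/EcsImageProcessingTaskRole",            # placeholder
    containerDefinitions=[{
        "name": "worker",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/image-worker:latest",    # placeholder
        "memory": 512,
        "essential": True,
        "environment": [{
            "name": "QUEUE_URL",
            "value": "https://sqs.us-east-1.amazonaws.com/123456789012/image-queue",    # placeholder
        }],
    }],
)
```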
A solutions architect is designing a cost-efficient, highly available storage solution for company data. One of the requirements is to ensure that the previous state of a file is preserved and retrievable if a modified version of it is uploaded. Also, to meet regulatory compliance, data over 3 years must be retained in an archive and will only be accessible once a year.
How should the solutions architect build the solution?
A. Create an S3 Standard Bucket with object level versioning enabled and configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years
B. Create an S3 Standard bucket with S3 Object Lock in compliance mode enabled then configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years
C. Create a One-Zone-IA bucket with object-level versioning enabled and configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years
D. Create an S3 standard bucket and enable S3 Object Lock in governance mode
A. Create an S3 Standard Bucket with object level versioning enabled and configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years
Explanation:
Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets. With versioning, you can recover more easily from both unintended user actions and application failures. After versioning is enabled for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of those objects.
Hence, the correct answer is: Create an S3 Standard bucket with object-level versioning enabled and configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years.
The S3 Object Lock feature allows you to store objects using a write-once-read-many (WORM) model, which prevents object versions from being deleted or overwritten for a fixed amount of time or indefinitely. In this scenario, users must still be able to upload modified versions of a file while the previous versions are preserved and retrievable, so a WORM retention lock adds restrictions that are not required; object-level versioning alone satisfies the requirement.
Therefore, the following options are incorrect:
- Create an S3 Standard bucket and enable S3 Object Lock in governance mode.
- Create an S3 Standard bucket with S3 Object Lock in compliance mode enabled then configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years.
The option that says: Create a One-Zone-IA bucket with object-level versioning enabled and configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years is incorrect. One-Zone-IA is not highly available as it only relies on one availability zone for storing data.
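A short boto3 sketch of the correct answer, using a hypothetical bucket name and a 3-year (1,095-day) transition:

```python
import boto3

s3 = boto3.client("s3")
bucket = "company-data-bucket"   # placeholder

# Preserve previous versions whenever an object is overwritten
s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})

# Move objects older than ~3 years into Glacier Deep Archive
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-after-3-years",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [{"Days": 1095, "StorageClass": "DEEP_ARCHIVE"}],
    }]},
)
```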
A company is running a dashboard application on a Spot EC2 instance inside a private subnet. The dashboard is reachable via a domain name that maps to the private IPv4 address of the instance’s network interface. A solutions architect needs to increase network availability by allowing the traffic flow to resume in another instance if the primary instance is terminated.
Which solution accomplishes these requirements?
A. Create a secondary elastic network interface and point its private IPv4 address to the application's domain name. Attach the new network interface to the primary instance. If the instance goes down, move the secondary network interface to another instance
B. Use the AWS Network Firewall to detach the instance’s primary elastic network interface and move it to a new instance upon failure
C. Attach an elastic IP address to the instance's primary network interface and point its IP address to the application's domain name. Automatically move the EIP to a secondary instance if the primary instance becomes unavailable using the AWS Transit Gateway
D. Set up AWS Transfer for FTPS service in Implicit FTPS mode to automatically disable the source/destination checks on the instance’s primary elastic network interface and reassociate it to another instance
A. Create a secondary elastic network interface and point its private IPv4 address to the application's domain name. Attach the new network interface to the primary instance. If the instance goes down, move the secondary network interface to another instance
Explanation:
If one of your instances serving a particular function fails, its network interface can be attached to a replacement or hot standby instance pre-configured for the same role in order to rapidly recover the service. For example, you can use a network interface as your primary or secondary network interface to a critical service such as a database instance or a NAT instance. If the instance fails, you (or more likely, the code running on your behalf) can attach the network interface to a hot standby instance.
Because the interface maintains its private IP addresses, Elastic IP addresses, and MAC address, network traffic begins flowing to the standby instance as soon as you attach the network interface to the replacement instance. Users experience a brief loss of connectivity between the time the instance fails and the time that the network interface is attached to the standby instance, but no changes to the route table or your DNS server are required.
Hence, the correct answer is Create a secondary elastic network interface and point its private IPv4 address to the application’s domain name. Attach the new network interface to the primary instance. If the instance goes down, move the secondary network interface to another instance.
The option that says: Attach an elastic IP address to the instance’s primary network interface and point its IP address to the application’s domain name. Automatically move the EIP to a secondary instance if the primary instance becomes unavailable using the AWS Transit Gateway is incorrect. Elastic IPs are not needed in the solution since the application is private. Furthermore, an AWS Transit Gateway is primarily used to connect your Amazon Virtual Private Clouds (VPCs) and on-premises networks through a central hub. This particular networking service cannot be used to automatically move an Elastic IP address to another EC2 instance.
The option that says: Set up AWS Transfer for FTPS service in Implicit FTPS mode to automatically disable the source/destination checks on the instance’s primary elastic network interface and reassociate it to another instance is incorrect. First of all, the AWS Transfer for FTPS service is not capable of automatically disabling the source/destination checks and it only supports Explicit FTPS mode. Disabling the source/destination check only allows the instance to which the ENI is connected to act as a gateway (both a sender and a receiver). It is not possible to make the primary ENI of any EC2 instance detachable. A more appropriate solution would be to use an Elastic IP address which can be reassociated with your secondary instance.
The option that says: Use the AWS Network Firewall to detach the instance’s primary elastic network interface and move it to a new instance upon failure is incorrect. It’s not possible to detach the primary network interface of an EC2 instance. In addition, the AWS Network Firewall is only used for filtering traffic at the perimeter of your VPC and not for detaching ENIs.
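Moving the secondary ENI to a standby instance could be scripted roughly as follows; all resource IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
eni_id = "eni-0123456789abcdef0"   # secondary ENI mapped to the dashboard's domain name (placeholder)

# Detach the secondary ENI from the failed primary instance (if still attached)
attachment_id = ec2.describe_network_interfaces(NetworkInterfaceIds=[eni_id])[
    "NetworkInterfaces"][0]["Attachment"]["AttachmentId"]
ec2.detach_network_interface(AttachmentId=attachment_id, Force=True)

# Attach it to the standby instance; traffic resumes with no DNS or route table changes
ec2.attach_network_interface(
    NetworkInterfaceId=eni_id,
    InstanceId="i-0fedcba9876543210",   # standby instance (placeholder)
    DeviceIndex=1,
)
```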
A company is building an internal application that serves as a repository for images uploaded by a couple of users. Whenever a user uploads an image, it would be sent to Kinesis Data Streams for processing before it is stored in an S3 bucket. If the upload was successful, the application will return a prompt informing the user that the operation was successful. The entire processing typically takes about 5 minutes to finish.
Which of the following options will allow you to asynchronously process the request to the application from upload request to Kinesis, S3, and return a reply in the most cost-effective manner?
A. Use a combination of Lambda and Step Functions to orchestrate service components and asynchronously process the requests
B. Use a combination of SNS to buffer the requests and then asynchronously process them using On Demand EC2 instances
C. Use a combination of SQS to queue the requests then asynchronously process them using On Demand EC2 instances
D. Replace the Kinesis Data Streams with an Amazon SQS queue. Create a Lambda function that will asynchronously process the requests
D. Replace the Kinesis Data Streams with an Amazon SQS queue. Create a Lambda function that will asynchronously process the requests
Explanation:
AWS Lambda supports both synchronous and asynchronous invocation of a Lambda function. You can control the invocation type only when you invoke a Lambda function directly. When you use an AWS service as a trigger, the invocation type is predetermined for each service, and you have no control over the invocation type that these event sources use when they invoke your Lambda function. Since processing only takes about 5 minutes, which is well within Lambda's 15-minute maximum execution time, Lambda is also a cost-effective choice.
You can use an AWS Lambda function to process messages in an Amazon Simple Queue Service (Amazon SQS) queue. Lambda event source mappings support standard queues and first-in, first-out (FIFO) queues. With Amazon SQS, you can offload tasks from one component of your application by sending them to a queue and processing them asynchronously.
Kinesis Data Streams is a real-time data streaming service that requires the provisioning of shards. Amazon SQS is a cheaper option because you only pay for what you use. Since there is no requirement for real-time processing in the scenario given, replacing Kinesis Data Streams with Amazon SQS would save more costs.
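To illustrate the pattern, below is a minimal sketch of a Lambda handler that consumes upload messages from the SQS queue and writes the processed image to S3. The bucket name, message fields, and processing step are hypothetical; the queue would be connected to the function through an event source mapping.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "image-repository-bucket"  # hypothetical destination bucket


def process_image(message):
    # Placeholder for the actual image processing logic (takes ~5 minutes).
    return b"processed-image-bytes"


def handler(event, context):
    # Each SQS record carries one upload request sent by the application.
    for record in event["Records"]:
        message = json.loads(record["body"])
        processed_bytes = process_image(message)
        s3.put_object(Bucket=BUCKET, Key=message["image_key"], Body=processed_bytes)
```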
Hence, the correct answer is: Replace the Kinesis Data Streams with an Amazon SQS queue. Create a Lambda function that will asynchronously process the requests.
Using a combination of Lambda and Step Functions to orchestrate service components and asynchronously process the requests is incorrect. The AWS Step Functions service lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Although this can be a valid solution, it is not the most cost-effective one since the application does not have many components to orchestrate; a Lambda function alone can meet the requirements in this scenario without Step Functions.
Using a combination of SQS to queue the requests and then asynchronously processing them using On-Demand EC2 Instances and Using a combination of SNS to buffer the requests and then asynchronously processing them using On-Demand EC2 Instances are both incorrect as using On-Demand EC2 instances is not cost-effective. It is better to use a Lambda function instead.
An organization stores and manages financial records of various companies in its on-premises data center, which is almost out of space. The management decided to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional security, all records must be prevented from being deleted or overwritten.
Which of the following should you do to meet the above requirement?
A. Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock
B. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock
C. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock
D. Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock
D. Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock
Explanation:
AWS DataSync allows you to copy large datasets with millions of files without having to build custom solutions with open source tools or licenses and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity.
AWS DataSync enables you to migrate your on-premises data to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server. You can configure DataSync to make an initial copy of your entire dataset and schedule subsequent incremental transfers of changing data towards Amazon S3. Enabling S3 Object Lock prevents your existing and future records from being deleted or overwritten.
AWS DataSync is primarily used to migrate existing data to Amazon S3. On the other hand, AWS Storage Gateway is more suitable if you still want to retain access to the migrated data and for ongoing updates from your on-premises file-based applications.
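As a rough sketch of the S3 side of this solution, the snippet below (Python/boto3) creates a bucket with Object Lock enabled and applies a default compliance-mode retention so that object versions cannot be deleted or overwritten. The bucket name, region, and retention period are assumptions.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET = "financial-records-archive"  # hypothetical bucket name

# Object Lock can only be enabled at bucket creation time
# (versioning is enabled automatically as part of this).
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Apply a default retention so every new object version is protected
# from deletion or overwrite for the chosen period.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)
```

A DataSync task would then use this bucket as its destination location for the initial migration and the incremental transfers.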
Hence, the correct answer in this scenario is: Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock.
The option that says: Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock is incorrect because Amazon EFS only supports file locking. Object lock is a feature of Amazon S3 and not Amazon EFS.
The option that says: Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock is incorrect because the scenario requires that all of the existing records must be migrated to AWS. The future records will also be stored in AWS and not in the on-premises network. This means that setting up a hybrid cloud storage is not necessary since the on-premises storage will no longer be used.
The option that says: Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS, and enable object lock is incorrect because Amazon EBS does not support object lock. Amazon S3 is the only service capable of locking objects to prevent an object from being deleted or overwritten.
A company needs to assess and audit all the configurations in their AWS account. It must enforce strict compliance by tracking all configuration changes made to any of its Amazon S3 buckets. Publicly accessible S3 buckets should also be identified automatically to avoid data breaches.
Which of the following options will meet this requirement?
A. Use AWS CloudTrail and review the event history of your AWS account
B. Use AWS IAM to generate a credential report
C. Use AWS Config to set up a rule in your AWS account
D. Use AWS Trusted Advisor to analyze your AWS environment
C. Use AWS Config to set up a rule in your AWS account
Explanation:
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.
You can use AWS Config to evaluate the configuration settings of your AWS resources. By creating an AWS Config rule, you can enforce your ideal configuration in your AWS account. It also checks if the applied configuration in your resources violates any of the conditions in your rules. The AWS Config dashboard shows the compliance status of your rules and resources. You can verify if your resources comply with your desired configurations and learn which specific resources are noncompliant.
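For example, a managed AWS Config rule that flags publicly readable S3 buckets can be created with a few lines of boto3. This is a minimal sketch: it assumes an AWS Config configuration recorder is already enabled in the account, and a companion rule could be added for public write access.

```python
import boto3

config = boto3.client("config")

# AWS managed rule that marks S3 buckets allowing public read access as noncompliant.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
        # Evaluate the rule whenever an S3 bucket's configuration changes.
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)
```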
Hence, the correct answer is: Use AWS Config to set up a rule in your AWS account.
The option that says: Use AWS Trusted Advisor to analyze your AWS environment is incorrect because AWS Trusted Advisor only provides best practice recommendations. It cannot define rules for your AWS resources.
The option that says: Use AWS IAM to generate a credential report is incorrect because this report will not help you evaluate resources. The IAM credential report is just a list of all IAM users in your AWS account.
The option that says: Use AWS CloudTrail and review the event history of your AWS account is incorrect. Although it can track changes and store a history of what happened to your resources, this service still cannot enforce rules to comply with your organization’s policies.
A GraphQL API is hosted in an Amazon EKS cluster with the Fargate launch type and deployed using AWS SAM. The API is connected to an Amazon DynamoDB table with an Amazon DynamoDB Accelerator (DAX) as its data store. Both resources are hosted in the us-east-1 region.
The AWS IAM authenticator for Kubernetes is integrated into the EKS cluster for role-based access control (RBAC) and cluster authentication. A solutions architect must improve network security by preventing database calls from traversing the public internet. An automated cross-account backup for the DynamoDB table is also required for long-term retention.
Which of the following should the solutions architect implement to meet the requirement?
A. Create a DynamoDB interface endpoint. Associate the endpoint to the appropriate route table. Enable Point-in-Time Recovery (PITR) to restore the DynamoDB table to a particular point in time on the same or a different AWS account
B. Create a DynamoDB gateway endpoint. Set up a Network Access Control List (NACL) rule that allows outbound traffic to the dynamodb.us-east-1.amazonaws.com gateway endpoint. Use the built-in on-demand DynamoDB backups for cross-account backup and recovery
C. Create a DynamoDB interface endpoint. Set up a stateless rule using AWS Network Firewall to control all outbound traffic to only use the dynamodb.us-east-1.amazonaws.com endpoint. Integrate the DynamoDB table with Amazon Timestream to allow point-in-time recovery from a different AWS account
D. Create a DynamoDB gateway endpoint. Associate the endpoint to the appropriate route table. Use AWS Backup to automatically copy the on-demand DynamoDB backups to another AWS account for disaster recovery
D. Create a DynamoDB gateway endpoint. Associate the endpoint to the appropriate route table. Use AWS Backup to automatically copy the on-demand DynamoDB backups to another AWS account for disaster recovery
Explanation:
Since Amazon DynamoDB is accessed through public service endpoints, applications within a VPC rely on an Internet Gateway (or a NAT device) by default to route traffic to/from Amazon DynamoDB. You can use a gateway endpoint if you want to keep the traffic between your VPC and Amazon DynamoDB within the Amazon network. This way, resources residing in your VPC can use their private IP addresses to access DynamoDB with no exposure to the public internet.
When you create a DynamoDB Gateway endpoint, you specify the VPC where it will be deployed as well as the route table that will be associated with the endpoint. The route table will be updated with an Amazon DynamoDB prefix list (list of CIDR blocks) as the destination and the endpoint’s ID as the target.
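A gateway endpoint for DynamoDB can be created as in the sketch below (Python/boto3); the VPC and route table IDs are hypothetical placeholders for the EKS cluster's VPC.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint for DynamoDB and associate it with the
# route table used by the Fargate pods' subnets.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",               # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],      # hypothetical route table ID
)
```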
DynamoDB on-demand backups are available at no additional cost beyond the normal pricing that’s associated with backup storage size. DynamoDB on-demand backups cannot be copied to a different account or Region. To create backup copies across AWS accounts and Regions and for other advanced features, you should use AWS Backup.
With AWS Backup, you can configure backup policies and monitor activity for your AWS resources and on-premises workloads in one place. Using DynamoDB with AWS Backup, you can copy your on-demand backups across AWS accounts and Regions, add cost allocation tags to on-demand backups, and transition on-demand backups to cold storage for lower costs. To use these advanced features, you must opt into AWS Backup. Opt-in choices apply to the specific account and AWS Region, so you might have to opt into multiple Regions using the same account.
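The cross-account copy can be expressed as a copy action in an AWS Backup plan, as in the minimal sketch below. The vault names, destination account ID, schedule, and retention periods are assumptions, and the DynamoDB table would be assigned to the plan through a separate backup selection.

```python
import boto3

backup = boto3.client("backup")

SOURCE_VAULT = "ddb-backup-vault"  # hypothetical vault in the source account
DEST_VAULT_ARN = (
    "arn:aws:backup:us-east-1:222222222222:backup-vault:central-vault"  # hypothetical
)

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-cross-account",
        "Rules": [
            {
                "RuleName": "daily-with-cross-account-copy",
                "TargetBackupVaultName": SOURCE_VAULT,
                "ScheduleExpression": "cron(0 5 * * ? *)",   # daily backup
                "Lifecycle": {"DeleteAfterDays": 365},
                # Copy each recovery point to the vault in the backup account
                # and keep the copy for long-term retention.
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": DEST_VAULT_ARN,
                        "Lifecycle": {"DeleteAfterDays": 2555},
                    }
                ],
            }
        ],
    }
)
# The DynamoDB table is then assigned to this plan with create_backup_selection().
```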
Hence, the correct answer is: Create a DynamoDB gateway endpoint. Associate the endpoint to the appropriate route table. Use AWS Backup to automatically copy the on-demand DynamoDB backups to another AWS account for disaster recovery.
The option that says: Create a DynamoDB interface endpoint. Associate the endpoint to the appropriate route table. Enable Point-in-Time Recovery (PITR) to restore the DynamoDB table to a particular point in time on the same or a different AWS account is incorrect because Amazon DynamoDB does not support interface endpoint. You have to create a DynamoDB Gateway endpoint instead. In addition, the Point-in-Time Recovery (PITR) feature is not capable of restoring a DynamoDB table to a particular point in time in a different AWS account. If this functionality is needed, you have to use the AWS Backup service instead.
The option that says: Create a DynamoDB gateway endpoint. Set up a Network Access Control List (NACL) rule that allows outbound traffic to the dynamodb.us-east-1.amazonaws.com gateway endpoint. Use the built-in on-demand DynamoDB backups for cross-account backup and recovery is incorrect because using a Network Access Control List alone is not enough to prevent traffic traversing to the public Internet. Moreover, you cannot copy DynamoDB on-demand backups to a different account or Region.
The option that says: Create a DynamoDB interface endpoint. Set up a stateless rule using AWS Network Firewall to control all outbound traffic to only use the dynamodb.us-east-1.amazonaws.com endpoint. Integrate the DynamoDB table with Amazon Timestream to allow point-in-time recovery from a different AWS account is incorrect. Keep in mind that dynamodb.us-east-1.amazonaws.com is the public service endpoint for Amazon DynamoDB, so restricting outbound traffic to it does not keep database calls off the public internet. Since the application could already communicate with Amazon DynamoDB before the required architectural change, no firewalls (security group, NACL, etc.) are blocking that traffic, and adding a rule that merely allows outbound traffic to DynamoDB changes nothing. Furthermore, AWS Network Firewall has to be deployed and integrated with your Amazon VPC, which this option does not address. Finally, Amazon Timestream is a time-series database service for IoT and operational applications; you cannot integrate DynamoDB with Amazon Timestream for point-in-time data recovery.
A solutions architect is instructed to host a website that consists of HTML, CSS, and some JavaScript files. The web pages will display several high-resolution images. The website should have optimal loading times and be able to respond to high request rates.
Which of the following architectures can provide the most cost-effective and fastest loading experience?
A. Host the website using an Nginx server in an EC2 instance. Upload the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to end users
B. Upload the HTML, CSS, JavaScript and the images in a single bucket. Then enable website hosting. Create a CloudFront distribution and point the domain on the S3 website endpoint
C. Launch an Auto Scaling Group using an AMI that has a pre-configured Apache web server, then configure the scaling policy accordingly. Store the images in an Elastic Block Store. Then, point your instance's endpoint to AWS Global Accelerator
D. Host the website in an AWS Elastic Beanstalk environment. Upload the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to your end users
B. Upload the HTML, CSS, JavaScript and the images in a single bucket. Then enable website hosting. Create a CloudFront distribution and point the domain on the S3 website endpoint
Explanation:
Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. Additionally, you can use Amazon S3 to host a static website, where individual webpages contain fixed content. Because Amazon S3 is highly scalable and you only pay for what you use, you can start small and grow your application as you wish, with no compromise on performance or reliability.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. CloudFront can be integrated with Amazon S3 for fast delivery of data originating from an S3 bucket to your end-users. By design, delivering data out of CloudFront can be more cost-effective than delivering it from S3 directly to your users.
In this scenario, since we are only dealing with static content, we can leverage the static website hosting feature of S3 and improve the architecture further by integrating it with CloudFront. This way, users will be able to load both the web pages and the images faster than if we hosted them on a web server built from scratch.
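As a small sketch of this approach, the snippet below enables static website hosting on a hypothetical bucket with boto3; the resulting website endpoint is then used as the custom origin of a CloudFront distribution.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-static-site-bucket"  # hypothetical bucket holding the HTML/CSS/JS/images

# Turn on static website hosting for the bucket.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# The resulting website endpoint, e.g.
#   my-static-site-bucket.s3-website-us-east-1.amazonaws.com
# is then configured as a custom origin of a CloudFront distribution
# (cloudfront.create_distribution), and the site's domain name is
# pointed at that distribution.
```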
Hence, the correct answer is: Upload the HTML, CSS, JavaScript, and the images in a single bucket. Then enable website hosting. Create a CloudFront distribution and point the domain on the S3 website endpoint.
The option that says: Host the website using an Nginx server in an EC2 instance. Upload the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to end-users is incorrect. Creating your own web server to host a static website in AWS is a costly solution. Web Servers on an EC2 instance are usually used for hosting applications that require server-side processing (connecting to a database, data validation, etc.). Since static websites contain web pages with fixed content, we should use S3 website hosting instead.
The option that says: Launch an Auto Scaling Group using an AMI that has a pre-configured Apache web server, then configure the scaling policy accordingly. Store the images in an Elastic Block Store. Then, point your instance's endpoint to AWS Global Accelerator is incorrect. This is how static websites were served in the old days. Now, with S3 website hosting, we can serve static content from a durable, highly available, and scalable environment without managing any servers, and hosting a static website in S3 is cheaper than hosting it on an EC2 instance. In addition, using an Auto Scaling group to scale instances that host a static website is an over-engineered solution that carries unnecessary costs; S3 automatically scales to handle high request rates and you only pay for what you use.
The option that says: Host the website in an AWS Elastic Beanstalk environment. Upload the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to your end-users is incorrect. AWS Elastic Beanstalk simply sets up the infrastructure (EC2 instance, load balancer, auto-scaling group) for your application. It’s a more expensive and a bit of an overkill solution for hosting a bunch of client-side files.
A Solutions Architect is working for an online hotel booking firm with terabytes of customer data coming from the websites and applications. There is an annual corporate meeting where the Architect needs to present the booking behavior and acquire new insights from the customers’ data. The Architect is looking for a service to perform super-fast analytics on massive data sets in near real-time.
Which of the following services gives the Architect the ability to store huge amounts of data and perform quick and flexible queries on it?
A. Amazon DynamoDB
B. Amazon ElastiCache
C. Amazon Redshift
D. Amazon RDS
C. Amazon Redshift
Explanation:
Amazon Redshift is a fast, scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake. Redshift delivers ten times faster performance than other data warehouses by using machine learning, massively parallel query execution, and columnar storage on a high-performance disk.
You can use Redshift to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It also allows you to run complex analytic queries against terabytes to petabytes of structured and semi-structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution.
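For example, an analytic aggregation over the booking data could be run against the cluster through the Redshift Data API, as in the sketch below; the cluster, database, user, table, and column names are all hypothetical.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Run a typical analytic query over terabytes of booking events,
# e.g. bookings and average booking value per destination per month.
redshift_data.execute_statement(
    ClusterIdentifier="booking-analytics-cluster",  # hypothetical cluster
    Database="bookings",
    DbUser="analyst",
    Sql="""
        SELECT destination_country,
               DATE_TRUNC('month', booked_at) AS booking_month,
               COUNT(*)                        AS bookings,
               AVG(total_amount)               AS avg_booking_value
        FROM   booking_events
        GROUP  BY 1, 2
        ORDER  BY booking_month, bookings DESC;
    """,
)
```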
Hence, the correct answer is Amazon Redshift.
Amazon DynamoDB is incorrect because DynamoDB is a NoSQL key-value database designed for fast processing of small items that grow and change dynamically. If you need to scan large amounts of data (i.e., a lot of keys all in one query), the performance will not be optimal.
Amazon ElastiCache is incorrect because this is used to increase the performance, speed, and redundancy with which applications can retrieve data by providing an in-memory database caching system, and not for database analytical processes.
Amazon RDS is incorrect because this is mainly used for On-Line Transaction Processing (OLTP) applications and not for Online Analytics Processing (OLAP).
A company wants to organize the way it tracks its spending on AWS resources. A report that summarizes the total billing accrued by each department must be generated at the end of the month.
Which solution will meet the requirements?
A. Tag resources with the department name and configure a budget action in AWS Budget
B. Use AWS Cost Explorer to view spending and filter usage data by Resource
C. Create a Cost and Usage report for AWS services that each department is using
D. Tag resources with the department name and enable cost allocation tags
D. Tag resources with the department name and enable cost allocation tags
Explanation:
A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. For each resource, each tag key must be unique, and each tag key can have only one value. You can use tags to organize your resources and cost allocation tags to track your AWS costs on a detailed level.
After you or AWS applies tags to your AWS resources (such as Amazon EC2 instances or Amazon S3 buckets) and you activate the tags in the Billing and Cost Management console, AWS generates a cost allocation report as a comma-separated values (CSV) file with your usage and costs grouped by your active tags. You can apply tags that represent business categories (such as cost centers, application names, or owners) to organize your costs across multiple services.
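A minimal sketch of the tagging side of this solution is shown below; the resource IDs and the "Department" tag key are assumptions, and the key still has to be activated as a cost allocation tag before it appears in the report.

```python
import boto3

ec2 = boto3.client("ec2")

# Tag a couple of (hypothetical) resources with the department that owns them.
ec2.create_tags(
    Resources=["i-0123456789abcdef0", "vol-0123456789abcdef0"],
    Tags=[{"Key": "Department", "Value": "Finance"}],
)

# The "Department" key must then be activated as a cost allocation tag in the
# Billing and Cost Management console (or via the Cost Explorer
# UpdateCostAllocationTagsStatus API) before it shows up in cost reports.
```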
Hence, the correct answer is: Tag resources with the department name and enable cost allocation tags.
The option that says: Tag resources with the department name and configure a budget action in AWS Budget is incorrect. AWS Budgets only allows you to be alerted and run custom actions if your budget thresholds are exceeded.
The option that says: Use AWS Cost Explorer to view spending and filter usage data by Resource is incorrect. The Resource filter just lets you track costs on EC2 instances. This is quite limited compared with using the Cost Allocation Tags method.
The option that says: Create a Cost and Usage report for AWS services that each department is using is incorrect. The report must contain a breakdown of costs incurred by each department based on tags and not based on AWS services, which is what the Cost and Usage Report (CUR) contains.
A company has two On-Demand EC2 instances inside a Virtual Private Cloud (VPC) in the same Availability Zone, but they are deployed to different subnets. One EC2 instance is running a database, and the other EC2 instance runs a web application that connects to the database. You need to ensure that these two instances can communicate with each other for the system to work properly.
What are the things you have to check so that these EC2 instances can communicate inside the VPC? (Select TWO.)
A. Check if all security groups are set to allow the application host to communicate to the database on the right port and protocol
B. Check if both instances are the same instance class
C. Check the Network ACL if it allows communication between the two subnets
D. Check if the default route is set to a NAT instance or Internet Gateway (IGW) for them to communicate
E. Ensure that the EC2 instances are in the same Placement Group
A. Check if all security groups are set to allow the application host to communicate to the database on the right port and protocol
C. Check the Network ACL if it allows communication between the two subnets
Explanation:
First, the network ACLs should be properly set to allow communication between the two subnets. The security groups should also be properly configured so that the web server can communicate with the database server. Keep in mind that network ACLs are stateless, so both inbound and outbound rules must allow the traffic between the subnets, while security groups are stateful.
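The security-group part of this check can be expressed as a rule on the database security group that references the application's security group, as in the sketch below; the security group IDs and the MySQL port are assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

APP_SG = "sg-0aaa1111bbbb2222c"  # hypothetical web/application security group
DB_SG = "sg-0ddd3333eeee4444f"   # hypothetical database security group

# Allow the application instances to reach the database on its port (MySQL shown here).
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": APP_SG}],
        }
    ],
)
# The network ACLs of both subnets must also allow this traffic in both
# directions, since NACLs are stateless.
```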
Hence, these are the correct answers:
Check if all security groups are set to allow the application host to communicate to the database on the right port and protocol.
Check the Network ACL if it allows communication between the two subnets.
The option that says: Check if both instances are the same instance class is incorrect because the EC2 instances do not need to be of the same class in order to communicate with each other.
The option that says: Check if the default route is set to a NAT instance or Internet Gateway (IGW) for them to communicate is incorrect because a NAT instance or an Internet gateway is used for communication with the Internet, not for communication between instances inside the same VPC.
The option that says: Ensure that the EC2 instances are in the same Placement Group is incorrect because Placement Group is mainly used to provide low-latency network performance necessary for tightly-coupled node-to-node communication.
A business has a network of surveillance cameras installed within the premises of its data center. Management wants to leverage Artificial Intelligence to monitor and detect unauthorized personnel entering restricted areas. Should an unauthorized person be detected, the security team must be alerted via SMS.
Which solution satisfies the requirement?
A. Replace the existing cameras with AWS IoT. Upload a face detection model to the AWS IoT devices and send them over to AWS Control Tower for checking and notification
B. Use Amazon Kinesis Video to stream live feeds from the cameras. Use Amazon Rekognition to detect unauthorized personnel. Set the phone numbers of the security as subscribers to an SNS topic
C. Configure Amazon Elastic Transcoder to stream live feeds from the cameras. Use Amazon Kendra to detect unauthorized personnel. Set the phone numbers of the security as subscribers to an SNS topic
D. Set up Amazon Managed Service for Prometheus to stream live feeds from the cameras. Use Amazon Fraud Detector to detect unauthorized personnel. Set the phone numbers of the security as subscribers to an SNS topic
B. Use Amazon Kinesis Video to stream live feeds from the cameras. Use Amazon Rekognition to detect unauthorized personnel. Set the phone numbers of the security as subscribers to an SNS topic
Explanation:
Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices.
Amazon Rekognition Video can detect objects, scenes, faces, celebrities, text, and inappropriate content in videos. You can also search for faces appearing in a video using your own repository or collection of face images.
Combining these two services lets you build an intruder alert system based on face recognition.
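As a rough sketch of how the two services are wired together, the snippet below creates and starts a Rekognition stream processor that searches faces from a Kinesis Video stream against a face collection of authorized personnel; all ARNs, names, and the match threshold are hypothetical placeholders.

```python
import boto3

rekognition = boto3.client("rekognition")

# Hypothetical ARNs: the Kinesis Video stream fed by the cameras, a Kinesis
# data stream for face-search results, and an IAM role for Rekognition.
rekognition.create_stream_processor(
    Name="restricted-area-monitor",
    Input={"KinesisVideoStream": {
        "Arn": "arn:aws:kinesisvideo:us-east-1:111111111111:stream/restricted-cams/123"
    }},
    Output={"KinesisDataStream": {
        "Arn": "arn:aws:kinesis:us-east-1:111111111111:stream/face-matches"
    }},
    RoleArn="arn:aws:iam::111111111111:role/RekognitionStreamRole",
    Settings={"FaceSearch": {
        "CollectionId": "authorized-personnel",  # collection of known/authorized faces
        "FaceMatchThreshold": 90.0,
    }},
)
rekognition.start_stream_processor(Name="restricted-area-monitor")

# A small consumer of the "face-matches" data stream can then call sns.publish()
# on a topic whose SMS subscribers are the security team whenever a detected
# face has no match in the authorized-personnel collection.
```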
Hence, the correct answer is: Use Amazon Kinesis Video to stream live feeds from the cameras. Use Amazon Rekognition to detect unauthorized personnel. Set the phone numbers of the security as subscribers to an SNS topic.
The option that says: Configure Amazon Elastic Transcoder to stream live feeds from the cameras. Use Amazon Kendra to detect unauthorized personnel. Set the phone numbers of the security as subscribers to an SNS topic is incorrect. Amazon Elastic Transcoder just allows you to convert media files from one format to another. Also, Amazon Kendra can’t be used for face detection as it’s just an intelligent search service.
The option that says: Replace the existing cameras with AWS IoT. Upload a face detection model to the AWS IoT devices and send them over to AWS Control Tower for checking and notification is incorrect. AWS IoT provides cloud services and device software that connect your IoT devices to other devices and to AWS cloud services; it is not a physical camera and does not perform face detection on its own. AWS Control Tower is primarily used to set up and govern a secure multi-account AWS environment, not to receive video feeds or send notifications.
The option that says: Set up Amazon Managed Service for Prometheus to stream live feeds from the cameras. Use Amazon Fraud Detector to detect unauthorized personnel. Set the phone numbers of the security as subscribers to an SNS topic is incorrect. Amazon Managed Service for Prometheus is a Prometheus-compatible monitoring and alerting service that makes it easy to monitor containerized applications and infrastructure at scale; it cannot be used to stream live video feeds. Amazon Fraud Detector is a fully managed service that identifies potentially fraudulent online activities such as online payment fraud and fake account creation; it is not capable of detecting unauthorized personnel from live video feeds.