AWSExam_2 Flashcards
Your IT Director instructed you to ensure that all of the AWS resources in your VPC don’t go beyond their respective service limits. You should prepare a system that provides real-time guidance for provisioning your resources in adherence to AWS best practices.
Which of the following is the MOST appropriate service to use to satisfy this task?
AWS Trusted Advisor
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.
What is Amazon Inspector?
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
You are a Solutions Architect working for a startup which is currently migrating their production environment to AWS. Your manager asked you to set up access to the AWS console using Identity Access Management (IAM). You have created 5 users for your system administrators using the AWS CLI.
What further steps do you need to take to enable your system administrators to get access to the AWS console?
Provide a password for each user created and give these passwords to your system administrators.
The AWS Management Console is the web interface used to manage your AWS resources using your web browser. To access this, your users should have a password that they can use to log in to the web console.
You have EC2 instances running on your VPC. You have both UAT and production EC2 instances running. You want to ensure that employees who are responsible for the UAT instances don’t have access to work on the production instances to minimize security risks. Which of the following would be the best way to achieve this?
Define the tags on the UAT and production servers and add a condition to the IAM policy which allows access to specific tags.
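A rough sketch of such a policy created with boto3 (the tag key Environment, its value, and the policy name are assumptions for illustration):

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical policy: allow EC2 actions only on instances tagged Environment=UAT
uat_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "UAT"}},
        }
    ],
}

iam.create_policy(
    PolicyName="UATInstanceAccess",              # placeholder name
    PolicyDocument=json.dumps(uat_only_policy),
)
```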
A leading e-commerce company is in need of a storage solution that can be accessed by 1000 Linux servers in multiple availability zones. The service should be able to handle the rapidly changing data at scale while still maintaining high performance. It should also be highly durable and highly available whenever the servers will pull data from it, with little need for management. As the Solutions Architect, which of the following services is the most cost-effective choice that you should use to meet the above requirement?
EFS
In this scenario, the keywords are rapidly changing data and 1000 Linux servers.
Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. EFS provides the same level of high availability and high scalability as S3; however, this service is more suitable for scenarios that require a POSIX-compatible file system or that involve storing rapidly changing data.
You are assigned to design a highly available architecture in AWS. You have two target groups with three EC2 instances each, which are added to an Application Load Balancer. In the security group of the EC2 instance, you have verified that the port 80 for HTTP is allowed. However, the instances are still showing out of service from the load balancer. What could be the root cause of this issue?
- The wrong instance type was used for the EC2 instance
- The instances are using the wrong AMI
- The health check configuration is not properly defined
- The wrong subnet was used in your VPC
The health check configuration is not properly defined
You are working as an IT Consultant for a large media company where you are tasked to design a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this S3 bucket to immediately receive over 2000 PUT requests and 3500 GET requests per second at peak hour. What should you do to ensure optimal performance?
Do nothing. Amazon S3 will automatically manage performance at this scale.
Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly.
A company has both an on-premises data center as well as an AWS cloud infrastructure. They store their graphics, audio, video, and other multimedia assets primarily in their on-premises storage server and use an S3 Standard storage class bucket as a backup. Their data is heavily used for only a week (7 days), but after that period it will be infrequently used by their customers. You are instructed to save storage costs in AWS yet maintain the ability to fetch their media assets in a matter of minutes for a surprise annual data audit, which will be conducted both on-premises and on their cloud storage. Which of the following options should you implement to meet the above requirement? (Choose 2)
- set a lifecycle policy in the bucket to transition to S3 - IA after 30 days
- set a lifecycle policy in the bucket to transition the data to S3 - OneZone IA after one week (7 days)
- set a lifecycle policy in the bucket to transition to S3 Glacier Deep Archive after one week (7 days)
- set a lifecycle policy to transition to S3 - IA after one week (7 days)
- set a lifecycle policy to transition to Glacier after one week (7 days)
- set a lifecycle policy in the bucket to transition to S3 - IA after 30 days
- ⇒ Objects must be stored at least 30 days in S3 Standard before you can transition them to S3 IA or S3 OneZone IA
- set a lifecycle policy to transition to Glacier after one week (7 days)
- ⇒ can retrieve data within minutes (a lifecycle configuration sketch follows below)
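A minimal boto3 sketch of the Glacier transition rule (the bucket name is a placeholder; a Standard-IA transition would be a separate rule subject to the 30-day minimum noted above):

```python
import boto3

s3 = boto3.client("s3")

# Transition all objects to Glacier 7 days after creation (bucket name is hypothetical)
s3.put_bucket_lifecycle_configuration(
    Bucket="media-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-one-week",
                "Filter": {"Prefix": ""},        # apply to every object in the bucket
                "Status": "Enabled",
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
                # A STANDARD_IA transition would require Days >= 30
            }
        ]
    },
)
```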
You are setting up a cost-effective architecture for a log processing application which has frequently accessed, throughput-intensive workloads with large, sequential I/O operations. The application should be hosted in an already existing On-Demand EC2 instance in your VPC. You have to attach a new EBS volume that will be used by the application. Which of the following is the most suitable EBS volume type that you should use in this scenario?
EBS throughput optimized HDD (st1)
Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable st1 volumes are not supported.
Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed to support frequently accessed data. (Cold HDD for less frequently accessed workloads)
You have an existing On-demand EC2 instance and you are planning to create a new EBS volume that will be attached to this instance. The data that will be stored are confidential medical records so you have to make sure that the data is protected. How can you secure the data at rest of the new EBS volume that you will create?
Create an encrypted EBS volume by selecting the encryption checkbox when creating the volume, and attach it to the instance.
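The same thing done programmatically, as a boto3 sketch (the Availability Zone, size, and instance ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an encrypted gp2 volume (values below are placeholders)
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                 # GiB
    VolumeType="gp2",
    Encrypted=True,           # data at rest is encrypted with the default KMS key
)

ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    Device="/dev/sdf",
)
```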
You created a new CloudFormation template that creates 4 EC2 instances connected to one Elastic Load Balancer (ELB). Which section of the template should you configure to get the DNS hostname of the ELB upon the creation of the AWS stack?
Outputs
Outputs is an optional section of the CloudFormation template that describes the values that are returned whenever you view your stack’s properties.
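Assuming the template declares an output for the load balancer's DNS name (for example via Fn::GetAtt), the value can be read back after stack creation; a boto3 sketch, where the stack name and output key are assumptions:

```python
import boto3

cfn = boto3.client("cloudformation")

# Wait for the stack to finish creating, then read its Outputs section
cfn.get_waiter("stack_create_complete").wait(StackName="web-tier-stack")

stack = cfn.describe_stacks(StackName="web-tier-stack")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

print(outputs["LoadBalancerDNSName"])   # hypothetical output key declared in the template
```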
An On-Demand EC2 instance is launched into a VPC subnet with the Network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group has an inbound rule to allow SSH from any IP address and does not have any outbound rules. In this scenario, what are the changes needed to allow SSH connection to the instance?
The outbound network ACL needs to be modified to allow outbound traffic
In order for you to establish an SSH connection from your home computer to your EC2 instance, you need to do the following:
- On the Security Group, add an Inbound Rule to allow SSH traffic to your EC2 instance. (Security groups are stateful, so the return traffic is allowed automatically.)
- On the NACL, add an Inbound Rule to allow SSH traffic and an Outbound Rule that allows the return traffic on the ephemeral port range (1024-65535), since network ACLs are stateless.
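A boto3 sketch of the missing outbound NACL rule (the ACL ID and rule number are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# NACLs are stateless, so the return traffic for the SSH session must be allowed
# explicitly on the ephemeral port range.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # placeholder ACL ID
    RuleNumber=100,
    Protocol="6",                           # TCP
    RuleAction="allow",
    Egress=True,                            # outbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)
```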
An investment bank has a distributed batch processing application which is hosted in an Auto Scaling group of Spot EC2 instances with an SQS queue. You configured your components to use client-side buffering so that the calls made from the client will be buffered first and then sent as a batch request to SQS. What is the period of time during which the SQS queue prevents other consuming components from receiving and processing a message?
Visibility Timeout
Immediately after the message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours.
A web application is deployed in an On-Demand EC2 instance in your VPC. There is an issue with the application which requires you to connect to it via an SSH connection. Which of the following is needed in order to access an EC2 instance from the Internet? (Choose 3)
- An Internet gateway
- A Private IP address attached to the instance
- A Public IP address attached to the instance
- a Private Elastic IP address attached to the instance
- A route entry to the internet gateway in the Route table of the VPC
- a VPN peering connection
- An Internet gateway
- A Public IP address attached to the instance
- A route entry to the internet gateway in the Route table of the VPC
An e-commerce application is using a fanout messaging pattern for its order management system. For every order, it sends an Amazon SNS message to an SNS topic, and the message is replicated and pushed to multiple Amazon SQS queues for parallel asynchronous processing. A Spot EC2 instance retrieves the message from each SQS queue and processes the message. There was an incident that while an EC2 instance is currently processing a message, the instance was abruptly terminated, and the processing was not completed in time. In this scenario, what happens to the SQS message?
When the message visibility timeout expires, the message becomes available for processing by other EC2 instances.
Because Amazon SQS is a distributed system, there’s no guarantee that the consumer actually receives the message (for example, due to a connectivity issue, or due to an issue in the consumer application). Thus, the consumer must delete the message from the queue after receiving and processing it.
Immediately after the message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours.
What are Dead Letter Queues?
Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can’t be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn’t succeed.
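A boto3 sketch of wiring a source queue to a dead-letter queue (the queue URL, DLQ ARN, and maxReceiveCount are placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs")

# Route messages that fail processing 5 times to a dead-letter queue
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders-queue",
    Attributes={
        "RedrivePolicy": json.dumps(
            {
                "deadLetterTargetArn": "arn:aws:sqs:us-east-1:111122223333:orders-dlq",
                "maxReceiveCount": "5",
            }
        )
    },
)
```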
You just joined a large tech company with an existing Amazon VPC. When reviewing the Auto Scaling events, you noticed that their web application is scaling up and down multiple times within the hour. What design change could you make to optimize cost while preserving elasticity?
Change the cooldown period of the Auto Scaling group and set the CloudWatch metric to a higher threshold.
Since the application is scaling up and down multiple times within the hour, the issue lies in the cooldown period of the Auto Scaling group.
The cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that it doesn’t launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities.
When you manually scale your Auto Scaling group, the default is not to wait for the cooldown period, but you can override the default and honor the cooldown period. If an instance becomes unhealthy, the Auto Scaling group does not wait for the cooldown period to complete before replacing the unhealthy instance.
You are working as a Solutions Architect for a fast-growing startup which just started operations during the past 3 months. They currently have an on-premises Active Directory and 10 computers. To save costs in procuring physical workstations, they decided to deploy virtual desktops for their new employees in a virtual private cloud in AWS. The new cloud infrastructure should leverage the existing security controls in AWS but should still be able to communicate with their on-premises network. Which set of AWS services will you use to meet these requirements?
- AWS Directory Services, VPN connection and AWS IAM
- AWS Directory Services, VPN Connection and Amazon WorkSpaces
- AWS Directory Services, VPN Connection and ClassicLink
- AWS Directory Services, VPN connection and S3
AWS Directory Services, VPN Connection and Amazon WorkSpaces
First, you need a VPN connection to connect the VPC and your on-premises network. Second, you need AWS Directory Services to integrate with your on-premises Active Directory and lastly, you need to use Amazon WorkSpaces to create the needed virtual desktops in your VPC.
You are running an EC2 instance store-based instance. You shut it down and then start the instance. You noticed that the data which you have saved earlier is no longer available. What might be the cause of this?
The EC2 instance was using instance store volumes, which are ephemeral and only live for the life of the instance.
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
You are working for a top IT Consultancy that has a VPC with two On-Demand EC2 instances with Elastic IP addresses. You were notified that your EC2 instances are currently under SSH brute force attacks over the Internet. Their IT Security team has identified the IP addresses where these attacks originated. You have to immediately implement a temporary fix to stop these attacks while the team is setting up AWS WAF, GuardDuty, and AWS Shield Advanced to permanently fix the security vulnerability. Which of the following provides the quickest way to stop the attacks to your instances?
Block the IP addresses in the Network Access Control List
(Removing the Internet Gateway from the VPC is incorrect because doing this will also make your EC2 instance inaccessible to you as it will cut down the connection to the Internet.)
What is a static Anycast IP address for?
Assigning a static Anycast IP address to each EC2 instance is primarily used by AWS Global Accelerator to enable organizations to seamlessly route traffic to multiple regions and improve availability and performance for their end-users.
You have a web application hosted on a fleet of EC2 instances located in two Availability Zones that are all placed behind an Application Load Balancer. As a Solutions Architect, you have to add a health check configuration to ensure your application is highly-available. Which health checks will you implement?
HTTP or HTTPS health check
The type of ELB that is mentioned here is an Application Elastic Load Balancer. This is used if you want a flexible feature set for your web applications with HTTP and HTTPS traffic. However, it only allows 2 types of health checks: HTTP and HTTPS.
When are TCP health checks offered?
TCP health checks are only offered in the Network Load Balancer. It is used if you need ultra-high performance.
You are implementing a hybrid architecture for your company where you are connecting their Amazon Virtual Private Cloud (VPC) to their on-premises network. Which of the following can be used to create a private connection between the VPC and your company’s on-premises network?
Direct Connect
Direct Connect creates a direct, private connection from your on-premises data center to AWS, letting you establish a 1-gigabit or 10-gigabit dedicated network connection using Ethernet fiber-optic cable.
You are consulted by a multimedia company that needs to deploy web services to an AWS region which they have never used before. The company currently has an IAM role for their Amazon EC2 instance which permits the instance to access Amazon DynamoDB. They want their EC2 instances in the new region to have the exact same privileges. What should you do to accomplish this?
Assign the existing IAM role to instances in the new region
In this scenario, the company has an existing IAM role, hence you don’t need to create a new one. IAM is a global service and roles are available to all regions; hence, all you have to do is assign the existing IAM role to the instances in the new region.
A company has 10 TB of infrequently accessed financial data files that would need to be stored in AWS. These data would be accessed infrequently during specific weeks when they are retrieved for auditing purposes. The retrieval time is not strict as long as it does not exceed 24 hours. Which of the following would be a secure, durable, and cost-effective solution for this scenario?
- upload the data directly to Amazon Glacier through the AWS Management console
- upload the data to S3 and set a lifecycle policy to transition to Glacier after 0 days
- upload the data to S3 and transition to S3 OneZone IA
- upload the data to S3 and transition to S3 IA
upload the data to S3 and set a lifecycle policy to transition to Glacier after 0 days
Glacier has a management console which you can use to create and delete vaults. However, you cannot directly upload archives to Glacier by using the management console. To upload data, such as photos, videos, and other documents, you must either use the AWS CLI or write code to make requests, by using either the REST API directly or by using the AWS SDKs.
You are managing a global news website which has very high traffic. To improve the performance, you redesigned the application architecture to use a Classic Load Balancer with an Auto Scaling Group in multiple Availability Zones. However, you noticed that one of the Availability Zones is not receiving any traffic. What is the root cause of this issue?
- by default, you are not allowed to use a load balancer with multi-AZ. you have to send a request form to AWS in order for this to work
- the AZ is not properly added to the load balancer which is why it is not receiving any traffic
- auto scaling should be disabled for the load balancer to route the traffic to multiple AZs
- the classic load balancer is down
the AZ is not properly added to the load balancer which is why it is not receiving any traffic
In this scenario, one of the Availability Zones is not properly added to the Elastic load balancer. Hence, that Availability Zone is not receiving any traffic.
You can set up your load balancer in EC2-Classic to distribute incoming requests across EC2 instances in a single Availability Zone or multiple Availability Zones. First, launch EC2 instances in all the Availability Zones that you plan to use. Next, register these instances with your load balancer. Finally, add the Availability Zones to your load balancer. After you add an Availability Zone, the load balancer starts routing requests to the registered instances in that Availability Zone. Note that you can modify the Availability Zones for your load balancer at any time.
By default, the load balancer routes requests evenly across its Availability Zones. To route requests evenly across the registered instances in the Availability Zones, enable cross-zone load balancing.
You have a web application running on EC2 instances which processes sensitive financial information. All of the data are stored on an Amazon S3 bucket. The financial information is accessed by users over the Internet. The security team of the company is concerned that the Internet connectivity to Amazon S3 is a security risk. In this scenario, what will you do to resolve this security concern?
Change the web architecture to access the financial data in your S3 bucket through a Gateway VPC endpoint.
Take note that your VPC lives within a larger AWS network and the services, such as S3, DynamoDB, RDS and many others, are located outside of your VPC, but still within the AWS network. By default, the connection that your VPC uses to connect to your S3 bucket or any other service traverses the public Internet via your Internet Gateway.
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
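A boto3 sketch of creating such a gateway endpoint for S3 (the VPC ID, route table ID, and the region in the service name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3; traffic from the VPC to S3 stays on the AWS network
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],   # route entries to S3 are added automatically
)
```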
You are planning to migrate a MySQL database from your on-premises data center to your AWS Cloud. This database will be used by a legacy batch application which has steady-state workloads in the morning but has its peak load at night for the end-of-day processing. You need to choose an EBS volume which can handle a maximum of 450 GB of data and can also be used as the system boot volume for your EC2 instance. Which of the following is the most cost-effective storage type to use in this scenario?
Amazon EBS general purpose SSD (gp2)
The EBS volume that you should use has to handle a maximum of 450 GB of data and can also be used as the system boot volume for your EC2 instance. Since HDD volumes cannot be used as a bootable volume, we can narrow down our options by selecting SSD volumes. In addition, SSD volumes are more suitable for transactional database workloads.
General Purpose: These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 10,000 IOPS (at 3,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size.
Your company has a web-based ticketing service that utilizes Amazon SQS and a fleet of EC2 instances. The EC2 instances that consume messages from the SQS queue are configured to poll the queue as often as possible to keep end-to-end throughput as high as possible. You noticed that polling the queue in tight loops is using unnecessary CPU cycles, resulting in increased operational costs due to empty responses. In this scenario, what will you do to make the system more cost-effective?
Configure the SQS queue to use long polling by setting ReceiveMessageWaitTimeSeconds to a number greater than zero.
The ReceiveMessageWaitTimeSeconds is the queue attribute that determines whether you are using Short or Long polling. By default, its value is zero which means it is using Short polling. If it is set to a value greater than zero, then it is Long polling.
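A boto3 sketch of enabling long polling, both at the queue level and per receive call (the queue URL is a placeholder):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/tickets-queue"  # placeholder

# Enable long polling at the queue level (maximum wait is 20 seconds)
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# Long polling can also be requested per call; the call returns as soon as a
# message arrives, or after 20 seconds if the queue stays empty, which avoids
# paying for a stream of empty responses.
messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
```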
A health organization is using a large Dedicated EC2 instance with multiple EBS volumes to host its health records web application. The EBS volumes must be encrypted due to the confidentiality of the data that they are handling and also to comply with the HIPAA (Health Insurance Portability and Accountability Act) standard. In EBS encryption, what service does AWS use to secure the volume’s data at rest? (Choose 2)
- by using Amazon-managed keys in AWS KMS
- by using a password stored in CloudHSM
- by using your own keys in AWS KMS
- by using the SSL certificates provided by the AWS Certificate Manager
- by using S3 client-side encryption
- by using S3 server-side encryption
The correct answers are: using your own keys in AWS Key Management Service (KMS) and using Amazon-managed keys in AWS Key Management Service (KMS).
Amazon EBS encryption offers seamless encryption of EBS data volumes, boot volumes, and snapshots, eliminating the need to build and maintain a secure key management infrastructure. EBS encryption enables data at rest security by encrypting your data using Amazon-managed keys, or keys you create and manage using the AWS Key Management Service (KMS).
(using S3 server-side or client-side encryption relates only to S3)
A Solutions Architect is migrating several Windows-based applications to AWS that require a scalable file system storage for high-performance computing (HPC). The storage service must have full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS). Which of the following is the MOST suitable storage service that the Architect should use to fulfill this scenario?
Amazon FSx for Windows File Server
Amazon FSx provides fully managed third-party file systems. Amazon FSx provides you with the native compatibility of third-party file systems with feature sets for workloads such as Windows-based storage, high-performance computing (HPC), machine learning, and electronic design automation (EDA). You don’t have to worry about managing file servers and storage, as Amazon FSx automates the time-consuming administration tasks such as hardware provisioning, software configuration, patching, and backups. Amazon FSx integrates the file systems with cloud-native AWS services, making them even more useful for a broader set of workloads.
(Amazon FSx for Lustre is incorrect because this service doesn’t support the Windows-based applications as well as Windows servers.)
The social media company that you are working for needs to capture the detailed information of all HTTP requests that went through their public-facing application load balancer every five minutes. They want to use this data for analyzing traffic patterns and for troubleshooting their web applications in AWS. Which of the following options meet the customer requirements?
Enable access logs on the Application Load Balancer.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time.
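A boto3 sketch of turning on ALB access logs (the load balancer ARN and bucket name are placeholders; the bucket policy must also allow Elastic Load Balancing to write to it):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable access logging on the ALB and point it at an existing S3 bucket
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/public-alb/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "alb-access-logs-bucket"},
        {"Key": "access_logs.s3.prefix", "Value": "public-alb"},
    ],
)
```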
A company is planning to launch a High Performance Computing (HPC) cluster in AWS that does Computational Fluid Dynamics (CFD) simulations. The solution should scale-out their simulation jobs to experiment with more tunable parameters for faster and more accurate results. The cluster is composed of Windows servers hosted on t3a.medium EC2 instances. As the Solutions Architect, you should ensure that the architecture provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. Which is the MOST suitable and cost-effective solution that the Architect should implement to achieve the above requirements?
Enable Enhanced Networking with Elastic Network Adapter (ENA) on the Windows EC2 instances.
Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.
Amazon EC2 provides enhanced networking capabilities through the Elastic Network Adapter (ENA). It supports network speeds of up to 100 Gbps for supported instance types. Elastic Network Adapters (ENAs) provide traditional IP networking features that are required to support VPC networking.
An Elastic Fabric Adapter (EFA) is simply an Elastic Network Adapter (ENA) with added capabilities. It provides all of the functionality of an ENA, with additional OS-bypass functionality. OS-bypass is an access model that allows HPC and machine learning applications to communicate directly with the network interface hardware to provide low-latency, reliable transport functionality.
The OS-bypass capabilities of EFAs are not supported on Windows instances. If you attach an EFA to a Windows instance, the instance functions as an Elastic Network Adapter, without the added EFA capabilities.
Hence, the correct answer is to enable Enhanced Networking with Elastic Network Adapter (ENA) on the Windows EC2 Instances.
You are working for a large financial company. In their enterprise application, they want to apply a group of database-specific settings to their Relational Database Instances.
Which of the following options can be used to easily apply the settings in one go for all of the Relational database instances?
Parameter Groups
You manage your DB engine configuration through the use of parameters in a DB parameter group. DB parameter groups act as a container for engine configuration values that are applied to one or more DB instances.
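A boto3 sketch of creating a parameter group, tuning a setting, and attaching it to an instance (the group name, engine family, parameter, and instance identifier are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Create a parameter group that holds the shared database-specific settings
rds.create_db_parameter_group(
    DBParameterGroupName="enterprise-mysql-settings",
    DBParameterGroupFamily="mysql8.0",
    Description="Shared database-specific settings",
)

# Change a setting once; it applies to every instance that uses the group
rds.modify_db_parameter_group(
    DBParameterGroupName="enterprise-mysql-settings",
    Parameters=[
        {"ParameterName": "max_connections", "ParameterValue": "500", "ApplyMethod": "pending-reboot"}
    ],
)

# Attach the group to an existing DB instance
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    DBParameterGroupName="enterprise-mysql-settings",
)
```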
A Junior DevOps Engineer deployed a large EBS-backed EC2 instance to host a NodeJS web app in AWS which was developed by an IT contractor. He properly configured the security group and used a key pair named “tutorialsdojokey” which has a tutorialsdojokey.pem private key file. The EC2 instance works as expected and the junior DevOps engineer can connect to it using an SSH connection. The IT contractor was also given the key pair and he has made various changes to the instance as well as to the files located in the .ssh folder to make the NodeJS app work. After a few weeks, the IT contractor and the junior DevOps engineer cannot connect to the EC2 instance anymore, even with a valid private key file. They are constantly getting a “Server refused our key” error even though their private key is valid.
In this scenario, which one of the following options is not a possible reason for this issue?
- the SSH private key that you are using has a file permission of 0777
- you don’t have permissions for the .ssh folder
- you’re using an SSH private key but the corresponding public key is not in the authorized_keys file
- you don’t have permissions for your authorized_keys file
All of the options here are correct except for the option that says: The SSH private key that you are using has a file permission of 0777 because if the private key that you are using has a file permission of 0777, then it will throw an “Unprotected Private Key File” error and not a “Server refused our key” error.
You might be unable to log into an EC2 instance if:
- You’re using an SSH private key but the corresponding public key is not in the authorized_keys file.
- You don’t have permissions for your authorized_keys file.
- You don’t have permissions for the .ssh folder.
- Your authorized_keys file or .ssh folder isn’t named correctly.
- Your authorized_keys file or .ssh folder was deleted.
- Your instance was launched without a key, or it was launched with an incorrect key.
You have just launched a new API Gateway service which uses AWS Lambda as a serverless computing service. In what type of protocol will your API endpoint be exposed?
HTTPS
All of the APIs created with Amazon API Gateway expose HTTPS endpoints only (unencrypted, HTTP endpoints are not supported)
In a startup company you are working for, you are asked to design a web application that requires a NoSQL database that has no limit on the storage size for a given table. The startup is still new in the market and it has very limited human resources who can take care of the database infrastructure.
Which is the most suitable service that you can implement that provides a fully managed, scalable and highly available NoSQL service?
DynamoDB
Your manager instructed you to use Route 53 instead of an ELB to load balance the incoming request to your web application. The system is deployed to two EC2 instances to which the traffic needs to be distributed to. You want to set a specific percentage of traffic to go to each instance. Which routing policy would you use?
Weighted
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes including load balancing and testing new versions of software. You can set a specific percentage of how much traffic will be allocated to the resource by specifying the weights.
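A boto3 sketch of two weighted A records that split traffic roughly 75/25 between the instances (the hosted zone ID, record name, and IP addresses are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# Two weighted records for the same name; Route 53 distributes queries in
# proportion to each record's weight.
route53.change_resource_record_sets(
    HostedZoneId="Z0ABCDEFGHIJKL",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "instance-1",
                    "Weight": 75,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "instance-2",
                    "Weight": 25,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.11"}],
                },
            },
        ]
    },
)
```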
The start-up company that you are working for has a batch job application that is currently hosted on an EC2 instance. It is set to process messages from a queue created in SQS with default settings. You configured the application to process the messages once a week. After 2 weeks, you noticed that not all messages are being processed by the application. What is the root cause of this issue?
Amazon SQS has automatically deleted the messages that have been in a queue for more than the maximum message retention period.
Amazon SQS automatically deletes messages that have been in a queue for more than the maximum message retention period. The default message retention period is 4 days. Since the queue is configured to the default settings and the batch job application only processes the messages once a week, the messages that are in the queue for more than 4 days are deleted. This is the root cause of the issue.
To fix this, you can increase the message retention period to a maximum of 14 days using the SetQueueAttributes action.
An application is hosted in an On-Demand EC2 instance and is using Amazon SDK to communicate to other AWS services such as S3, DynamoDB, and many others. As part of the upcoming IT audit, you need to ensure that all API calls to your AWS resources are logged and durably stored. Which is the most suitable service that you should use to meet this requirement?
AWS CloudTrail
AWS CloudTrail increases visibility into your user and resource activity by recording AWS Management Console actions and API calls. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred.
A client is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The client also uses Amazon Route 53 to manage their public DNS. How should the client configure the DNS zone apex record to point to the load balancer?
- Create an alias for CNAME record to the load balancer DNS name
- create a CNAME record pointing to the load balancer DNS name
- Create an A record pointing to the IP address of the load balancer
- Create an A record aliased to the load balancer DNS name
Create an A record aliased to the load balancer DNS name
Route 53 supports the alias resource record set, which lets you map your zone apex (e.g. tutorialsdojo.com) DNS name to your load balancer DNS name. IP addresses associated with Elastic Load Balancing can change at any time due to scaling or software updates. Route 53 responds to each request for an Alias resource record set with one IP address for the load balancer.
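A boto3 sketch of creating such an alias A record at the zone apex (the load balancer name, hosted zone ID, and domain are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# Look up the load balancer's DNS name and its canonical hosted zone ID
lb = elbv2.describe_load_balancers(Names=["public-web-alb"])["LoadBalancers"][0]

# Alias A record at the zone apex pointing at the load balancer
route53.change_resource_record_sets(
    HostedZoneId="Z0ABCDEFGHIJKL",   # placeholder public hosted zone for example.com
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": lb["CanonicalHostedZoneId"],
                        "DNSName": lb["DNSName"],
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```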
A website is running on an Auto Scaling group of On-Demand EC2 instances which are abruptly getting terminated from time to time. To automate the monitoring process, you started to create a simple script which uses the AWS CLI to find the root cause of this issue. Which of the following is the most suitable command to use?
- aws ec2 describe-images
- aws ec2 get-console-screenshot
- aws ec2 describe-volume-status
- aws ec2 describe-instances
aws ec2 describe-instances
The describe-instances command shows the status of the EC2 instances including the recently terminated instances. It also returns a StateReason of why the instance was terminated.
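A boto3 equivalent of that check, reading the StateReason of a recently terminated instance (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Look up the instance and print why it left the running state
response = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = response["Reservations"][0]["Instances"][0]

print(instance["State"]["Name"])                        # e.g. "terminated"
print(instance.get("StateReason", {}).get("Message"))   # human-readable reason for the state change
```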
A news company is planning to use a Hardware Security Module (CloudHSM) in AWS for secure key storage of their web applications. You have launched the CloudHSM cluster but after just a few hours, a support staff mistakenly attempted to log in as the administrator three times using an invalid password in the Hardware Security Module. This has caused the HSM to be zeroized, which means that the encryption keys on it have been wiped. Unfortunately, you did not have a copy of the keys stored anywhere else.
How can you obtain a new copy of the keys that you have stored on Hardware Security Module?
the keys are lost permanently if you did not have a copy
Attempting to log in as the administrator more than twice with the wrong password zeroizes your HSM appliance. When an HSM is zeroized, all keys, certificates, and other data on the HSM are destroyed. You can use your cluster’s security group to prevent an unauthenticated user from zeroizing your HSM.
Amazon does not have access to your keys nor to the credentials of your Hardware Security Module (HSM) and therefore has no way to recover your keys if you lose your credentials. Amazon strongly recommends that you use two or more HSMs in separate Availability Zones in any production CloudHSM Cluster to avoid loss of cryptographic keys.
You recently launched a fleet of on-demand EC2 instances to host a massively multiplayer online role-playing game (MMORPG) server in your VPC. The EC2 instances are configured with Auto Scaling and AWS Systems Manager. What can you use to configure your EC2 instances without having to establish an RDP or SSH connection to each instance?
Run Command
You can use Run Command from the console to configure instances without having to log in to each instance.
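A boto3 sketch of targeting every instance in the Auto Scaling group through its group tag (the group name and command are placeholders; the instances need the SSM agent and an instance profile that permits Systems Manager):

```python
import boto3

ssm = boto3.client("ssm")

# Run a shell command on every instance that carries the Auto Scaling group tag
ssm.send_command(
    Targets=[{"Key": "tag:aws:autoscaling:groupName", "Values": ["mmorpg-game-servers"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["sudo systemctl restart game-server"]},   # placeholder command
)
```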
You are working for a data analytics startup that collects clickstream data and stores them in an S3 bucket. You need to launch an AWS Lambda function to trigger your ETL jobs to run as soon as new data becomes available in Amazon S3. Which of the following services can you use as an extract, transform, and load (ETL) service in this scenario?
AWS Glue
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g. table definition and schema) in the AWS Glue Data Catalog.
A financial analytics application that collects, processes and analyzes stock data in real-time is using Kinesis Data Streams. The producers continually push data to Kinesis Data Streams while the consumers process the data in real time. In Amazon Kinesis, where can the consumers store their results? (Choose 2)
- S3
- Redshift
Consumers (such as a custom application running on Amazon EC2, or an Amazon Kinesis Data Firehose delivery stream) can store their results using an AWS service such as Amazon DynamoDB, Amazon Redshift, or Amazon S3.
A leading bank has an application that is hosted on an Auto Scaling group of EBS-backed EC2 instances. As the Solutions Architect, you need to provide the ability to fully restore the data stored in their EBS volumes by using EBS snapshots. Which of the following approaches provide the lowest cost for Amazon Elastic Block Store snapshots?
- just maintain a single snapshot of the EBS volume since the latest snapshot is both incremental and complete
- maintain a volume snapshot, subsequent snapshots will overwrite one another
- maintain two snapshots, the original snapshot and the latest incremental snapshot
- maintain the most current snapshot and then archive the original and incremental snapshots to Glacier
Just maintain a single snapshot of the EBS volume since the latest snapshot is both incremental and complete.
You recently launched a news website which is expected to be visited by millions of people around the world. You chose to deploy the website in AWS to take advantage of its extensive range of cloud services and global infrastructure. Aside from AWS Region and Availability Zones, which of the following is part of the AWS Global Infrastructure that is used for content distribution?
Edge Locations
An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes. Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Choose 2)
- all data moving between the volume and the instance are encrypted
- snapshots are not automatically encrypted
- snapshots are automatically encrypted
- only the data in the volume is encrypted and not all the data moving between the volume and the instance
- the volumes created from the encrypted snapshots are not encrypted
When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
- Data at rest inside the volume
- All data moving between the volume and the instance
- All snapshots created from the volume
- All volumes created from those snapshots
Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance.
You are working as a Solutions Architect for a multinational IT consultancy company where you are managing an application hosted in an Auto Scaling group of EC2 instances which stores data in an S3 bucket. You must ensure that the data are encrypted at rest using an encryption key that is both provided and managed by the company. This change should also provide AES-256 encryption to their data to comply with the strict security policy of the company. Which of the following actions should you implement to achieve this? (Choose 2)
- implement S3 server-side encryption with AWS KMS
- encrypt the data on the client-side before sending to S3 using their own master key
- implement S3 server-side encryption with customer-provided keys (SSE-C)
- use SSL to encrypt the data while in transit to S3
- implement S3 server-side encryption with Amazon-managed encryption keys
Encrypt the data on the client-side before sending to S3 using their own master key + implement S3 server-side encryption with customer-provided keys (SSE-C)
(using SSL to encrypt the data while in transit to S3 is incorrect because the requirement is to only secure the data at rest and not data in transit)
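A boto3 sketch of the SSE-C option (the bucket, key, and the ad-hoc key generation are placeholders; in practice the key would come from the company's own key management system):

```python
import os
import boto3

s3 = boto3.client("s3")

# Company-managed 256-bit key; generated here only for illustration
customer_key = os.urandom(32)

# S3 encrypts the object at rest with the supplied key (AES-256) and discards the key
s3.put_object(
    Bucket="financial-data-bucket",
    Key="records/2023-q1.csv",
    Body=b"...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,      # boto3 derives the required key MD5 automatically
)

# The same key must be supplied again to read the object back
obj = s3.get_object(
    Bucket="financial-data-bucket",
    Key="records/2023-q1.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```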
A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 12 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow exponentially. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application. Which type of database service should the Architect use?
Amazon Aurora
(Amazon Redshift is incorrect because this is primarily used for OLAP applications and not for OLTP. Moreover, it doesn’t scale automatically to handle the exponential growth of the database.)
(Amazon DynamoDB is incorrect because although you can use this to have an ACID-compliant database, it is not capable of handling complex queries and highly transactional (OLTP) workloads.)
(Amazon RDS is incorrect because although it is an ACID-compliant relational database that can handle complex queries and transactional (OLTP) workloads, it is not scalable to handle the growth of the database. Amazon Aurora is the better choice as its underlying storage can grow automatically as needed.)
What is AWS Database Migration Service for?
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
What is Amazon Neptune?
Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets.
A financial company wants to store their data in Amazon S3 but at the same time, they want to store their frequently accessed data locally on their on-premises server. This is due to the fact that they do not have the option to extend their on-premises storage, which is why they are looking for a durable and scalable storage service to use in AWS. What is the best solution for this scenario?
Use Storage Gateway - Cached Volumes
By using Cached volumes, you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally in your on-premises network. Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. This is the best solution for this scenario.
A loan processing application is hosted in a single On-Demand EC2 instance in your VPC. To improve the scalability of your application, you have to use Auto Scaling to automatically add new EC2 instances to handle a surge of incoming requests. Which of the following items should be done in order to add an existing EC2 instance to an Auto Scaling group? (Choose 2)
- the instance is launched into one of the AZs defined in your Auto Scaling Group
- you must stop the instance first
- you have to ensure that the AMI used to launch the instance no longer exists
- you have to ensure that the AMI used to launch the instance still exists
- you have to ensure that the instance is in a different AZ as the Auto Scaling group
The instance that you want to attach must meet the following criteria (a sketch of the attach call follows this list):
- The instance is in the running state.
- The AMI used to launch the instance must still exist.
- The instance is not a member of another Auto Scaling group.
- The instance is launched into one of the Availability Zones defined in your Auto Scaling group.
- If the Auto Scaling group has an attached load balancer, the instance and the load balancer must both be in EC2-Classic or the same VPC. If the Auto Scaling group has an attached target group, the instance and the load balancer must both be in the same VPC.
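A short boto3 sketch of attaching an existing instance that meets these criteria (the instance ID and group name are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach the existing running instance to the Auto Scaling group
autoscaling.attach_instances(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="loan-processing-asg",
)
```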
A local bank has an in-house application which handles sensitive financial data in a private subnet. After the data is processed by the EC2 worker instances, they will be delivered to S3 for ingestion by other services. How should you design this solution so that the data does not pass through the public Internet?
Configure a VPC Gateway Endpoint along with a corresponding route entry that directs the data to S3
The important concept that you have to understand in the scenario is that your VPC and your S3 bucket are located within the larger AWS network. However, the traffic coming from your VPC to your S3 bucket is traversing the public Internet by default. To better protect your data in transit, you can set up a VPC endpoint so the incoming traffic from your VPC will not pass through the public Internet, but instead through the private AWS network.
You are an IT Consultant for a top investment bank which is in the process of building its new Forex trading platform. To ensure high availability and scalability, you designed the trading platform to use an Elastic Load Balancer in front of an Auto Scaling group of On-Demand EC2 instances across multiple Availability Zones. For its database tier, you chose to use a single Amazon Aurora instance to take advantage of its distributed, fault-tolerant and self-healing storage system. In the event of system failure on the primary database instance, what happens to Amazon Aurora during the failover?
Aurora will first attempt to create a new DB instance in the same AZ as the original instance. If unable to do so, Aurora will attempt to create a new DB instance in a different AZ.
If you do not have an Amazon Aurora Replica (i.e. single instance) and are not running Aurora Serverless, Aurora will attempt to create a new DB Instance in the same Availability Zone as the original instance. This replacement of the original instance is done on a best-effort basis and may not succeed, for example, if there is an issue that is broadly affecting the Availability Zone.
(The options that say: Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary and Amazon Aurora flips the A record of your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary are incorrect because this will only happen if you are using an Amazon Aurora Replica. In addition, Amazon Aurora flips the canonical name record (CNAME) and not the A record (IP address) of the instance.)
You are designing an online banking application which needs to have a distributed session data management. Currently, the application is hosted on an Auto Scaling group of On-Demand EC2 instances across multiple Availability Zones with a Classic Load Balancer that distributes the load. Which of the following options should you do to satisfy the given requirement?
Use Amazon ElastiCache
In this question, the keyword is distributed session data management. In AWS, you can use Amazon ElastiCache which offers fully managed Redis and Memcached service to manage and store session data for your web applications.
A data analytics company keeps a massive volume of data which they store in their on-premises data center. To scale their storage systems, they are looking for cloud-backed storage volumes that they can mount using Internet Small Computer System Interface (iSCSI) devices from their on-premises application servers. They have an on-site data analytics application which frequently access the latest data subsets locally while the older data are rarely accessed. You are required to minimize the need to scale the on-premises storage infrastructure while still providing their web application with low-latency access to the data. Which type of AWS Storage Gateway service will you use to meet the above requirements?
Cached Volume Gateway
In this scenario, the technology company is looking for a storage service that will enable their analytics application to frequently access the latest data subsets and not the entire data set because it was mentioned that the old data are rarely being used. This requirement can be fulfilled by setting up a Cached Volume Gateway in AWS Storage Gateway.
You are working as a Solutions Architect for a leading technology company where you are instructed to troubleshoot the operational issues of your cloud architecture by logging the AWS API call history of your AWS resources. You need to quickly identify the most recent changes made to resources in your environment, including creation, modification, and deletion of AWS resources. One of the requirements is that the generated log files should be encrypted to avoid any security issues. Which of the following is the most suitable approach to implement the encryption?
Use CloudTrail with its default settings
By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE).
You are building a prototype for a cryptocurrency news website of a small startup. The website will be deployed to a Spot EC2 Linux instance and will use Amazon Aurora as its database. You requested a Spot instance at a maximum price of $0.04/hr, which was fulfilled immediately. After 90 minutes, the Spot price increased to $0.06/hr and your instance was terminated by AWS. In this scenario, what would be the total cost of running your Spot instance?
$0.06
Since the Spot instance has been running for more than an hour (past the first instance hour), you will be charged from the time it was launched until the time it was terminated by AWS. The computation for your 90-minute usage is $0.04 (60 minutes) + $0.02 (30 minutes) = $0.06; hence, the correct answer is $0.06.
How will I be charged if my Spot instance is interrupted?
If your Spot instance is terminated or stopped by Amazon EC2 in the first instance hour, you will not be charged for that usage. However, if you terminate the instance yourself, you will be charged to the nearest second. If the Spot instance is terminated or stopped by Amazon EC2 in any subsequent hour, you will be charged for your usage to the nearest second. If you are running on Windows and you terminate the instance yourself, you will be charged for an entire hour.
You are setting up a configuration management in your existing cloud architecture where you have to deploy and manage your EC2 instances including the other AWS resources using Chef and Puppet. Which of the following is the most suitable service to use in this scenario?
AWS OpsWorks
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.
You are working as an IT Consultant for a large financial firm. They have a requirement to store irreproducible financial documents using Amazon S3. For their quarterly reporting, the files are required to be retrieved after a period of 3 months. There will be some occasions when a surprise audit will be held, which requires access to the archived data that they need to present immediately. What will you do to satisfy this requirement in a cost-effective way?
Amazon S3 IA
In this scenario, the requirement is to have a storage option that is cost-effective and has the ability to access or retrieve the archived data immediately. The cost-effective options are Amazon Glacier Deep Archive and Amazon S3 Standard - Infrequent Access (Standard-IA). However, the former option is not designed for rapid retrieval of data, which is required for the surprise audit.
You have an On-Demand EC2 instance with an attached EBS volume. There is a scheduled job that creates a snapshot of this EBS volume every midnight at 12 AM when the instance is not used. One night, there has been a production incident where you need to perform a change on both the instance and on the EBS volume at the same time, when the snapshot is currently taking place. Which of the following scenario is true when it comes to the usage of an EBS volume while the snapshot is in progress?
The EBS volume can be used while the snapshot is in progress…
Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed.
While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume hence, you can still use the EBS volume normally.
An application is hosted in an Auto Scaling group of EC2 instances. To improve the monitoring process, you have to configure the current capacity to increase or decrease based on a set of scaling adjustments. This should be done by specifying the scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process. Which of the following is the most suitable type of scaling policy that you should use?
Step Scaling
Amazon EC2 Auto Scaling supports the following types of scaling policies:
Target tracking scaling - Increase or decrease the current capacity of the group based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home – you select a temperature and the thermostat does the rest.
Step scaling - Increase or decrease the current capacity of the group based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach.
Simple scaling - Increase or decrease the current capacity of the group based on a single scaling adjustment.
If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in an Auto Scaling group, then it is recommended that you use target tracking scaling policies. Otherwise, it is better to use step scaling policies instead.
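As a rough illustration, a step scaling policy can be created through the Auto Scaling API. The sketch below uses boto3 with hypothetical group, policy, and breach-interval values, and simply shows how step adjustments map the size of the alarm breach to a capacity change.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Hypothetical Auto Scaling group and policy names.
    response = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-app-asg",
        PolicyName="cpu-step-scale-out",
        PolicyType="StepScaling",
        AdjustmentType="ChangeInCapacity",
        StepAdjustments=[
            # Breach of the alarm threshold by 0-20 units -> add 1 instance.
            {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
            # Breach by more than 20 units -> add 3 instances.
            {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
        ],
    )

    # The returned PolicyARN is then used as an AlarmAction on the CloudWatch alarm
    # (e.g. on CPUUtilization) that triggers the scaling process.
    print(response["PolicyARN"])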
Your IT Manager asks you to create a decoupled application whose process includes dependencies on EC2 instances and servers located in your company’s on-premises data center. Which of these options are you least likely to recommend as part of that process?
- Establish a Direct Connect connection from your on-premises network and VPC
- SQS Polling from an EC2 instance using IAM user credentials
- SQS polling from an EC2 instance deployed with an IAM role
- An SWF workflow
SQS polling from an EC2 instance using IAM user credentials
For decoupled applications, it is best to use SQS and SWF, both of which appear among the options. Note that this question asks for the option that you would be LEAST likely to recommend.
SQS polling from an EC2 instance using IAM user credentials is not the recommended approach; the instance should use an IAM role instead.
You are working as a Solutions Architect in a global investment bank which requires corporate IT governance and cost oversight of all of their AWS resources across their divisions around the world. Their corporate divisions want to maintain administrative control of the discrete AWS resources they consume and ensure that those resources are separate from other divisions. Which of the following options will support the autonomy of each corporate division while enabling the corporate IT to maintain governance and cost oversight? (Select TWO.)
- use AWS consolidated billing by creating AWS organizations to link the divisions’ accounts to a parent corporate account
- create separate VPCs for each division within the corporate IT aws account
- enable IAM cross-account access for all corporate IT administrators in each child account
- create separate availability zones for each division within the corporate IT aws account
In this scenario, enabling IAM cross-account access for all corporate IT administrators in each child account and using AWS Consolidated Billing by creating AWS Organizations to link the divisions’ accounts to a parent corporate account are the correct choices. The combined use of IAM and Consolidated Billing will support the autonomy of each corporate division while enabling corporate IT to maintain governance and cost oversight.
You are working as an AWS Engineer in a major telecommunications company in which you are tasked to make a network monitoring system. You launched an EC2 instance to host the monitoring system and used CloudWatch to monitor, store, and access the log files of your instance. Which of the following provides an automated way to send log data to CloudWatch Logs from your Amazon EC2 instance?
CloudWatch Logs agent
The CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances. The agent is comprised of the following components: a plug-in to the AWS CLI that pushes log data to CloudWatch Logs, a script (daemon) that initiates the process to push data to CloudWatch Logs, and a cron job that ensures that the daemon is always running.
You are trying to enable Cross-Region Replication to your S3 bucket but this option is disabled. Which of the following options is a valid reason for this?
- in order to use the Cross-Region Replication feature in S3, you need to first enable versioning on the bucket
- the Cross-Region Replication feature is only available for S3-IA
- the Cross-Region Replication feature is only available for Amazon S3 RRS
- this is a premium feature only available for AWS Enterprise accounts
in order to use the Cross-Region Replication feature in S3, you need to first enable versioning on the bucket
To enable the cross-region replication feature in S3, the following items should be met:
- The source and destination buckets must have versioning enabled.
- The source and destination buckets must be in different AWS Regions.
- Amazon S3 must have permissions to replicate objects from that source bucket to the destination bucket on your behalf.
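A minimal boto3 sketch of wiring this up, assuming hypothetical bucket names and a pre-created replication role: versioning is enabled on both buckets first, then a replication rule is attached to the source bucket.

    import boto3

    s3 = boto3.client("s3")

    # Both buckets must have versioning enabled before replication can be configured.
    for bucket in ("source-bucket", "destination-bucket"):
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={"Status": "Enabled"},
        )

    # Hypothetical role ARN; the role lets S3 replicate objects on your behalf.
    s3.put_bucket_replication(
        Bucket="source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
            "Rules": [
                {
                    "Status": "Enabled",
                    "Prefix": "",  # replicate all objects
                    "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
                }
            ],
        },
    )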
A WordPress website hosted in an EC2 instance, which has an additional EBS volume attached, was mistakenly deployed in the us-east-1a Availability Zone due to a misconfiguration in your CloudFormation template. There is a requirement to quickly rectify the issue by moving and attaching the EBS volume to a new EC2 instance in the us-east-1b Availability Zone. As the Solutions Architect of the company, which of the following should you do to solve this issue?
First, create a snapshot of the EBS volume. Afterwards, create a volume using the snapshot in the other AZ.
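A rough boto3 sketch of the same steps with a hypothetical volume ID: snapshot the volume, wait for the snapshot to complete, then create a new volume from it in us-east-1b so it can be attached to the new instance there.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical volume ID; snapshot the volume that was created in us-east-1a.
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Move WordPress data volume to us-east-1b",
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

    # Snapshots are regional, so a new volume can be created in any AZ of the region.
    new_volume = ec2.create_volume(
        SnapshotId=snapshot["SnapshotId"],
        AvailabilityZone="us-east-1b",
    )
    print(new_volume["VolumeId"])  # attach this volume to the new instance in us-east-1b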
A company would like to store their old yet confidential corporate files that are infrequently accessed. Which is the MOST cost-efficient solution in AWS that should you recommend?
- S3
- Glacier
- Storage Gateway
- EBS
Glacier
A multinational company has been building its new data analytics platform with high-performance computing (HPC) workloads, which requires a scalable, POSIX-compliant storage service. The data needs to be stored redundantly across multiple AZs, and the service must allow concurrent connections from thousands of EC2 instances hosted in multiple Availability Zones. Which of the following AWS storage services is the most suitable one to use in this scenario?
EFS
In this question, you should take note of this phrase: “allows concurrent connections from multiple EC2 instances”. There are various AWS storage options that you can choose but whenever these criteria show up, always consider using EFS instead of using EBS Volumes which is mainly used as a “block” storage and can only have one connection to one EC2 instance at a time.
You are working as a Solutions Architect for a major accounting firm, and they have a legacy general ledger accounting application that needs to be moved to AWS. However, the legacy application has a dependency on multicast networking. In this scenario, which of the following options should you consider to ensure the legacy application works in AWS?
- all of the above
- provision Elastic Network Interfaces between the subnets
- Create all the subnets on another VPC and enable VPC peering
- create a virtual overlay network on the OS level of the instance
create a virtual overlay network on the OS level of the instance
Creating a virtual overlay network running on the OS level of the instance is correct because overlay multicast is a method of building IP level multicast across a network fabric supporting unicast IP routing, such as Amazon Virtual Private Cloud (Amazon VPC).
(Amazon VPC does not support multicast or broadcast networking)
You have a fleet of running Spot EC2 instances behind an Application Load Balancer. The incoming traffic comes from various users across multiple AWS regions and you would like to have the user’s session shared among your fleet of instances. You are required to set up a distributed session management layer that will provide a scalable and shared data storage for the user sessions. Which of the following would be the best choice to meet the requirement while still providing sub-millisecond latency for your users?
- ElastiCache in-memory caching
- Multi-master DynamoDB
- ELB sticky sessions
- Multi-AZ RDS
ElastiCache in-memory caching
For sub-millisecond latency caching, ElastiCache is the best choice.
(Multi-master DynamoDB and Multi-AZ RDS are incorrect because although you can use DynamoDB and RDS for storing session state, these two are not the best choices in terms of cost-effectiveness and performance when compared to ElastiCache. There is a significant difference in terms of latency if you used DynamoDB and RDS when you store the session data.)
You recently created a brand new IAM User with a default setting using AWS CLI. This is intended to be used to send API requests to your S3, DynamoDB, Lambda, and other AWS resources of your cloud infrastructure. Which of the following must be done to allow the user to make API calls to your AWS resources?
- Enable MFA for the user
- create a set of Access Keys for the user and attach the necessary permissions
- Assign an IAM policy to the user to allow it to send API calls
- Do nothing as the IAM user is already capable of sending API calls to your AWS resources
create a set of Access Keys for the user and attach the necessary permissions
You can choose the credentials that are right for your IAM user. When you use the AWS Management Console to create a user, you must choose to at least include a console password or access keys. By default, a brand new IAM user created using the AWS CLI or AWS API has no credentials of any kind. You must create the type of credentials for an IAM user based on the needs of your user.
(Assigning an IAM policy to the user to allow it to send API calls is incorrect because adding a new IAM policy to the new user will not grant the Access Keys needed to make API calls to the AWS resources.)
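For illustration, the two steps (create access keys, attach the needed permissions) might look like the boto3 sketch below; the user name and policy ARN are assumptions.

    import boto3

    iam = boto3.client("iam")

    # Hypothetical user; a new CLI-created user has no credentials of any kind by default.
    keys = iam.create_access_key(UserName="api-service-user")
    print(keys["AccessKey"]["AccessKeyId"])  # the secret is only returned once, store it securely

    # Attach the permissions the user needs to call S3, DynamoDB, Lambda, etc.
    iam.attach_user_policy(
        UserName="api-service-user",
        PolicyArn="arn:aws:iam::123456789012:policy/app-api-access",  # assumed customer-managed policy
    )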
You are working for a startup that builds Internet of Things (IOT) devices and monitoring application. They are using IOT sensors to monitor all data by using Amazon Kinesis configured with default settings. You then send the data to an Amazon S3 bucket after 2 days. When you checked the data in S3, there are only data for the last day and nothing for the first day. What is the root cause of this issue?
By default, data records in Kinesis are only accessible for 24 hours from the time they are added to a stream
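If older records need to be kept available, the stream's retention period can be raised beyond the 24-hour default. A minimal boto3 sketch with a hypothetical stream name:

    import boto3

    kinesis = boto3.client("kinesis")

    # Extends retention from the 24-hour default to 48 hours for the assumed stream.
    kinesis.increase_stream_retention_period(
        StreamName="iot-sensor-stream",
        RetentionPeriodHours=48,
    )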
The company that you are working for has instructed you to create a cost-effective cloud solution for their online movie ticketing service. Your team has designed a solution of using a fleet of Spot EC2 instances to host the new ticketing web application. You requested a spot instance at a maximum price of $0.06/hr which has been fulfilled immediately. After 45 minutes, the spot price increased to $0.08/hr and then your instance was terminated by AWS. What was the total EC2 compute cost of running your spot instances?
$0.00
If your Spot instance is terminated or stopped by Amazon EC2 in the first instance hour, you will not be charged for that usage. However, if you terminate the instance yourself, you will be charged to the nearest second.
If the Spot instance is terminated or stopped by Amazon EC2 in any subsequent hour, you will be charged for your usage to the nearest second. If you are running on Windows and you terminate the instance yourself, you will be charged for an entire hour.
Your boss has asked you to launch a new MySQL RDS which ensures that you are available to recover from a database crash. Which of the below is not a recommended practice for RDS?
- use MyISAM as the storage engine for MySQL
- partition your large tables so that file sizes do not exceed the 16 TB limit
- ensure that automated backups are enabled for the RDS
- use the InnoDB as the storage engine for MySQL
Using MyISAM as the storage engine for MySQL is not recommended. The recommended storage engine for MySQL is InnoDB and not MyISAM.
A multinational corporate and investment bank is regularly processing steady workloads of accruals, loan interests, and other critical financial calculations every night at 10 PM to 3 AM on their on-premises data center for their corporate clients. Once the process is done, the results are then uploaded to the Oracle General Ledger which means that the processing should not be delayed nor interrupted. The CTO has decided to move their IT infrastructure to AWS to save cost and to improve the scalability of their digital financial services. As the Senior Solutions Architect, how can you implement a cost-effective architecture in AWS for their financial system?
Use Scheduled Reserved Instances, which provide compute capacity that is reserved on a specified recurring schedule
Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term.
You were hired as an IT Consultant in a startup cryptocurrency company that wants to go global with their international money transfer app. Your project is to make sure that the database of the app is highly available on multiple regions.
What are the benefits of adding Multi-AZ deployments in Amazon RDS? (Select TWO.)
- creates a primary DB instance and synchronously replicates the data to a standby instance in a different AZ in a different region
- provides SQL optimization
- it makes the database fault-tolerant to an AZ failure
- increased database availability in the case of system upgrades like OS patching or DB instance scaling
- significantly increase the database performance
- it makes the database fault-tolerant to an AZ failure
- increased database availability in the case of system upgrades like OS patching or DB instance scaling
You are working for a weather station in Asia with a weather monitoring system that needs to be migrated to AWS. Since the monitoring system requires a low network latency and high network throughput, you decided to launch your EC2 instances to a new cluster placement group. The system was working fine for a couple of weeks, however, when you try to add new instances to the placement group that already has running EC2 instances, you receive an ‘insufficient capacity error’. How will you fix this issue?
- create another Placement Group and launch the new instances in the new group
- Submit a capacity increase request to AWS as you are initially limited to only 12 instances per placement group
- verify all running instances are of the same size and type and then try the launch again
- stop and restart the instances in the Placement Group and then try the launch again
stop and restart the instances in the Placement Group and then try the launch again
The option that says: Stop and restart the instances in the Placement Group and then try the launch again is correct because this issue can often be resolved simply by trying the launch again. If the instances are stopped and restarted, AWS may move the instances to hardware that has capacity for all of the requested instances.
As the Solutions Architect, you have built a photo-sharing site for an entertainment company. The site was hosted using 3 EC2 instances in a single availability zone with a Classic Load Balancer in front to evenly distribute the incoming load. What should you do to enable your Classic Load Balancer to bind a user’s session to a specific instance?
Sticky sessions
By default, a Classic Load Balancer routes each request independently to the registered instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user’s session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance.
A tech company is running two production web servers hosted on Reserved EC2 instances with EBS-backed root volumes. These instances have a consistent CPU load of 90%. Traffic is being distributed to these instances by an Elastic Load Balancer. In addition, they also have Multi-AZ RDS MySQL databases for their production, test, and development environments.
What recommendation would you make to reduce cost in this AWS environment without affecting availability and performance of mission-critical systems? Choose the best answer.
- consider not using a multi-AZ RDS deployment for the development and test data
One thing that you should notice here is that the company is using Multi-AZ databases in all of their environments, including their development and test environments. This is costly and unnecessary as these two environments are not critical. It is better to limit Multi-AZ deployments to production environments to reduce costs, which is why the option that says: Consider not using a Multi-AZ RDS deployment for the development and test database is the correct answer.
You have several EC2 Reserved Instances in your account that needs to be decommissioned and shut down since they are no longer required. The data is still required by the Audit team. Which of the following steps can be taken for this scenario? (Select TWO.)
- convert the EC2 instances to On-Demand instances
- you can opt to sell these EC2 instances on the AWS Reserved Instance marketplace
- Convert the EC2 instances to Spot instances with a persistent Spot request type
- take snapshots of the EBS volumes and terminate the EC2 instances
You can take snapshots of the EBS volumes to save the data and then sell the Reserved Instances on the Reserved Instance Marketplace.
You deployed a web application to an EC2 instance that adds a variety of photo effects to a picture uploaded by the users. The application will put the generated photos to an S3 bucket by sending PUT requests to the S3 API. What is the best option for this scenario considering that you need to have API credentials to be able to send a request to the S3 API?
Create a role in IAM. Afterwards, assign this role to a new EC2 instance.
The best option is to create a role in IAM. Afterwards, assign this role to a new EC2 instance. Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances.
(storing your API credentials in S3 Glacier is incorrect as S3 Glacier is used for data archives and not for managing API credentials)
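With an instance role attached, the application never handles long-term keys. A minimal boto3 sketch (bucket and file names are hypothetical) showing that no credentials appear anywhere in the code:

    import boto3

    # No access keys anywhere: when this runs on an EC2 instance with an IAM role attached,
    # boto3 automatically fetches temporary credentials from the instance metadata service
    # and signs the request with them.
    s3 = boto3.client("s3")

    with open("photo-with-effects.jpg", "rb") as image:
        s3.put_object(
            Bucket="generated-photos-bucket",  # hypothetical bucket name
            Key="uploads/photo-with-effects.jpg",
            Body=image,
        )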
Your company has developed a financial analytics web application hosted in a Docker container using MEAN (MongoDB, Express.js, AngularJS, and Node.js) stack. You want to easily port that web application to AWS Cloud which can automatically handle all the tasks such as balancing load, auto-scaling, monitoring, and placing your containers across your cluster. Which of the following services can be used to fulfill this requirement?
AWS Elastic Beanstalk
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren’t supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
What is OpsWorks?
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet
What is AWS CodeDeploy?
CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions. It allows you to rapidly release new features, update Lambda function versions, avoid downtime during application deployment, and handle the complexity of updating your applications, without many of the risks associated with error-prone manual deployments.
A web application is hosted on a fleet of EC2 instances inside an Auto Scaling Group with a couple of Lambda functions for ad hoc processing. Whenever you release updates to your application every week, there are inconsistencies where some resources are not updated properly. You need a way to group the resources together and deploy the new version of your code consistently among the groups with minimal downtime. Which among these options should you do to satisfy the given requirement with the least effort?
Use deployment groups in CodeDeploy to automate code deployments in a consistent manner.
A commercial bank has designed their next generation online banking platform to use a distributed system architecture. As their Software Architect, you have to ensure that their architecture is highly scalable, yet still cost-effective. Which of the following will provide the most suitable solution for this scenario?
- launch multiple EC2 instances behind an ALB to host your application services, and SWF which will act as a highly-scalable buffer that stores messages as they travel between distributed applications
- Launch an Auto-scaling group of EC2 instances to host your application services and an SQS queue. Include an Auto Scaling trigger to watch the SQS queue size which will either scale in or out the number of EC2 instances based on the queue
- launch multiple on demand ec2 instances to host your application services and an SQS queue which will act as a highly-scalable buffer that stores messages as they travel between distributed applications
Launch an Auto-scaling group of EC2 instances to host your application services and an SQS queue. Include an Auto Scaling trigger to watch the SQS queue size which will either scale in or out the number of EC2 instances based on the queue
There are three main parts in a distributed messaging system: the components of your distributed system, which can be hosted on EC2 instances; your queue (distributed on Amazon SQS servers); and the messages in the queue.
To improve the scalability of your distributed system, you can add Auto Scaling group to your EC2 instances.
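One possible way to express the trigger, sketched with boto3 and hypothetical names: a simple scaling policy on the Auto Scaling group, invoked by a CloudWatch alarm on the queue's ApproximateNumberOfMessagesVisible metric.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Hypothetical group, policy, and queue names; scale out by one instance when the backlog grows.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="worker-asg",
        PolicyName="scale-out-on-queue-depth",
        PolicyType="SimpleScaling",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    cloudwatch.put_metric_alarm(
        AlarmName="sqs-backlog-high",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "orders-queue"}],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=2,
        Threshold=100,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],  # the alarm invokes the scaling policy
    )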
You are the Solutions Architect of a software development company where you are required to connect the on-premises infrastructure to their AWS cloud. Which of the following AWS services can you use to accomplish this? (Select TWO.)
- AWS direct connect
- VPC peering
- NAT Gateway
- Amazon Connect
- IPsec VPN connection
Direct Connect + IPsec VPN Connection
You can connect your VPC to remote networks by using a VPN connection which can be Direct Connect, IPsec VPN connection, AWS VPN CloudHub, or a third party software VPN appliance. Hence, IPsec VPN connection and AWS Direct Connect are the correct answers.
(Amazon Connect is incorrect because this is not a VPN connectivity option. It is actually a self-service, cloud-based contact center service in AWS that makes it easy for any business to deliver better customer service at a lower cost. Amazon Connect is based on the same contact center technology used by Amazon customer service associates around the world to power millions of customer conversations.)
What is Amazon Connect?
It is actually a self-service, cloud-based contact center service in AWS that makes it easy for any business to deliver better customer service at a lower cost. Amazon Connect is based on the same contact center technology used by Amazon customer service associates around the world to power millions of customer conversations.
(Amazon Connect is NOT a VPN connectivity option unlike Direct Connect.)
A multinational manufacturing company has multiple accounts in AWS to separate their various departments such as finance, human resources, engineering and many others. There is a requirement to ensure that certain access to services and actions are properly controlled to comply with the security policy of the company. As the Solutions Architect, which is the most suitable way to set up the multi-account AWS environment of the company?
- use AWS Organizations and Service Control Policies to control services on each account
- Connect all departments by setting up a cross-account access to each of the AWS accounts of the company. Create and attach IAM policies to your resources based on their respective departments to control access.
- set up a common IAM policy that can be applied across all AWS accounts
- provide access to externally authenticated users via Identity Federation. Set up an IAM role to specify permissions for users from each department whose identity is federated from your organization or a third-party identity provider
use AWS Organizations and Service Control Policies to control services on each account
AWS Organizations offers policy-based management for multiple AWS accounts. With Organizations, you can create groups of accounts, automate account creation, apply and manage policies for those groups. Organizations enables you to centrally manage policies across multiple accounts, without requiring custom scripts and manual processes. It allows you to create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts.
(The option that says: Connect all departments by setting up a cross-account access to each of the AWS accounts of the company. Create and attach IAM policies to your resources based on their respective departments to control access is incorrect because although you can set up cross-account access to each department, this entails a lot of configuration compared with using AWS Organizations and Service Control Policies (SCPs). Cross-account access would be a more suitable choice if you only have two accounts to manage, but not for multiple accounts.)
You are a Solutions Architect in an intelligence agency that is currently hosting a learning and training portal in AWS. Your manager instructed you to launch a large EC2 instance with an attached EBS Volume and enable Enhanced Networking. What are the valid case scenarios in using Enhanced Networking? (Select TWO.)
- when you need consistently lower inter-instance latencies
- when you need high latency networking
- when you need a dedicated connection to your on-premises data center
- when you need a low packet-per-second performance
- when you need a higher packet-per-second performance
Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.
- when you need consistently lower inter-instance latencies
- when you need a higher packet-per-second performance
You are a Solutions Architect working for a software development company. You are planning to launch a fleet of EBS-backed EC2 instances and want to automatically assign each instance with a static private IP address which does not change even if the instances are restarted. What should you do to accomplish this?
Launch the instances in the AWS VPC
In EC2-Classic, your EC2 instance receives a private IPv4 address from the EC2-Classic range each time it’s started. In EC2-VPC on the other hand, your EC2 instance receives a static private IPv4 address from the address range of your default VPC. Hence, the correct answer is launching the instances in the Amazon Virtual Private Cloud (VPC) and not launching the instances in EC2-Classic
You are working for a startup which develops an AI-based traffic monitoring service. You need to register a new domain called www.tutorialsdojo-ai.com and set up other DNS entries for the other components of your system in AWS. Which of the following is not supported by Amazon Route 53?
- DNSSEC (Domain Name System Security Extensions)
- PTR (pointer record)
- SRV (service locator)
- SPF (sender policy framework)
Amazon Route 53's DNS service does not support DNSSEC at this time. However, its domain name registration service supports configuration of signed DNSSEC keys for domains when DNS service is configured at another provider.
AWS Route 53 currently supports:
- A (address record)
- AAAA (IPv6 address record)
- CNAME (canonical name record)
- CAA (certification authority authorization)
- MX (mail exchange record)
- NAPTR (name authority pointer record)
- NS (name server record)
- PTR (pointer record)
- SOA (start of authority record)
- SPF (sender policy framework)
- SRV (service locator)
- TXT (text record)
A top university has recently launched its online learning portal where the students can take e-learning courses from the comforts of their homes. The portal is on a large On-Demand EC2 instance with a single Amazon Aurora database. How can you improve the availability of your Aurora database to prevent any unnecessary downtime of the online portal?
Use a Multi-AZ deployment; this is the primary way to improve the availability of the database.
In addition, create Amazon Aurora Replicas (for most use cases, including read scaling and high availability, using Amazon Aurora Replicas is recommended).
AWS hosts a variety of public datasets such as satellite imagery, geospatial, or genomic data that you want to use for your web application hosted in Amazon EC2. If you use these datasets, how much will it cost you?
AWS hosts a variety of public datasets that anyone can access for free.
You have designed and built a new AWS architecture. After deploying your application to an On-Demand EC2 instance, you found that there is an issue in your application when connecting to port 443. After troubleshooting the issue, you added port 443 to the security group of the instance. How long will it take before the changes are applied to all of the resources in your VPC?
Immediately
The correct answer is Immediately. Changes made in a security group are immediately implemented. There is no need to wait for some amount of time for propagation nor reboot any instances for your changes to take effect.
A Solutions Architect designed a real-time data analytics system based on Kinesis Data Stream and Lambda. A week after the system has been deployed, the users noticed that it performed slowly as the data rate increases. The Architect identified that the performance of the Kinesis Data Streams is causing this problem. Which of the following should the Architect do to improve performance?
Increase the number of shards of the Kinesis stream by using the “UpdateShardCount” command
Amazon Kinesis Data Streams supports resharding, which lets you adjust the number of shards in your stream to adapt to changes in the rate of data flow through the stream.
There are two types of resharding operations: shard split and shard merge. In a shard split, you divide a single shard into two shards. In a shard merge, you combine two shards into a single shard. Splitting increases the number of shards in your stream and therefore increases the data capacity of the stream. Because you are charged on a per-shard basis, splitting increases the cost of your stream. Similarly, merging reduces the number of shards in your stream and therefore decreases the data capacity—and cost—of the stream.
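A minimal boto3 sketch of uniform resharding, with a hypothetical stream name and target shard count:

    import boto3

    kinesis = boto3.client("kinesis")

    # Doubling the shard count doubles the stream's ingest capacity (and its cost).
    kinesis.update_shard_count(
        StreamName="analytics-stream",
        TargetShardCount=4,          # previously 2 shards, for example
        ScalingType="UNIFORM_SCALING",
    )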
You are working as a Senior Solutions Architect for a data analytics company which has a VPC for their human resource department, and another VPC located on a different region for their finance department. You need to configure your architecture to allow the finance department to access all resources that are in the human resource department and vice versa. Which type of networking connection in AWS should you set up to satisfy the above requirement?
Inter-Region VPC Peering
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.
You are working as the Solutions Architect for a global technology consultancy firm which has an application that uses multiple EC2 instances located in various AWS regions such as US East (Ohio), US West (N. California), and EU (Ireland). Your manager instructed you to set up a latency-based routing to route incoming traffic for www.tutorialsdojo.com to all the EC2 instances across all AWS regions. Which of the following options can satisfy the given requirement?
Use Route 53 latency-based routing to distribute the load to the multiple EC2 instances across all AWS regions
What is AWS DataSync?
AWS DataSync is a service that provides a fast way to move large amounts of data online between on-premises storage and Amazon S3 or Amazon EFS.
A mobile application stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for this scenario?
Web Identity Federation
With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) —such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure because you don’t have to embed and distribute long-term security credentials with your application.
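At the API level, the token exchange can be done with AWS STS. The sketch below assumes a hypothetical IAM role ARN and an ID token already obtained from the OIDC-compatible provider.

    import boto3

    # assume_role_with_web_identity does not require AWS credentials to call;
    # the identity token from the IdP is what authenticates the request.
    sts = boto3.client("sts")

    response = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/mobile-app-s3-access",  # hypothetical role
        RoleSessionName="mobile-user-session",
        WebIdentityToken="<ID token returned by the identity provider>",
        DurationSeconds=3600,
    )

    # The temporary credentials map to the IAM role's permissions (e.g. access to the photos bucket).
    creds = response["Credentials"]
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )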
You are working as a Solution Architect for a startup in Silicon Valley. Their application architecture is currently set up to store both the access key ID and the secret access key in a plain text file on a custom Amazon Machine Image (AMI). The EC2 instances, which are created by using this AMI, are using the stored access keys to connect to a DynamoDB table. What should you do to make the current architecture more secure?
Remove the stored access keys in the AMI. Create a new IAM role with permissions to access the DynamoDB table and assign it to the EC2 instances…
You should use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use an IAM role, you don’t have to distribute long-term credentials (such as a user name and password or access keys) to an EC2 instance.
A startup company wants to launch a fleet of EC2 instances on AWS. Your manager wants to ensure that the Java programming language is installed automatically when the instance is launched. In which of the below configurations can you achieve this requirement?
- User data
- EC2Config Service
- AWS Config
- IAM Roles
User Data
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can write and run scripts that install new packages, software, or tools in your instance when it is launched.
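As a sketch, a user data shell script that installs Java can be passed when launching the instance; the AMI ID, key pair, and package name below are assumptions for illustration.

    import boto3

    ec2 = boto3.client("ec2")

    # The shell script runs once when the instance first boots.
    user_data = """#!/bin/bash
    yum update -y
    yum install -y java-1.8.0-openjdk
    """

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",             # hypothetical key pair
        UserData=user_data,                # boto3 base64-encodes this automatically
    )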
What is AWS Config?
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.
You are setting up the required compute resources in your VPC for your application which have workloads that require high, sequential read and write access to very large data sets on local storage. Which of the following instance type is the most suitable one to use in this scenario?
Storage Optimized Instances
Storage Optimized Instances is the correct answer. Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
What are Memory Optimized Instances for?
Memory Optimized Instances are designed to deliver fast performance for workloads that process large data sets in memory, which is quite different from handling high read and write capacity on local storage (as Storage Optimized)
What are Compute Optimized Instances for?
Compute Optimized Instances are ideal for compute-bound applications that benefit from high-performance processors, such as batch processing workloads and media transcoding.
A startup is building an AI-based face recognition application in AWS, where they store millions of images in an S3 bucket. As the Solutions Architect, you have to ensure that each and every image uploaded to their system is stored without any issues. What is the correct indication that an object was successfully stored when you put objects in Amazon S3?
HTTP 200 result code and MD5 checksum
If you triggered an S3 API call and got HTTP 200 result code and MD5 checksum, then it is considered as a successful upload. The S3 API will return an error code in case the upload is unsuccessful.
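A small sketch of that check with boto3 and a hypothetical bucket: compare the HTTP status code and the returned ETag against a locally computed MD5 (note the ETag equals the payload MD5 only for simple, non-multipart, non-KMS-encrypted PUTs).

    import hashlib
    import boto3

    s3 = boto3.client("s3")

    with open("face-image.jpg", "rb") as f:   # hypothetical image file
        data = f.read()
    expected_md5 = hashlib.md5(data).hexdigest()

    # A 200 response means S3 stored the object; the ETag doubles as an integrity check here.
    response = s3.put_object(Bucket="face-images-bucket", Key="uploads/face-image.jpg", Body=data)

    assert response["ResponseMetadata"]["HTTPStatusCode"] == 200
    assert response["ETag"].strip('"') == expected_md5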
You are working as a Solutions Architect in a well-funded financial startup. The CTO instructed you to launch a cryptocurrency mining server on a Reserved EC2 instance in us-east-1 region’s private subnet which is using IPv6. Due to the financial data that the server contains, the system should be secured to avoid any unauthorized access and to meet the regulatory compliance requirements. In this scenario, which VPC feature allows the EC2 instance to communicate to the Internet but prevents inbound traffic?
Egress-only Internet gateway
An egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances.
Take note that an egress-only Internet gateway is for use with IPv6 traffic only. To enable outbound-only Internet communication over IPv4, use a NAT gateway instead.
You are working as a Cloud Engineer in a leading technology consulting firm which is using a fleet of Windows-based EC2 instances with IPv4 addresses launched in a private subnet. Several software installed in the EC2 instances are required to be updated via the Internet. Which of the following services can provide you with a highly available solution to safely allow the instances to fetch the software patches from the Internet but prevent outside network from initiating a connection?
NAT Gateway
(Egress-Only Internet Gateway is incorrect because this is primarily used for VPCs that use IPv6 to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances, just like what NAT Instance and NAT Gateway do. The scenario explicitly says that the EC2 instances are using IPv4 addresses which is why Egress-only Internet gateway is invalid, even though it can provide the required high availability.)
A web application, which is hosted in your on-premises data center and uses a MySQL database, must be migrated to AWS Cloud. You need to ensure that the network traffic to and from your RDS database instance is encrypted using SSL. For improved security, you have to use the profile credentials specific to your EC2 instance to access your database, instead of a password. Which of the following should you do to meet the above requirement?
- launch a new RDS database instance with Backtrack feature enabled
- Launch the mysql client using the –ssl-ca parameter when connecting to the database
- set up an RDS database and enable the IAM DB Authentication
- configure your RDS database to enable encryption
set up an RDS database and enable the IAM DB Authentication
You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance. Instead, you use an authentication token.
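As an illustration, the EC2 instance (using its instance profile credentials) can request an authentication token and use it in place of a password; the endpoint and database user below are hypothetical.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # The instance profile credentials sign this request; no password is stored anywhere.
    token = rds.generate_db_auth_token(
        DBHostname="appdb.abcdefghijkl.us-east-1.rds.amazonaws.com",
        Port=3306,
        DBUsername="iam_db_user",
    )

    # The token (valid for 15 minutes) is then supplied as the password to the MySQL
    # client or driver over an SSL connection instead of a stored password.
    print(token[:40], "...")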
You are instructed by your manager to create a publicly accessible EC2 instance by using an Elastic IP (EIP) address and also to give him a report on how much it will cost to use that EIP. Which of the following statements is correct regarding the pricing of EIP?
- There is no cost if the instance is stopped and it has only one associated EIP
- there is no cost if the instance is running and it has at least two associated EIP
- there is no cost if the instance is terminated and it has only one associated EIP
- there is no cost if the instance is running and it has only one associated EIP
there is no cost if the instance is running and it has only one associated EIP
An Elastic IP address doesn’t incur charges as long as the following conditions are true:
- -The Elastic IP address is associated with an Amazon EC2 instance.
- -The instance associated with the Elastic IP address is running.
- -The instance has only one Elastic IP address attached to it.
A fast food company is using AWS to host their online ordering system which uses an Auto Scaling group of EC2 instances deployed across multiple Availability Zones with an Application Load Balancer in front. To better handle the incoming traffic from various digital devices, you are planning to implement a new routing system where requests which have a URL path of /api/android are forwarded to one specific target group named "Android-Target-Group". Conversely, requests which have a URL path of /api/ios are forwarded to another separate target group named "iOS-Target-Group". How can you implement this change in AWS?
use path conditions to define rules that forward requests to different target groups based on the URL in the request
You can use path conditions to define rules that forward requests to different target groups based on the URL in the request (also known as path-based routing). This type of routing is the most appropriate solution for this scenario
Note: only the Application Load Balancer supports path-based routing.
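A rough boto3 sketch of the two listener rules; the ARNs are placeholders standing in for the real listener and target group ARNs.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Placeholder ARNs; substitute the real listener and target group ARNs.
    listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/ordering-alb/50dc6c495c0c9188/f2f7dc8efc522ab2"
    android_tg = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/Android-Target-Group/73e2d6bc24d8a067"
    ios_tg = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/iOS-Target-Group/9f8e7d6c5b4a3210"

    # Each rule matches a URL path pattern and forwards to the corresponding target group.
    for priority, path, target_group_arn in [(10, "/api/android*", android_tg),
                                             (20, "/api/ios*", ios_tg)]:
        elbv2.create_rule(
            ListenerArn=listener_arn,
            Priority=priority,
            Conditions=[{"Field": "path-pattern", "Values": [path]}],
            Actions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
        )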
You are working for a global news network where you have set up a CloudFront distribution for your web application. However, you noticed that your application’s origin server is being hit for each request instead of the AWS Edge locations, which serve the cached objects. The issue occurs even for the commonly requested objects. What could be a possible cause of this issue?
- an object is only cached by CloudFront once a successful request has been made; hence, the objects were not requested before, which is why the request is still directed to the origin server
- the Cache-control max-age directive is set to zero
- you did not add an SSL certificate
- the file sizes of the cached objects are too large for CloudFront to handle
In this scenario, the main culprit is that the Cache-Control max-age directive is set to a low value, which is why the request is always directed to your origin server. Hence the correct answer is the option that says: The Cache-Control max-age directive is set to zero.
The Cache-Control and Expires headers control how long objects stay in the cache. The Cache-Control max-age directive lets you specify how long (in seconds) you want an object to remain in the cache before CloudFront gets the object again from the origin server. The minimum expiration time CloudFront supports is 0 seconds for web distributions and 3600 seconds for RTMP distributions.
A company is planning to deploy a High Performance Computing (HPC) cluster in its VPC that requires a scalable, high-performance file system. The storage service must be optimized for efficient workload processing, and the data must be accessible via a fast and scalable file system interface. It should also work natively with Amazon S3 that enables you to easily process your S3 data with a high-performance POSIX interface. Which of the following is the MOST suitable service that you should use for this scenario?
Amazon FSx for Lustre
For compute-intensive and fast processing workloads, like high-performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre provides a file system that's optimized for performance, with input and output stored on Amazon S3.
A game development company operates several virtual reality (VR) and augmented reality (AR) games which use various RESTful web APIs hosted on their on-premises data center. Due to the unprecedented growth of their company, they decided to migrate their system to AWS Cloud to scale out their resources as well to minimize costs. Which of the following should you recommend as the most cost-effective and scalable solution to meet the above requirement?
- set up a micro service architecture with ECS, ECR and Fargate
- Use AWS Lambda and Amazon API Gateway
- Host the APIs in a static S3 web hosting bucket behind a CloudFront web distribution
- use a spot fleet of amazon ec2 instances, each with an Elastic Fabric Adapter for more consistent latency and higher network throughput. Set up an ALB to distribute traffic to the instances.
Use AWS Lambda and Amazon API Gateway
The best possible answer here is to use Lambda and API Gateway because this solution is both scalable and cost-effective. You will only be charged when you use your Lambda function, unlike having an EC2 instance which always runs even though you don’t use it.
You are working as a Senior Solutions Architect in a digital media services startup. Your current project is about a movie streaming app where you are required to launch several EC2 instances on multiple availability zones. Which of the following will configure your load balancer to distribute incoming requests evenly to all EC2 instances across multiple Availability Zones?
Cross-Zone Load Balancing
A Solutions Architect is developing a three-tier cryptocurrency web application for a FinTech startup. The Architect has been instructed to restrict access to the database tier to only accept traffic from the application-tier and deny traffic from other sources. The application-tier is composed of application servers hosted in an Auto Scaling group of EC2 instances. Which of the following options is the MOST suitable solution to implement in this scenario?
- set up the NACL of the database subnet to deny all inbound non-database traffic from the subnet of the application-tier
- set up the security group of the database tier to allow database traffic from a specified list of application server IP addresses.
- set up the security group of the database tier to allow database traffic from the security group of the application servers.
- set up the NACL of the database subnet to allow inbound database traffic from the subnet of the application tier.
set up the security group of the database tier to allow database traffic from the security group of the application servers.
In the scenario, the servers of the application-tier are in an Auto Scaling group which means that the number of EC2 instances could grow or shrink over time. An Auto Scaling group could also cover one or more Availability Zones (AZ) which have their own subnets. Hence, the most suitable solution would be to set up the security group of the database tier to allow database traffic from the security group of the application servers since you can utilize the security group of the application-tier Auto Scaling group as the source for the security group rule in your database tier.
You are a Solutions Architect of a tech company. You are having an issue whenever you try to connect to your newly created EC2 instance using a Remote Desktop connection from your computer. Upon checking, you have verified that the instance has a public IP and the Internet gateway and route tables are in place. What else should you do for you to resolve this issue?
- you should adjust the security group to allow traffic from port 22
- you should restart the EC2 instance since there might be some issue with the instance.
- you should adjust the security group to allow traffic from port 3389
- you should create a new instance since there might be some issue with the instance
Since you are using a Remote Desktop connection to access your EC2 instance, you have to ensure that the Remote Desktop Protocol is allowed in the security group. By default, the server listens on TCP port 3389 and UDP port 3389.
(the option with port 22 is incorrect as port 22 is used for SSH connections and not RDP)
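Adding the RDP rule could look like the following boto3 sketch; the security group ID and source CIDR are hypothetical.

    import boto3

    ec2 = boto3.client("ec2")

    # Opens RDP (TCP 3389) from a single admin workstation rather than 0.0.0.0/0.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 3389,
                "ToPort": 3389,
                "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "Admin workstation"}],
            }
        ],
    )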
A large Philippine-based Business Process Outsourcing company is building a two-tier web application in their VPC to serve dynamic transaction-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database but for the web tier, they are still deciding what service they will use. What AWS services should you leverage to build an elastic and scalable web tier?
- Amazon RDS with Multi-AZ and Auto Scaling
- ELB, EC2 and Auto Scaling
- EC2, DynamoDB and S3
- ELB, RDS with Multi-AZ and S3
Amazon RDS is a suitable database service for online transaction processing (OLTP) applications. However, the question asks for a list of AWS services for the web tier and not the database tier. Also, when it comes to services providing scalability and elasticity for your web tier, Auto Scaling and Elastic Load Balancer should immediately come into mind. Therefore, Elastic Load Balancing, Amazon EC2, and Auto Scaling is the correct answer.
An application is using a Lambda function to process complex financial data which runs for about 10 to 15 minutes. You noticed that there are a few terminated invocations throughout the day, which caused data discrepancy in the application. Which of the following is the most likely cause of this issue?
- the failed Lambda invocations contain a “ServiceException” error which means that the AWS Lambda service encountered an internal error
- the Lambda function contains recursive code and has been running for over 15 minutes
- the failed Lambda functions have been running for over 15 minutes and reached the maximum execution time
- the concurrent execution limit has been reached
the failed Lambda functions have been running for over 15 minutes and reached the maximum execution time
=> You pay for the AWS resources that are used to run your Lambda function. To prevent your Lambda function from running indefinitely, you specify a timeout. When the specified timeout is reached, AWS Lambda terminates execution of your Lambda function. It is recommended that you set this value based on your expected execution time. The default timeout is 3 seconds and the maximum execution duration per request in AWS Lambda is 900 seconds, which is equivalent to 15 minutes.
Hence, the correct answer is the option that says: The failed Lambda functions have been running for over 15 minutes and reached the maximum execution time.
You are working for a computer animation film studio that has a web application running on an Amazon EC2 instance. It uploads 5 GB video objects to an Amazon S3 bucket. Video uploads are taking longer than expected, which impacts the performance of your application. Which method will help improve the performance of your application?
Use S3 Multipart upload API
The main issue is the slow upload time of the video objects to Amazon S3. To address this issue, you can use Multipart upload in S3 to improve the throughput. It allows you to upload parts of your object in parallel thus, decreasing the time it takes to upload big objects. Each part is a contiguous portion of the object’s data.
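One way to use multipart upload without calling the low-level part APIs directly is boto3's managed transfer configuration; the file, bucket, and part sizes below are assumptions for illustration.

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Parts of ~100 MB are uploaded in parallel threads, which is how the multipart
    # upload API improves throughput for a 5 GB video object.
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=100 * 1024 * 1024,
        max_concurrency=10,
    )

    s3.upload_file(
        "render-final.mp4",            # hypothetical local file
        "video-uploads-bucket",        # hypothetical bucket
        "videos/render-final.mp4",
        Config=config,
    )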
A global medical research company has a molecular imaging system which provides each client with frequently updated images of what is happening inside the human body at the molecular and cellular level. The system is hosted in AWS and the images are hosted in an S3 bucket behind a CloudFront web distribution. There was a new batch of updated images that were uploaded in S3, however, the users were reporting that they were still seeing the old content. You need to control which image will be returned by the system even when the user has another version cached either locally or behind a corporate caching proxy. Which of the following is the most suitable solution to solve this issue?
- invalidate the files in your CloudFront web distribution
- add Cache-Control no-cache, no-store, or private directive in the S3 bucket
- add a separate cache behavior path for the content and configure a custom object caching with a minimum TTL of 0
- use versioned objects
Use versioned objects.
To control the versions of files that are served from your distribution, you can either invalidate files or give them versioned file names. If you want to update your files frequently, AWS recommends that you primarily use file versioning:
- Versioning enables you to control which file a request returns even when the user has a version cached either locally or behind a corporate caching proxy. If you invalidate the file, the user might continue to see the old version until it expires from those caches.
- CloudFront access logs include the names of your files, so versioning makes it easier to analyze the results of file changes.
- Versioning provides a way to serve different versions of files to different users.
- Versioning simplifies rolling forward and back between file revisions.
- Versioning is less expensive. You still have to pay for CloudFront to transfer new versions of your files to edge locations, but you don’t have to pay for invalidating files
(Invalidating the files in your CloudFront web distribution is incorrect because even though using invalidation will solve this issue, this solution is more expensive as compared to using versioned objects.)
An online shopping platform has been deployed to AWS using Elastic Beanstalk. They simply uploaded their Node.js application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Since the entire deployment process is automated, the DevOps team is not sure where to get the application log files of their shopping platform. In Elastic Beanstalk, where does it store the application files and server log files?
The correct answer is the option that says: Application files are stored in S3. The server log files can also optionally be stored in S3 or in CloudWatch Logs. AWS Elastic Beanstalk stores your application files and optionally, server log files in Amazon S3.
You are planning to launch an application that tracks the GPS coordinates of delivery trucks in your country. The coordinates are transmitted from each delivery truck every five seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. The aggregated data will be analyzed in a separate reporting application.Which AWS service should you use for this scenario?
Amazon Kinesis
“You would like to share some documents with public users accessing an S3 bucket over the Internet. What are two valid methods of granting public read permissions so you can share the documents? (choose 2)
- Grant public read access to the objects when uploading
- Share the documents using CloudFront and a static website
- Use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket granting read access to public anonymous users
- Grant public read on all objects using the S3 bucket ACL
- Share the documents using a bastion host in a public subnet”
Grant public read access to the objects when uploading + Use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket granting read access to public anonymous users
“Access policies define access to resources and can be associated with resources (buckets and objects) and users
You can use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket. Bucket policies can be used to grant permissions to objects”
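A sketch of the kind of bucket policy the Policy Generator produces, applied with boto3 to a hypothetical bucket:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Grants anonymous users read access to every object in the assumed bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::shared-documents-bucket/*",
            }
        ],
    }

    s3.put_bucket_policy(Bucket="shared-documents-bucket", Policy=json.dumps(policy))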
“A Solutions Architect is designing an authentication solution using the AWS STS that will provide temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users). What supported sources are available to the Architect for users? (choose 2)
- OpenID Connect
- EC2 instance
- Cognito identity pool
- Another AWS account
- A local user on a user’s PC”
OpenID Connect + Another AWS account
“The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users)
Federation can come from three sources:
- Federation (typically AD)
- Federation with Mobile Apps (e.g. Facebook, Amazon, Google or other OpenID providers)
- Cross account access (another AWS account)
The question has asked for supported sources for users. Cognito user pools contain users, but identity pools do not”
“You are building an application that will collect information about user behavior. The application will rapidly ingest large amounts of dynamic data and requires very low latency. The database must be scalable without incurring downtime. Which database would you recommend for this scenario?
- RDS with MySQL
- DynamoDB
- RedShift
- RDS with Microsoft SQL”
DynamoDB
- “Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability
- Push button scaling means that you can scale the DB at any time without incurring downtime
- DynamoDB provides low read and write latency”
“An application tier of a multi-tier web application currently hosts two web services on the same set of instances. The web services each listen for traffic on different ports. Which AWS service should a Solutions Architect use to route traffic to the service based on the incoming request path?
- Application Load Balancer (ALB)
- Amazon Route 53
- Classic Load Balancer (CLB)
- Amazon CloudFront”
Application Load Balancer (ALB)
“An Application Load Balancer is a type of Elastic Load Balancer that can use layer 7 (HTTP/HTTPS) protocol data to make forwarding decisions. An ALB supports both path-based (e.g. /images or /orders) and host-based routing (e.g. example.com)”
“An application runs on two EC2 instances in private subnets split between two AZs. The application needs to connect to a CRM SaaS application running on the Internet. The vendor of the SaaS application restricts authentication to a whitelist of source IP addresses and only 2 IP addresses can be configured per customer. What is the most appropriate and cost-effective solution to enable authentication to the SaaS application?”
- “Use a Network Load Balancer and configure a static IP for each AZ
- Use multiple Internet-facing Application Load Balancers with Elastic IP addresses
- Configure a NAT Gateway for each AZ with an Elastic IP address
- Configure redundant Internet Gateways and update the routing tables for each subnet”
Configure a NAT Gateway for each AZ with an Elastic IP address
“A NAT Gateway is created in a specific AZ and can have a single Elastic IP address associated with it. NAT Gateways are deployed in public subnets and the route tables of the private subnets where the EC2 instances reside are configured to forward Internet-bound traffic to the NAT Gateway. You do pay for using a NAT Gateway based on hourly usage and data processing, however this is still a cost-effective solution”