Missed Questions Flashcards
An online cryptocurrency exchange platform is hosted in AWS and uses an ECS cluster and an RDS instance in a Multi-AZ Deployments configuration. The application heavily uses the RDS instance to process complex read and write database operations. To maintain the reliability, availability, and performance of your systems, you have to closely monitor how the different processes or threads on a DB instance use the CPU, including the percentage of the CPU bandwidth and total memory consumed by each process.
Which of the following is the most suitable solution to properly monitor your database?
- Create a script that collects and publishes custom metrics to CloudWatch, which tracks the real-time CPU Utilization of the RDS instance and then set up a custom CloudWatch dashboard to view the metrics
- Enable Enhanced Monitoring in RDS
- Check the CPU% and MEM% metrics which are readily available in the Amazon RDS console that shows the percentage of the CPU bandwidth and total memory consumed by each database process of your RDS instance
- Use Amazon CloudWatch to monitor the CPU Utilization of your database
- Create a script that collects and publishes custom metrics to CloudWatch, which tracks the real-time CPU Utilization of the RDS instance and then set up a custom CloudWatch dashboard to view the metrics (X)
- Enable Enhanced Monitoring in RDS
- Check the CPU% and MEM% metrics which are readily available in the Amazon RDS console that shows the percentage of the CPU bandwidth and total memory consumed by each database process of your RDS instance
- Use Amazon CloudWatch to monitor the CPU Utilization of your database
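Enhanced Monitoring is the right choice here because it reports OS-level metrics, including per-process CPU and memory. A minimal boto3 sketch of enabling it on an existing instance (the instance identifier and monitoring role ARN are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Enable Enhanced Monitoring with 60-second granularity.
# The IAM role must allow RDS to publish to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="exchange-db",  # placeholder
    MonitoringInterval=60,               # 0 would disable Enhanced Monitoring
    MonitoringRoleArn="arn:aws:iam::111122223333:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```

Standard CloudWatch CPU metrics come from the hypervisor, so they cannot break usage down by process; Enhanced Monitoring collects them from an agent on the instance itself.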
You are an AWS Network Engineer working for a utility provider, managing a monolithic application on an EC2 instance launched from a Windows AMI. The legacy application must keep the same private IP address and MAC address in order to work. You want to implement a cost-effective and highly available architecture for the application by launching a standby EC2 instance that is an exact replica of the Windows server. If the primary instance terminates, you can attach the ENI to the standby instance, which allows traffic flow to resume within a few seconds.
When it comes to the ENI attachment to an EC2 instance, what does ‘warm attach’ refer to?
- Attaching an ENI to an instance when it is stopped.
- Attaching an ENI to an instance when it is idle.
- Attaching an ENI to an instance during the launch process.
- Attaching an ENI to an instance when it is running.
- Attaching an ENI to an instance when it is stopped.
- Attaching an ENI to an instance when it is idle.
- Attaching an ENI to an instance during the launch process. (X)
- Attaching an ENI to an instance when it is running.
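For reference, a "warm attach" pairs an existing ENI with an instance that is already launched but stopped (a "hot attach" targets a running instance, a "cold attach" happens at launch). A hedged boto3 sketch of the failover step described in the scenario (the resource IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Attach the ENI that carries the fixed private IP and MAC address
# to the standby instance; eth0 is reserved, so use device index 1.
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0123456789abcdef0",  # placeholder
    InstanceId="i-0123456789abcdef0",            # standby instance (placeholder)
    DeviceIndex=1,
)
```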
You are a Solutions Architect working for a large multinational investment bank. They have a web application that requires a minimum of 4 EC2 instances to run to ensure that it can cater to its users across the globe. You are instructed to ensure fault tolerance of this system.
Which of the following is the best option?
- Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an Application Load Balancer.
- Deploy an Auto Scaling group with 1 instance in each of 4 Availability Zones behind an Application Load Balancer.
- Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an Application Load Balancer.
- Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Application Load Balancer.
- Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an Application Load Balancer.
- Deploy an Auto Scaling group with 1 instance in each of 4 Availability Zones behind an Application Load Balancer.
- Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an Application Load Balancer. (X)
- Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Application Load Balancer.
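With 2 instances in each of 3 Availability Zones, losing any one AZ still leaves 4 running instances, which meets the minimum. A minimal boto3 sketch (the group name, launch template, subnets, and target group ARN are placeholders):

```python
import boto3

asg = boto3.client("autoscaling")

# Six instances spread evenly across three AZ-specific subnets:
# an entire AZ can fail and four instances remain.
asg.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # placeholder
    LaunchTemplate={"LaunchTemplateName": "web-lt", "Version": "$Latest"},
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb,subnet-ccc",  # one subnet per AZ
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef"],
)
```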
You have a data analytics application that updates a real-time foreign exchange dashboard and a separate application that archives data to Amazon Redshift. Both applications consume data from the same stream concurrently and independently by using Amazon Kinesis Data Streams. However, you noticed many occurrences where a shard iterator expires unexpectedly. Upon checking, you found out that the DynamoDB table used by Kinesis does not have enough capacity to store the lease data.
Which of the following is the most suitable solution to rectify this issue?
- Upgrade the storage capacity of the DynamoDB table.
- Enable In-Memory Acceleration with DynamoDB Accelerator (DAX).
- Increase the write capacity assigned to the shard table.
- Use Amazon Kinesis Data Analytics to properly support the data analytics application instead of Kinesis Data Streams.
- Upgrade the storage capacity of the DynamoDB table.
- Enable In-Memory Acceleration with DynamoDB Accelerator (DAX).
- Increase the write capacity assigned to the shard table.
- Use Amazon Kinesis Data Analytics to properly support the data analytics application instead of Kinesis Data Streams. (X)
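The KCL keeps its leases and checkpoints in a DynamoDB table named after the consumer application; when writes to that table are throttled, lease renewals fail and shard iterators expire. A hedged boto3 sketch of the fix (the table name and throughput values are placeholders):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Raise the provisioned write capacity of the KCL lease/checkpoint table
# so lease renewals and checkpoints stop being throttled.
dynamodb.update_table(
    TableName="forex-dashboard-app",  # KCL application name (placeholder)
    ProvisionedThroughput={
        "ReadCapacityUnits": 10,   # placeholder values
        "WriteCapacityUnits": 50,
    },
)
```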
You are leading a software development team that uses serverless computing with AWS Lambda to build and run applications without having to set up or manage servers. You have a Lambda function that connects to MongoDB Atlas, a popular Database as a Service (DBaaS) platform, and also uses a third-party API to fetch certain data for your application. You instructed one of your junior developers to create the environment variables for the MongoDB database hostname, username, and password, as well as the API credentials that will be used by the Lambda function for the DEV, SIT, UAT, and PROD environments.
Considering that the Lambda function is storing sensitive database and API credentials, how can you secure this information to prevent other developers in your team, or anyone, from seeing these credentials in plain text? Select the best option that provides the maximum security.
- Enable SSL encryption that leverages on AWS CloudHSM to store and encrypt the sensitive information
- Create a new KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information
- AWS Lambda does not provide encryption for the environment variables. Deploy your code to an EC2 instance instead
- There is no need to do anything because, by default, AWS Lambda already encrypts the environment variables using the AWS Key Management Service
- Enable SSL encryption that leverages on AWS CloudHSM to store and encrypt the sensitive information
- Create a new KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information
- AWS Lambda does not provide encryption for the environment variables. Deploy your code to an EC2 instance instead
- There is no need to do anything because, by default, AWS Lambda already encrypts the environment variables using the AWS Key Management Service (X)
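Lambda does encrypt environment variables at rest by default, but encryption helpers with a customer managed KMS key additionally encrypt selected values client-side before deployment, so they never appear in plain text in the console. Assigning the key can be scripted; a hedged boto3 sketch (the function name, key ARN, and variable are placeholders):

```python
import boto3

lam = boto3.client("lambda")

# Use a customer managed KMS key for the function's environment variables,
# then restrict kms:Decrypt on that key to the function's execution role.
lam.update_function_configuration(
    FunctionName="mongo-sync",  # placeholder
    KMSKeyArn="arn:aws:kms:us-east-1:111122223333:key/1234abcd-ef00",  # placeholder
    Environment={"Variables": {"DB_HOST": "cluster0.example.mongodb.net"}},
)
```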
You recently launched a new FTP server using an On-Demand EC2 instance in a newly created VPC with default settings. The server should not be publicly accessible; it must be reachable only from your IP address 175.45.116.100 and nowhere else.
Which of the following is the most suitable way to implement this requirement?
- Create a new inbound rule in the security group of the EC2 instance with the following details:
- Protocol: UDP
- Port Range: 20 - 21
- Source: 175.45.116.100/32
- Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
- Protocol: TCP
- Port Range: 20 - 21
- Source: 175.45.116.100/0
- Allow/Deny: ALLOW
- Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
- Protocol: UDP
- Port Range: 20 - 21
- Source: 175.45.116.100/0
- Allow/Deny: ALLOW
- Create a new inbound rule in the security group of the EC2 instance with the following details:
- Protocol: TCP
- Port Range: 20 - 21
- Source: 175.45.116.100/32
- Create a new inbound rule in the security group of the EC2 instance with the following details:
- Protocol: UDP
- Port Range: 20 - 21
- Source: 175.45.116.100/32
- Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details: (X)
- Protocol: TCP
- Port Range: 20 - 21
- Source: 175.45.116.100/0
- Allow/Deny: ALLOW
- Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
- Protocol: UDP
- Port Range: 20 - 21
- Source: 175.45.116.100/0
- Allow/Deny: ALLOW
- Create a new inbound rule in the security group of the EC2 instance with the following details:
- Protocol: TCP
- Port Range: 20 - 21
- Source: 175.45.116.100/32
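FTP runs over TCP, which rules out the UDP options, and the /32 suffix limits the rule to the single source address (a /0 suffix is not a valid way to express one host). A hedged boto3 equivalent of the correct security group rule (the group ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# FTP data (20) and control (21) ports over TCP, from exactly one address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 20,
        "ToPort": 21,
        "IpRanges": [{"CidrIp": "175.45.116.100/32"}],
    }],
)
```

Security groups are deny-by-default and stateful, so no extra outbound or deny rules are needed.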
A media company has two VPCs, VPC-1 and VPC-2, with a peering connection between them. VPC-1 contains only private subnets while VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and a virtual interface to connect their on-premises network with VPC-1.
Which of the following options increase the fault tolerance of the connection to VPC-1? (Select TWO.)
- Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
- Establish another AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1.
- Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
- Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.
- Establish a hardware VPN over the Internet between VPC-1 and the on-premises network.
- Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
- Establish another AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1.
- Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
- Establish a hardware VPN over the Internet between VPC-2 and the on-premises network. (X)
- Establish a hardware VPN over the Internet between VPC-1 and the on-premises network.
You work for a leading university as an AWS Infrastructure Engineer and also as a professor to aspiring AWS architects. As a way to familiarize your students with AWS, you gave them a project to host their applications on an EC2 instance. One of your students created an instance to host their online enrollment system project but is having a hard time connecting to the newly created EC2 instance. Your students have explored all of the AWS troubleshooting guides and narrowed it down to a login issue.
Which of the following can you use to log into an EC2 instance?
- Custom EC2 password
- Access Keys
- EC2 Connection Strings
- Key Pairs
- Custom EC2 password
- Access Keys (X)
- EC2 Connection Strings
- Key Pairs
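EC2 has no built-in passwords; you authenticate with the private half of a key pair (SSH for Linux, Administrator password decryption for Windows). A minimal boto3 sketch of creating one (the key name and file path are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a key pair; AWS keeps the public key, we save the private key.
resp = ec2.create_key_pair(KeyName="enrollment-system-key")  # placeholder
with open("enrollment-system-key.pem", "w") as f:
    f.write(resp["KeyMaterial"])
```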
An online trading platform with thousands of clients across the globe is hosted in AWS. To reduce latency, you have to direct user traffic to the application endpoint nearest the client. The traffic should be routed to the closest edge location via an Anycast static IP address. AWS Shield should also be integrated into the solution for DDoS protection.
Which of the following is the MOST suitable service that the Solutions Architect should use to satisfy the above requirements?
- AWS PrivateLink
- AWS WAF
- Amazon CloudFront
- AWS Global Accelerator
- AWS PrivateLink
- AWS WAF
- Amazon CloudFront (X)
- AWS Global Accelerator
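Global Accelerator is the only listed service that provides static Anycast IP addresses (CloudFront uses dynamic edge IPs), and it comes with AWS Shield protection. A hedged boto3 sketch (the Global Accelerator control plane lives in us-west-2; the name is a placeholder):

```python
import boto3

# The Global Accelerator API is served from us-west-2 regardless of workload region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

resp = ga.create_accelerator(
    Name="trading-platform",  # placeholder
    IpAddressType="IPV4",
    Enabled=True,
)
print(resp["Accelerator"]["IpSets"])  # the two static Anycast IP addresses
```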
You are working as an IT Consultant for a large investment bank that generates large financial datasets with millions of rows. The data must be stored in a columnar fashion to reduce the number of disk I/O requests and reduce the amount of data needed to load from the disk. The bank has an existing third-party business intelligence application which will connect to the storage service and then generate daily and monthly financial reports for its clients around the globe.
In this scenario, which is the best storage service to use to meet the requirement?
- Amazon RDS
- DynamoDB
- Amazon Aurora
- Amazon Redshift
- Amazon RDS
- DynamoDB (X)
- Amazon Aurora
- Amazon Redshift
A data analytics company is setting up an innovative checkout-free grocery store. Their Solutions Architect developed a real-time monitoring application that uses smart sensors to detect the items that customers take from the grocery's refrigerators and shelves and then automatically deducts them from the customers' accounts. The company wants to analyze which items are frequently bought and store the results in S3 for durable storage to determine the purchase behavior of its customers.
What service must be used to easily capture, transform, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk?
- Amazon SQS
- Amazon Kinesis
- Amazon Kinesis Data Firehose
- Amazon Redshift
- Amazon SQS
- Amazon Kinesis (X)
- Amazon Kinesis Data Firehose
- Amazon Redshift
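Kinesis Data Firehose is purpose-built for capture-transform-load into S3, the Amazon Elasticsearch Service, and Splunk, with no consumer application to manage. A hedged boto3 sketch of an S3 delivery stream (the names and ARNs are placeholders):

```python
import boto3

firehose = boto3.client("firehose")

# Direct PUT stream that buffers incoming sensor records and delivers them to S3.
firehose.create_delivery_stream(
    DeliveryStreamName="checkout-events",  # placeholder
    DeliveryStreamType="DirectPut",
    S3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-s3-role",  # placeholder
        "BucketARN": "arn:aws:s3:::grocery-analytics",                 # placeholder
    },
)
```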
You are designing a banking portal which uses Amazon ElastiCache for Redis as its distributed session management component. Since the other Cloud Engineers in your department have access to your ElastiCache cluster, you have to secure the session data in the portal by requiring them to enter a password before they are granted permission to execute Redis commands.
As the Solutions Architect, which of the following should you do to meet the above requirement?
- Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled
- Set up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token before they can access the ElastiCache cluster
- Enable the in-transit encryption for Redis replication groups
- Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter
- Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled
- Set up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token before they can access the ElastiCache cluster (X)
- Enable the in-transit encryption for Redis replication groups
- Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter
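Redis AUTH requires a token on every connection, and it can only be enabled together with in-transit encryption at creation time. A hedged boto3 sketch of the flags named in the correct answer (the identifiers, node type, and token are placeholders):

```python
import boto3

elasticache = boto3.client("elasticache")

# AUTH requires in-transit encryption to be enabled when the group is created.
elasticache.create_replication_group(
    ReplicationGroupId="session-store",  # placeholder
    ReplicationGroupDescription="Banking portal session management",
    Engine="redis",
    CacheNodeType="cache.r5.large",      # placeholder
    NumCacheClusters=2,
    TransitEncryptionEnabled=True,       # --transit-encryption-enabled
    AuthToken="use-a-long-random-token-here",  # --auth-token (placeholder)
)
```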
A Fortune 500 company which has numerous offices and customers around the globe has hired you as their Principal Architect. You have staff and customers who upload gigabytes to terabytes of data to a centralized S3 bucket from regional data centers, across continents, on a regular basis. At the end of the financial year, thousands of files are uploaded to the central S3 bucket, which is in the ap-southeast-2 (Sydney) region, and a lot of employees are starting to complain about slow upload times. You were instructed by the CTO to resolve this issue as soon as possible to avoid any delays in processing their global end of financial year (EOFY) reports.
Which feature in Amazon S3 enables fast, easy, and secure transfer of your files over long distances between your client and your Amazon S3 bucket?
- AWS Global Accelerator
- Multipart Upload
- Cross-Region Replication
- Transfer Acceleration
- AWS Global Accelerator (X)
- Multipart Upload
- Cross-Region Replication
- Transfer Acceleration
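Transfer Acceleration routes uploads through the nearest CloudFront edge location and onto the AWS backbone. A hedged boto3 sketch of enabling it and uploading through the accelerate endpoint (the bucket and file names are placeholders):

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="eofy-central",  # placeholder bucket in ap-southeast-2
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload via the bucket's accelerate endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("report.csv", "eofy-central", "reports/report.csv")
```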
An organization needs to control access to several S3 buckets. They plan to use a gateway endpoint to allow access only to trusted buckets.
Which of the following could help you achieve this requirement?
- Generate an endpoint policy for trusted VPCs.
- Generate a bucket policy for trusted S3 buckets.
- Generate an endpoint policy for trusted S3 buckets.
- Generate a bucket policy for trusted VPCs.
- Generate an endpoint policy for trusted VPCs.
- Generate a bucket policy for trusted S3 buckets. (X)
- Generate an endpoint policy for trusted S3 buckets.
- Generate a bucket policy for trusted VPCs.
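Access through a gateway endpoint is governed by an endpoint policy attached to the endpoint itself. A hedged boto3 sketch that limits the endpoint to one trusted bucket (the IDs, region, and bucket name are placeholders):

```python
import boto3, json

ec2 = boto3.client("ec2")

# Endpoint policy: only the trusted bucket is reachable through this endpoint.
policy = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::trusted-bucket",
                     "arn:aws:s3:::trusted-bucket/*"],  # placeholder
    }]
}

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",            # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
    PolicyDocument=json.dumps(policy),
)
```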
A web application is using CloudFront to distribute images, videos, and other static content stored in an S3 bucket to its users around the world. The company has recently introduced member-only access to some of its high-quality media files. There is a requirement to provide access to multiple private media files only to their paying subscribers without having to change the current URLs.
Which of the following is the most suitable solution that you should implement to satisfy this requirement?
- Create a Signed URL with a custom policy which only allows the members to see the private files
- Configure your CloudFront distribution to use Field-Level Encryption to protect your private data and only allow access to members
- Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy which will automatically match the user request. This will allow access to the private content if the request is a paying member and deny it if it is not a member
- Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them
- Create a Signed URL with a custom policy which only allows the members to see the private files (X)
- Configure your CloudFront distribution to use Field-Level Encryption to protect your private data and only allow access to members
- Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy which will automatically match the user request. This will allow access to the private content if the request is a paying member and deny it if it is not a member
- Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them
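Signed cookies fit because they cover many files without altering URLs; they are just three Set-Cookie values: a base64-encoded policy, its RSA-SHA1 signature, and your key pair ID. A hedged sketch using the third-party cryptography package (the domain, expiry, key file, and key-pair ID are placeholders):

```python
import base64, json
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def cf_b64(data: bytes) -> str:
    # CloudFront's URL-safe base64 variant.
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

# Custom policy: members may fetch anything under /members/ until the expiry.
policy = json.dumps({"Statement": [{
    "Resource": "https://d111111abcdef8.cloudfront.net/members/*",  # placeholder
    "Condition": {"DateLessThan": {"AWS:EpochTime": 1767225600}},   # placeholder
}]}, separators=(",", ":"))

with open("cf_private_key.pem", "rb") as f:  # placeholder key file
    key = serialization.load_pem_private_key(f.read(), password=None)

cookies = {  # send these as Set-Cookie headers to paying members only
    "CloudFront-Policy": cf_b64(policy.encode()),
    "CloudFront-Signature": cf_b64(key.sign(policy.encode(),
                                            padding.PKCS1v15(), hashes.SHA1())),
    "CloudFront-Key-Pair-Id": "APKAEXAMPLE",  # placeholder
}
```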
You are working for a large telecommunications company where you need to run analytics against all combined log files from your Application Load Balancer as part of the regulatory requirements.
Which AWS services can be used together to collect logs and then easily perform log analysis?
- Amazon EC2 with EBS volumes for storing and analyzing the log files.
- Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files using a custom-built application.
- Amazon DynamoDB for storing and EC2 for analyzing the logs.
- Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.
- Amazon EC2 with EBS volumes for storing and analyzing the log files.
- Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files using a custom-built application. (X)
- Amazon DynamoDB for storing and EC2 for analyzing the logs.
- Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.
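ALB access logging is a load balancer attribute that delivers compressed log files to S3, where EMR can run the analytics. A hedged boto3 sketch of enabling it (the ARN and bucket are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Ship ALB access logs to S3; the bucket policy must allow ELB log delivery.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                    "loadbalancer/app/telco-alb/0123456789abcdef",  # placeholder
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "telco-elb-logs"},  # placeholder
    ],
)
```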
An application hosted in EC2 consumes messages from an SQS queue and is integrated with SNS to send out an email to you once the process is complete. The Operations team received 5 orders but after a few hours, they saw 20 email notifications in their inbox.
Which of the following could be the possible culprit for this issue?
- The web application does not have permission to consume messages in the SQS queue
- The web application is set to short polling so some messages are not being picked up
- The web application is set for long polling so the messages are being sent twice
- The web application is not deleting the messages in the SQS queue after it has processed them
- The web application does not have permission to consume messages in the SQS queue
- The web application is set to short polling so some messages are not being picked up
- The web application is set for long polling so the messages are being sent twice (X)
- The web application is not deleting the messages in the SQS queue after it has processed them
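When a consumed message is not deleted, it becomes visible again after the visibility timeout and gets processed (and emailed about) repeatedly. The fix is the standard receive-process-delete loop; a minimal sketch (the queue URL is a placeholder and process_order is a hypothetical handler):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"  # placeholder

while True:
    resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process_order(msg["Body"])  # hypothetical order handler
        # Deleting the message stops it from reappearing after the
        # visibility timeout and triggering duplicate SNS emails.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```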
You currently have an Augmented Reality (AR) mobile game which has a serverless backend. It uses a DynamoDB table, created using the AWS CLI, to store all the user data and information gathered from the players, and a Lambda function to pull the data from DynamoDB. The game is used by millions of users each day to read and store data.
How would you design the application to improve its overall performance and make it more scalable while keeping the costs low? (Select TWO.)
- Since Auto Scaling is enabled by default, the provisioned read and write capacity will adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from milliseconds to microseconds.
- Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single-sign on. Manually set the provisioned read and write capacity to a higher RCU and WCU.
- Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on client device using ElastiCache.
- Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity.
- Use API Gateway in conjunction with Lambda and turn on the caching on frequently accessed data and enable DynamoDB global replication.
- Since Auto Scaling is enabled by default, the provisioned read and write capacity will adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from milliseconds to microseconds.
- Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single-sign on. Manually set the provisioned read and write capacity to a higher RCU and WCU. (X)
- Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on client device using ElastiCache.
- Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity. (-)
- Use API Gateway in conjunction with Lambda and turn on the caching on frequently accessed data and enable DynamoDB global replication. (+)
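Auto scaling is not enabled by default on tables created through the AWS CLI; it has to be registered through Application Auto Scaling. A hedged boto3 sketch for the read dimension (the table name and limits are placeholders):

```python
import boto3

aas = boto3.client("application-autoscaling")

# Let DynamoDB scale read capacity between 5 and 500 RCUs; repeat for
# WriteCapacityUnits, then attach a target-tracking scaling policy to each.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/ar-game-users",  # placeholder
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)
```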
A financial application is composed of an Auto Scaling group of EC2 instances, an Application Load Balancer, and a MySQL RDS instance in a Multi-AZ Deployments configuration. To protect the confidential data of your customers, you have to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances via an authentication token.
As the Solutions Architect of the company, which of the following should you do to meet the above requirement?
- Using a combination of IAM and STS to restrict access to your RDS instance via a temporary token
- Configuring SSL in your application to encrypt the database connection to RDS
- Creating an IAM Role and assigning it to your EC2 instances which will grant exclusive access to your RDS instance
- Enable the IAM DB Authentication
- Using a combination of IAM and STS to restrict access to your RDS instance via a temporary token (X)
- Configuring SSL in your application to encrypt the database connection to RDS
- Creating an IAM Role and assigning it to your EC2 instances which will grant exclusive access to your RDS instance
- Enable the IAM DB Authentication
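With IAM DB Authentication, the instance-profile credentials are exchanged for a short-lived token that replaces the database password. A hedged boto3 sketch of generating one (the hostname and user are placeholders):

```python
import boto3

rds = boto3.client("rds")

# The token is valid for 15 minutes and is passed as the MySQL password;
# the connection must use SSL.
token = rds.generate_db_auth_token(
    DBHostname="findb.abcdefghij.us-east-1.rds.amazonaws.com",  # placeholder
    Port=3306,
    DBUsername="app_user",  # placeholder, created with the AWSAuthenticationPlugin
)
```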
A newly hired Solutions Architect is assigned to manage a set of CloudFormation templates that are used in the company’s cloud architecture in AWS. The Architect accessed the templates and analyzed the IAM policy configured for an S3 bucket.
Which of the following statements are true about this IAM policy? (Select THREE.)
- An IAM user with this IAM policy is allowed to read objects in the ‘tutorialsdojo’ S3 bucket but not allowed to list the objects in the bucket
- An IAM user with this IAM policy is allowed to write objects into the ‘tutorialsdojo’ S3 bucket
- An IAM user with this IAM policy is allowed to read objects from the ‘tutorialsdojo’ S3 bucket
- An IAM user with this IAM policy is allowed to change access rights for the ‘tutorialsdojo’ S3 bucket
- An IAM user with this IAM policy is allowed to read and delete objects from the ‘tutorialsdojo’ S3 bucket
- An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account
- An IAM user with this IAM policy is allowed to read objects in the ‘tutorialsdojo’ S3 bucket but not allowed to list the objects in the bucket
- An IAM user with this IAM policy is allowed to write objects into the ‘tutorialsdojo’ S3 bucket (+)
- An IAM user with this IAM policy is allowed to read objects from the ‘tutorialsdojo’ S3 bucket (+)
- An IAM user with this IAM policy is allowed to change access rights for the ‘tutorialsdojo’ S3 bucket
- An IAM user with this IAM policy is allowed to read and delete objects from the ‘tutorialsdojo’ S3 bucket (X)
- An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account (-)
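The policy document this card refers to is not reproduced in the deck. Purely as a hypothetical reconstruction consistent with the marked answers (object reads allowed account-wide, writes allowed only into 'tutorialsdojo', no delete or ACL permissions), a policy like this sketch would fit:

```python
import json

# Hypothetical policy matching the marked answers: reads on every bucket
# in the account, writes only into the tutorialsdojo bucket, and no
# DeleteObject or PutObjectAcl permissions anywhere.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:Get*", "s3:List*"], "Resource": "*"},
        {"Effect": "Allow", "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::tutorialsdojo/*"},
    ],
}
print(json.dumps(policy, indent=2))
```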
A data analytics company, which uses machine learning to collect and analyze consumer data, is using a Redshift cluster as their data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage.
Which of the following is the best approach to meet this requirement?
- Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage.
- Use Automated snapshots of your Redshift Cluster.
- Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region.
- Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.
- Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage.
- Use Automated snapshots of your Redshift Cluster.
- Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region. (X)
- Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.
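Cross-Region Snapshot Copy is the only listed option whose backups survive a regional outage; automated snapshots alone stay in the cluster's own region. A hedged boto3 sketch (the identifiers and regions are placeholders):

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")  # placeholder home region

# Automatically copy every new snapshot of the cluster to another region.
redshift.enable_snapshot_copy(
    ClusterIdentifier="ml-warehouse",  # placeholder
    DestinationRegion="us-west-2",     # placeholder DR region
    RetentionPeriod=7,                 # days to keep copies in the DR region
)
```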
Both historical records and frequently accessed data are stored on an on-premises storage system. The amount of current data is growing at an exponential rate. As the storage’s capacity is nearing its limit, the company’s Solutions Architect has decided to move the historical records to AWS to free up space for the active data.
Which of the following architectures deliver the best solution in terms of cost and operational management?
- Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
- Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
- Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
- Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
- Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
- Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
- Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data. (X)
- Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
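DataSync can write directly into a storage class, which avoids both the 30-day detour through S3 Standard and the lifecycle-transition costs. A hedged boto3 sketch of the S3 location (the ARNs are placeholders):

```python
import boto3

datasync = boto3.client("datasync")

# S3 destination for the historical records, written straight into
# the Glacier Deep Archive storage class.
datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::historical-records",  # placeholder
    S3StorageClass="DEEP_ARCHIVE",
    S3Config={"BucketAccessRoleArn":
              "arn:aws:iam::111122223333:role/datasync-s3-role"},  # placeholder
)
```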
The company that you are working for has a highly available architecture consisting of an elastic load balancer and several EC2 instances configured with auto-scaling in three Availability Zones. You want to monitor your EC2 instances based on a particular metric, which is not readily available in CloudWatch.
Which of the following is a custom metric in CloudWatch which you have to manually set up?
- Network packets out of an EC2 instance
- Disk Reads Activity of an EC2 instance
- Memory Utilization of an EC2 instance
- CPU Utilization of an EC2 instance
- Network packets out of an EC2 instance (X)
- Disk Reads Activity of an EC2 instance
- Memory Utilization of an EC2 instance
- CPU Utilization of an EC2 instance
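Memory utilization is visible only inside the guest OS, so it must be published to CloudWatch yourself; the CloudWatch agent is the usual route, but a hedged sketch of doing it by hand with boto3 and the third-party psutil package (the namespace and metric name are arbitrary choices):

```python
import boto3
import psutil  # third-party; pip install psutil

cloudwatch = boto3.client("cloudwatch")

# Publish the instance's current memory utilization as a custom metric.
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Value": psutil.virtual_memory().percent,
        "Unit": "Percent",
    }],
)
```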
Your web application relies entirely on slower disk-based databases, causing it to perform slowly. To improve its performance, you integrated an in-memory data store into your web application using ElastiCache.
How does Amazon ElastiCache improve database performance?
- It securely delivers data to customers globally with low latency and high transfer speeds.
- By caching database query results.
- It reduces the load on your database by routing read queries from your applications to the Read Replica.
- It provides an in-memory cache that delivers up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second.
- It securely delivers data to customers globally with low latency and high transfer speeds.
- By caching database query results.
- It reduces the load on your database by routing read queries from your applications to the Read Replica.
- It provides an in-memory cache that delivers up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second. (X)
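The mechanism behind the speedup is the cache-aside pattern: answer from memory when possible, fall back to the database and populate the cache otherwise. A minimal sketch using the third-party redis client (the endpoint is a placeholder and query_database is a hypothetical stand-in for the slow disk-based lookup):

```python
import json
import redis  # third-party; pip install redis

r = redis.Redis(host="portal-cache.abc123.use1.cache.amazonaws.com")  # placeholder

def get_product(product_id):
    cached = r.get(f"product:{product_id}")
    if cached:  # served from memory in microseconds
        return json.loads(cached)
    row = query_database(product_id)  # hypothetical disk-based lookup
    r.setex(f"product:{product_id}", 300, json.dumps(row))  # cache for 5 minutes
    return row
```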
A Solutions Architect is hosting a website in an Amazon S3 bucket named tutorialsdojo. Users load the website using the following URL: http://tutorialsdojo.s3-website-us-east-1.amazonaws.com, and there is a new requirement to add JavaScript to the webpages in order to make authenticated HTTP GET requests against the same bucket using the Amazon S3 API endpoint (tutorialsdojo.s3.amazonaws.com). Upon testing, you noticed that the web browser blocks JavaScript from allowing those requests.
Which of the following options is the MOST suitable solution that you should implement for this scenario?
- Enable cross-account access
- Enable Cross-Region Replication (CRR)
- Enable Cross-Zone Load Balancing
- Enable Cross-origin resource sharing (CORS) configuration in the bucket
- Enable cross-account access
- Enable Cross-Region Replication (CRR) (X)
- Enable Cross-Zone Load Balancing
- Enable Cross-origin resource sharing (CORS) configuration in the bucket
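The browser blocks the requests because the website endpoint and the REST API endpoint are different origins. A hedged boto3 sketch of the CORS rule (the allowed origin, methods, and headers are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Allow the website origin to make GET requests against the API endpoint.
s3.put_bucket_cors(
    Bucket="tutorialsdojo",
    CORSConfiguration={"CORSRules": [{
        "AllowedOrigins": ["http://tutorialsdojo.s3-website-us-east-1.amazonaws.com"],
        "AllowedMethods": ["GET"],
        "AllowedHeaders": ["*"],
    }]},
)
```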
You are a Big Data Engineer assigned to handle a prestigious university’s online enrollment system database, which is hosted in RDS. You are required to monitor the database metrics in Amazon CloudWatch to ensure the availability of the enrollment system.
What are the enhanced monitoring metrics that Amazon CloudWatch gathers from Amazon RDS DB instances, which provide more accurate information? (Select TWO.)
- Freeable Memory
- RDS child processes
- CPU Utilization
- Database Connections
- OS processes
- Freeable Memory
- RDS child processes (-)
- CPU Utilization
- Database Connections (X)
- OS processes (+)
You are a Solutions Architect working for an aerospace engineering company which recently adopted a hybrid cloud infrastructure with AWS. One of your tasks is to launch a VPC with both public and private subnets for their EC2 instances and database instances, respectively.
Which of the following statements are true regarding Amazon VPC subnets? (Select TWO.)
- Each subnet maps to a single Availability Zone.
- Each subnet spans to 2 Availability Zones.
- The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /27 netmask (16 IP addresses).
- Every subnet that you create is automatically associated with the main route table for the VPC.
- EC2 instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
- Each subnet maps to a single Availability Zone. (+)
- Each subnet spans to 2 Availability Zones.
- The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /27 netmask (16 IP addresses). (X)
- Every subnet that you create is automatically associated with the main route table for the VPC. (-)
- EC2 instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
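Because a subnet lives in exactly one Availability Zone, the AZ is fixed when the subnet is created. A minimal boto3 sketch of the public/private pair described in the scenario (the CIDRs and AZ are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")  # placeholder CIDR
vpc_id = vpc["Vpc"]["VpcId"]

# Each subnet maps to exactly one AZ, chosen at creation time; both are
# implicitly associated with the VPC's main route table until changed.
public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                           AvailabilityZone="us-east-1a")  # placeholder AZ
private = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                            AvailabilityZone="us-east-1a")
```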
You are using a combination of API Gateway and Lambda for the web services of your online web portal, which is accessed by hundreds of thousands of clients each day. Your company will be announcing a new revolutionary product, and it is expected that your web portal will receive a massive number of visitors from all around the globe.
How can you protect your backend systems and applications from traffic spikes?
- Manually upgrading the EC2 instances being used by API Gateway
- API Gateway will automatically scale and handle massive traffic spikes so you do not have to do anything
- Deploying Multi-AZ in API Gateway with Read Replica
- Use throttling limits in API Gateway
- Manually upgrading the EC2 instances being used by API Gateway
- API Gateway will automatically scale and handle massive traffic spikes so you do not have to do anything (X)
- Deploying Multi-AZ in API Gateway with Read Replica
- Use throttling limits in API Gateway
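Stage-level throttling caps the steady-state rate and burst of requests before they ever reach the backend. A hedged boto3 sketch for a REST API stage (the API ID, stage name, and limits are placeholders):

```python
import boto3

apigw = boto3.client("apigateway")

# Throttle every method on the stage: 1,000 req/s steady state, 2,000 burst.
apigw.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder
    stageName="prod",        # placeholder
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "1000"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "2000"},
    ],
)
```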
A cryptocurrency trading platform is using an API built with AWS Lambda and API Gateway. Due to the recent news and rumors about the upcoming price surge of Bitcoin, Ethereum, and other cryptocurrencies, the trading platform is expected to have a significant increase in site visitors and new users in the coming days.
In this scenario, how can you protect the backend systems of the platform from traffic spikes?
- Enable throttling limits and result caching in API Gateway
- Move the Lambda function to a VPC
- Use CloudFront in front of the API Gateway to act as a cache
- Switch from using AWS Lambda and API Gateway to a more scalable and highly available architecture using EC2 instances, ELB, and Auto Scaling
- Enable throttling limits and result caching in API Gateway
- Move the Lambda function to a VPC
- Use CloudFront in front of the API Gateway to act as a cache (X)
- Switch from using AWS Lambda and API Gateway to a more scalable and highly available architecture using EC2 instances, ELB, and Auto Scaling
You are working as a Cloud Engineer for a top aerospace engineering firm. One of your tasks is to set up a document storage system using S3 for all of the engineering files.
In Amazon S3, which of the following statements are true? (Select TWO.)
- You can only store ZIP or TAR files in S3.
- S3 is an object storage service that provides file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage.
- The largest object that can be uploaded in a single PUT is 5 GB.
- The largest object that can be uploaded in a single PUT is 5 TB.
- The total volume of data and number of objects you can store are unlimited.
- You can only store ZIP or TAR files in S3.
- S3 is an object storage service that provides file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage.
- The largest object that can be uploaded in a single PUT is 5 GB. (-)
- The largest object that can be uploaded in a single PUT is 5 TB. (X)
- The total volume of data and number of objects you can store are unlimited. (+)
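The 5 GB single-PUT limit is why larger objects, up to the 5 TB object-size maximum, go through multipart upload; boto3's transfer manager switches to multipart automatically above a threshold. A minimal sketch (the bucket, file names, and threshold are placeholders):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above 100 MB are uploaded in parallel 100 MB parts; a single PUT
# tops out at 5 GB, while multipart supports objects up to 5 TB.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024)
s3.upload_file("cad-drawings.zip", "aerospace-docs", "cad-drawings.zip",
               Config=config)
```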