Practice Test 1 Flashcards
Which of the following features of an Amazon S3 bucket can only be suspended and not disabled once it has been enabled?
A) Versioning
B) Static Website Hosting
C) Requester Pays
D) Server Access Logging
A) Versioning
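The versioning behavior is visible in the S3 API itself: `put_bucket_versioning` only accepts `Enabled` or `Suspended` as a status, never `Disabled`. A minimal Python sketch (bucket name is illustrative; a real call would go through boto3):

```python
def versioning_request(bucket, status):
    """Build put_bucket_versioning parameters; once versioning is enabled,
    S3 only accepts "Enabled" or "Suspended" -- there is no "Disabled"."""
    if status not in ("Enabled", "Suspended"):
        raise ValueError("Versioning can only be Enabled or Suspended")
    return {
        "Bucket": bucket,
        "VersioningConfiguration": {"Status": status},
    }

# A real call would be:
# boto3.client("s3").put_bucket_versioning(**versioning_request("my-bucket", "Suspended"))
```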
The engineering team at an e-commerce company wants to establish a dedicated, encrypted, low-latency, and high-throughput connection between its data center and AWS Cloud. The engineering team has set aside sufficient time to account for the operational overhead of establishing this connection.
As a solutions architect, which of the following solutions would you recommend to the company?
A. Use site-to-site VPN to establish a connection between the data center and AWS Cloud
B. Use VPC transit gateway to establish a connection between the data center and AWS Cloud
C. Use AWS Direct Connect to establish a connection between the data center and AWS Cloud
D. Use AWS Direct Connect plus VPN to establish a connection between the data center and AWS Cloud
D. Use AWS Direct Connect plus VPN to establish a connection between the data center and AWS Cloud
A file-hosting service uses Amazon Simple Storage Service (Amazon S3) under the hood to power its storage offerings. Currently all the customer files are uploaded directly under a single Amazon S3 bucket. The engineering team has started seeing scalability issues where customer file uploads have started failing during the peak access hours with more than 5000 requests per second.
Which of the following is the MOST resource efficient and cost-optimal way of addressing this issue?
A. Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations.
B. Change the application architecture to create a new S3 bucket for each customer and then upload each customer’s files directly under the respective buckets.
C. Change the application architecture to create a new S3 bucket for each day’s data and then upload the daily files directly under that day’s bucket.
D. Change the application architecture to use EFS instead of Amazon S3 for storing the customers’ uploaded files.
A. Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations.
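Amazon S3 supports at least 3,500 PUT and 5,500 GET requests per second per prefix, so spreading uploads across customer-specific prefixes multiplies the aggregate request rate. A hedged sketch of such a key scheme (the `customer/date/filename` layout is an assumption, not the only valid one):

```python
from datetime import date

def object_key(customer_id, filename, upload_date=None):
    """Build a customer-specific key so each customer's uploads land under
    their own prefix, each with its own S3 request-rate allowance."""
    d = upload_date or date.today()
    return f"{customer_id}/{d.isoformat()}/{filename}"
```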
A new DevOps engineer has joined a large financial services company recently. As part of his onboarding, the IT department is conducting a review of the checklist for tasks related to AWS Identity and Access Management (AWS IAM).
As an AWS Certified Solutions Architect – Associate, which best practices would you recommend (Select two)?
A. Enable MFA for privileged users
B. Configure AWS CloudTrail to record all account activity
C. Create a minimum number of accounts and share these account credentials among employees
D. Grant maximum privileges to avoid assigning privileges again
E. Use user credentials to provide access-specific permissions for Amazon EC2 instances
A. Enable MFA for privileged users
B. Configure AWS CloudTrail to record all account activity
A junior scientist working with the Deep Space Research Laboratory at NASA is trying to upload a high-resolution image of a nebula into Amazon S3. The image size is approximately 3 gigabytes. The junior scientist is using Amazon S3 Transfer Acceleration (Amazon S3TA) for faster image upload. It turns out that Amazon S3TA did not result in an accelerated transfer.
Given this scenario, which of the following is correct regarding the charges for this image transfer?
A. The junior scientist only needs to pay the S3 transfer charges for the image upload
B. The junior scientist only needs to pay the S3TA transfer charges for the image upload
C. The junior scientist needs to pay both the S3 transfer charges and the S3TA transfer charges for the image upload
D. The junior scientist does not need to pay any transfer charges for the image upload
D. The junior scientist does not need to pay any transfer charges for the image upload
A geological research agency maintains the seismological data for the last 100 years. The data has a velocity of 1GB per minute. You would like to store the data with only the most relevant attributes to build a predictive model for earthquakes.
What AWS services would you use to build the most cost-effective solution with the LEAST amount of infrastructure maintenance?
A. Ingest the data in Kinesis Data Analytics and use SQL queries to filter and transform the data before writing to S3
B. Ingest the data in AWS Glue job and use Spark transformations before writing to S3
C. Ingest the data in a Spark Streaming Cluster on EMR and use Spark Streaming transformations before writing to S3
D. Ingest the data in Kinesis Data Firehose and use a Lambda function to filter and transform the incoming stream before the output is dumped on S3
D. Ingest the data in Kinesis Data Firehose and use a Lambda function to filter and transform the incoming stream before the output is dumped on S3
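A Firehose transformation Lambda receives base64-encoded records and must return each record with a `recordId`, a `result`, and re-encoded `data`. A minimal sketch of such a filter (the attribute whitelist is hypothetical):

```python
import base64
import json

# Hypothetical whitelist of the "most relevant" attributes for the model.
RELEVANT = ("timestamp", "magnitude", "latitude", "longitude")

def handler(event, context=None):
    """Firehose transformation Lambda: decode each record, keep only the
    relevant attributes, and return the re-encoded result."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        slim = {k: payload[k] for k in RELEVANT if k in payload}
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(slim).encode()).decode(),
        })
    return {"records": output}
```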
A US-based healthcare startup is building an interactive diagnostic tool for COVID-19 related assessments. The users would be required to capture their personal health records via this tool. As this is sensitive health information, the backup of the user data must be kept encrypted in Amazon Simple Storage Service (Amazon S3). The startup does not want to provide its own encryption keys but still wants to maintain an audit trail of when an encryption key was used and by whom.
Which of the following is the BEST solution for this use-case?
A. Use SSE-KMS to encrypt the user data on S3.
B. Use client-side encryption with client provided keys and then upload the encrypted user data to S3.
C. Use SSE-C to encrypt the user data on S3.
D. Use SSE-S3 to encrypt the user data on S3.
A. Use SSE-KMS to encrypt the user data on S3.
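With SSE-KMS, the upload simply sets `ServerSideEncryption` to `aws:kms`; every use of the KMS key is then recorded in AWS CloudTrail, which gives the audit trail the startup wants. A sketch of the request parameters (the key alias is illustrative):

```python
def put_object_params(bucket, key, body, kms_key_id=None):
    """Request parameters for an SSE-KMS upload; omit kms_key_id to use
    the AWS managed key (aws/s3) so no customer key is ever provided."""
    params = {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",
    }
    if kms_key_id:
        params["SSEKMSKeyId"] = kms_key_id
    return params
```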
A new DevOps engineer has just joined a development team and wants to understand the replication capabilities for Amazon RDS Multi-AZ deployment as well as Amazon RDS Read-replicas.
Which of the following correctly summarizes these capabilities for the given database?
A. Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region.
B. Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region.
C. Multi-AZ follows asynchronous replication and spans one Availability Zone within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region.
D. Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region.
B. Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region.
As part of a pilot program, a biotechnology company wants to integrate data files from its on-premises analytical application with AWS Cloud via an NFS interface.
Which of the following AWS services is the MOST efficient solution for the given use-case?
A. AWS Storage Gateway – Tape Gateway
B. AWS Storage Gateway – Volume Gateway
C. AWS Storage Gateway – File Gateway
D. AWS Site-to-Site VPN
C. AWS Storage Gateway – File Gateway
A company runs a data processing workflow that takes about 60 minutes to complete. The workflow can withstand disruptions and it can be started and stopped multiple times.
Which is the most cost-effective solution for running the workflow?
A) Use EC2 on-demand instances to run the workflow processes
B) Use EC2 reserved instances to run the workflow processes
C) Use AWS Lambda function to run the workflow processes
D) Use EC2 spot instances to run the workflow processes
D) Use EC2 spot instances to run the workflow processes
A software engineering intern at an e-commerce company is documenting the process flow to provision Amazon EC2 instances via the Amazon EC2 API. These instances are to be used for an internal application that processes Human Resources payroll data. He wants to highlight those volume types that cannot be used as a boot volume.
Can you help the intern by identifying those storage volume types that CANNOT be used as boot volumes while creating the instances? (Select two)
A. Provisioned IOPS SSD (io1).
B. General Purpose SSD (gp2).
C. Throughput Optimized HDD (st1).
D. Cold HDD (sc1).
E. Instance Store.
C. Throughput Optimized HDD (st1).
D. Cold HDD (sc1).
One of the biggest football leagues in Europe has granted the distribution rights for live streaming its matches in the USA to a Silicon Valley-based streaming services company. As per the terms of distribution, the company must make sure that only users from the USA are able to live stream the matches on its platform. Users from all other countries must be denied access to these live-streamed matches.
Which of the following options would allow the company to enforce these streaming restrictions? (Select two)
A. Use Route 53 based latency routing policy to restrict distribution of content to only the locations in which you have distribution rights.
B. Use georestriction to prevent users in specific geographic locations from accessing content that you’re distributing through a CloudFront web distribution.
C. Use Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights.
D. Use Route 53 based failover routing policy to restrict distribution of content to only the locations in which you have distribution rights.
E. Use Route 53 based weighted routing policy to restrict distribution of content to only the locations in which you have distribution rights.
B. Use georestriction to prevent users in specific geographic locations from accessing content that you’re distributing through a CloudFront web distribution.
C. Use Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights.
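On the CloudFront side, georestriction is just a whitelist (or blacklist) of ISO 3166-1 alpha-2 country codes in the distribution config. A minimal sketch of that fragment:

```python
def geo_restriction(allowed_countries):
    """CloudFront distribution config fragment that whitelists the given
    country codes and blocks viewers everywhere else."""
    return {
        "Restrictions": {
            "GeoRestriction": {
                "RestrictionType": "whitelist",
                "Quantity": len(allowed_countries),
                "Items": list(allowed_countries),
            }
        }
    }
```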
An organization wants to delegate access to a set of users from the development environment so that they can access some resources in the production environment, which is managed under another AWS account.
As a solutions architect, which of the following steps would you recommend?
A. Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
B. Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment
C. It is not possible to access cross-account resources
D. Both IAM roles and IAM users can be used interchangeably for cross-account access
A. Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
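Cross-account access hinges on the role's trust policy in the production account, which names the development account as a principal allowed to call `sts:AssumeRole`. A sketch (the account ID is a placeholder):

```python
def trust_policy(dev_account_id):
    """Trust policy for the production-account role: principals in the
    development account may assume it, so no long-lived credentials are shared."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{dev_account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

# Developers would then obtain temporary credentials with e.g.:
# boto3.client("sts").assume_role(RoleArn=prod_role_arn, RoleSessionName="dev-access")
```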
A company is in the process of migrating its on-premises SMB file shares to AWS so the company can get out of the business of managing multiple file servers across dozens of offices. The company has 200 terabytes of data in its file servers. The existing on-premises applications and native Windows workloads should continue to have low latency access to this data which needs to be stored on a file system service without any disruptions after the migration. The company also wants any new applications deployed on AWS to have access to this migrated data.
Which of the following is the best solution to meet this requirement?
A) Use AWS Storage Gateway’s File Gateway to provide low-latency, on-premises access to fully managed file shares in Amazon FSx for Windows File Server. The applications deployed on AWS can access this data directly from Amazon FSx in AWS
B) Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed file shares in Amazon EFS. The applications deployed on AWS can access this data directly from Amazon EFS
C) Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed file shares in Amazon FSx for Windows File Server. The applications deployed on AWS can access this data directly from Amazon FSx in AWS
D) Use AWS Storage Gateway’s File Gateway to provide low-latency, on-premises access to fully managed file shares in Amazon S3. The applications deployed on AWS can access this data directly from Amazon S3
C) Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed file shares in Amazon FSx for Windows File Server. The applications deployed on AWS can access this data directly from Amazon FSx in AWS
A major bank is using Amazon Simple Queue Service (Amazon SQS) to migrate several core banking applications to the cloud to ensure high availability and cost efficiency while reducing administrative complexity and overhead. The development team at the bank expects a peak rate of about 1000 messages per second to be processed via SQS. It is important that the messages are processed in order.
Which of the following options can be used to implement this system?
A. Use Amazon SQS FIFO queue in batch mode of 2 messages per operation to process the messages at the peak rate.
B. Use Amazon SQS FIFO queue to process the messages.
C. Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process the messages at the peak rate.
D. Use Amazon SQS standard queue to process the messages.
C. Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process the messages at the peak rate.
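The arithmetic behind option C: an SQS FIFO queue supports up to 300 API operations per second (without high-throughput mode), so batching is what lifts the effective message rate. A small helper for the minimum batch size (the 300/s and 10-messages-per-batch limits are the standard quotas):

```python
import math

FIFO_OPS_PER_SEC = 300  # FIFO API operations/second without high-throughput mode
MAX_BATCH_SIZE = 10     # SQS batch actions carry at most 10 messages

def min_batch_size(peak_messages_per_sec):
    """Smallest batch size that sustains the peak rate within FIFO limits."""
    size = math.ceil(peak_messages_per_sec / FIFO_OPS_PER_SEC)
    if size > MAX_BATCH_SIZE:
        raise ValueError("Rate exceeds FIFO limits even with maximum batching")
    return size
```

For the bank's 1000 messages/second this gives a batch size of 4, matching option C (300 operations/s × 4 = 1200 ≥ 1000).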
The payroll department at a company initiates several computationally intensive workloads on Amazon EC2 instances at a designated hour on the last day of every month. The payroll department has noticed a trend of severe performance lag during this hour. The engineering team has figured out a solution by using an Auto Scaling group for these Amazon EC2 instances to make sure that 10 Amazon EC2 instances are available during this peak usage hour. For normal operations, only 2 Amazon EC2 instances are enough to cater to the workload.
As a solutions architect, which of the following steps would you recommend to implement the solution?
A. Configure your Auto Scaling group by creating a scheduled action that kicks off at the designated hour on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour.
B. Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the min count as well as the max count of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour.
C. Configure your Auto Scaling group by creating a target tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour.
D. Configure your Auto Scaling group by creating a simple tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour.
A. Configure your Auto Scaling group by creating a scheduled action that kicks off at the designated hour on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour.
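A scheduled action can be sketched as the parameter set for `put_scheduled_update_group_action`; only the desired capacity is raised, and the configured min/max sizes stay untouched (action name and timestamp are illustrative):

```python
def scheduled_action(asg_name, start_time_utc, desired=10):
    """Parameters for put_scheduled_update_group_action: raise only the
    desired capacity ahead of the monthly peak; min/max stay as configured."""
    return {
        "AutoScalingGroupName": asg_name,
        "ScheduledActionName": "payroll-peak-scale-out",  # hypothetical name
        "StartTime": start_time_utc,                      # e.g. an ISO-8601 UTC time
        "DesiredCapacity": desired,
    }
```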
A retail company has developed a REST API which is deployed in an Auto Scaling group behind an Application Load Balancer. The REST API stores the user data in Amazon DynamoDB, and any static content, such as images, is served via Amazon Simple Storage Service (Amazon S3). On analyzing the usage trends, it is found that 90% of the read requests are for commonly accessed data across all users.
As a Solutions Architect, which of the following would you suggest as the MOST efficient solution to improve the application performance?
A: Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3
B: Enable ElastiCache Redis for DynamoDB and CloudFront for S3
C: Enable DAX for DynamoDB and ElastiCache Memcached for S3
D: Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for S3
A: Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3
The flagship application for a gaming company connects to an Amazon Aurora database and the entire technology stack is currently deployed in the United States. Now, the company has plans to expand to Europe and Asia for its operations. It needs the games table to be accessible globally but needs the users and games_played tables to be regional only.
How would you implement this with minimal application refactoring?
A: Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for the users and games_played tables
B: Use an Amazon Aurora Global Database for the games table and use DynamoDB tables for the users and games_played tables
C: Use a DynamoDB global table for the games table and use Amazon Aurora for the users and games_played tables
D: Use a DynamoDB global table for the games table and use DynamoDB tables for the users and games_played tables
A: Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for the users and games_played tables
The engineering team at a Spanish professional football club has built a notification system for its website using Amazon Simple Notification Service (Amazon SNS) notifications, which are then handled by an AWS Lambda function for end-user delivery. During the off-season, the notification system needs to handle about 100 requests per second. During the peak football season, the rate touches about 5000 requests per second, and a significant number of the notifications are not being delivered to the end-users on the website.
As a solutions architect, which of the following would you suggest as the BEST possible solution to this issue?
A: Amazon SNS has hit a scalability limit, so the team needs to contact AWS support to raise the account limit
B: Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for Lambda, so the team needs to contact AWS support to raise the account limit
C: The engineering team needs to provision more servers running the SNS service
D: The engineering team needs to provision more servers running the Lambda service
B: Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for Lambda, so the team needs to contact AWS support to raise the account limit
A video analytics organization has been acquired by a leading media company. The analytics organization has 10 independent applications with an on-premises data footprint of about 70TB for each application. The CTO of the media company has set a timeline of two weeks to carry out the data migration from on-premises data center to AWS Cloud and establish connectivity.
Which of the following are the MOST cost-effective options for completing the data transfer and establishing connectivity? (Select two)
A: Order 10 Snowball Edge Storage Optimized devices to complete the one-time data transfer
B: Set up AWS Direct Connect to establish connectivity between the on-premises data center and AWS Cloud
C: Set up Site-to-Site VPN to establish ongoing connectivity between the on-premises data center and AWS Cloud
D: Order 1 Snowmobile to complete the one-time data transfer
E: Order 70 Snowball Edge Storage Optimized devices to complete the one-time data transfer
A: Order 10 Snowball Edge Storage Optimized devices to complete the one-time data transfer
C: Set up Site-to-Site VPN to establish ongoing connectivity between the on-premises data center and AWS Cloud
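The device math behind option A: each application's 70TB fits within one Snowball Edge Storage Optimized device (around 80TB of usable capacity, an assumption worth verifying against current specs), and the 10 applications are independent:

```python
import math

SNOWBALL_EDGE_USABLE_TB = 80  # Storage Optimized usable capacity (assumption)

def devices_needed(per_app_tb, app_count):
    """Devices per application times the number of independent applications."""
    return math.ceil(per_app_tb / SNOWBALL_EDGE_USABLE_TB) * app_count
```

Here `devices_needed(70, 10)` gives 10, matching option A. A Snowmobile (exabyte scale) would be wildly oversized for 700TB, and Direct Connect cannot be provisioned within the two-week timeline.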
The engineering team at a data analytics company has observed that its flagship application functions at its peak performance when the underlying EC2 instances have a CPU utilization of about 50%. The application is built on a fleet of EC2 instances managed under an Auto Scaling group. The workflow requests are handled by an internal Application Load Balancer that routes the requests to the instances.
As a solutions architect, what would you recommend so that the application runs near its peak performance state?
A. Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%.
B. Configure the Auto Scaling group to use simple scaling policy and set the CPU utilization as the target metric with a target value of 50%.
C. Configure the Auto Scaling group to use a Cloudwatch alarm triggered on a CPU utilization threshold of 50%.
D. Configure the Auto Scaling group to use step scaling policy and set the CPU utilization as the target metric with a target value of 50%.
A. Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%.
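A target tracking policy can be sketched as the parameters for `put_scaling_policy` with the predefined `ASGAverageCPUUtilization` metric (policy name is illustrative):

```python
def target_tracking_policy(asg_name, target_cpu=50.0):
    """put_scaling_policy parameters that keep the group's average CPU
    utilization near the target value."""
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": "keep-cpu-near-target",  # hypothetical name
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": target_cpu,
        },
    }
```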
A healthcare company uses its on-premises infrastructure to run legacy applications that require specialized customizations to the underlying Oracle database as well as its host operating system (OS). The company also wants to improve the availability of the Oracle database layer. The company has hired you as an AWS Certified Solutions Architect Associate to build a solution on AWS that meets these requirements while minimizing the underlying infrastructure maintenance effort.
Which of the following options represents the best solution for this use case?
A: Deploy the Oracle database layer on multiple EC2 instances spread across two Availability Zones (AZ). This deployment configuration guarantees high availability and also allows the database administrators to access and customize the database environment and the underlying operating system
B: Leverage multi-AZ configuration of RDS Custom for Oracle that allows the database administrators to access and customize the database environment and the underlying operating system
C: Leverage multi-AZ configuration of RDS for Oracle that allows the database administrators to access and customize the database environment and the underlying operating system
D: Leverage cross AZ read-replica configuration of RDS for Oracle that allows the database administrators to access and customize the database environment and the underlying operating system
B: Leverage multi-AZ configuration of RDS Custom for Oracle that allows the database administrators to access and customize the database environment and the underlying operating system
An e-commerce company is looking for a solution with high availability, as it plans to migrate its flagship application to a fleet of Amazon EC2 instances. The solution should allow for content-based routing as part of the architecture.
As a Solutions Architect, which of the following will you suggest for the company?
A. Use a Network Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure a Private IP address to mask any failure of an instance
B. Use an Application Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure Auto Scaling group to mask any failure of an instance
C. Use an Auto Scaling group for distributing traffic to the EC2 instances spread across different Availability Zones. Configure an Elastic IP address to mask any failure of an instance
D. Use an Auto Scaling group for distributing traffic to the EC2 instances spread across different Availability Zones. Configure a Public IP address to mask any failure of an instance
B. Use an Application Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure Auto Scaling group to mask any failure of an instance
An IT consultant is helping the owner of a medium-sized business set up an AWS account. What are the security recommendations he must follow while creating the AWS account root user? (Select two)
A. Create AWS account root user access keys and share those keys only with the business owner.
B. Enable Multi Factor Authentication (MFA) for the AWS account root user account.
C. Create a strong password for the AWS account root user.
D. Send an email to the business owner with details of the login username and password for the AWS root user. This will help the business owner to troubleshoot any login issues in future.
E. Encrypt the access keys and save them on Amazon S3.
B. Enable Multi Factor Authentication (MFA) for the AWS account root user account.
C. Create a strong password for the AWS account root user.
A technology blogger wants to write a review on the comparative pricing for various storage types available on AWS Cloud. The blogger has created a test file of size 1GB with some random data. Next, he copies this test file into the Amazon S3 Standard storage class, provisions an EBS volume (General Purpose SSD (gp2)) with 100GB of provisioned storage and copies the test file into the EBS volume, and lastly copies the test file into an EFS Standard storage file system. At the end of the month, he analyzes the bill for costs incurred on the respective storage types for the test file.
What is the correct order of the storage charges incurred for the test file on these three storage types?
A. Cost of test file storage on EFS < Cost of test file storage on S3 Standard < Cost of test file storage on EBS.
B. Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS.
C. Cost of test file storage on S3 Standard < Cost of test file storage on EBS < Cost of test file storage on EFS.
D. Cost of test file storage on EBS < Cost of test file storage on S3 Standard < Cost of test file storage on EFS.
B. Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS.
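The ordering follows from the billing models: S3 and EFS bill only the 1GB actually stored, while EBS bills the full 100GB of provisioned capacity. A quick sanity check with illustrative us-east-1 prices (assumptions; always check current AWS pricing):

```python
# Illustrative per-GB-month prices (assumptions; check current AWS pricing):
S3_STANDARD = 0.023
EFS_STANDARD = 0.30
EBS_GP2 = 0.10

def monthly_costs(file_gb=1, ebs_provisioned_gb=100):
    """EBS bills the full provisioned volume, not the 1GB actually used,
    which is why it costs the most here despite a lower per-GB rate."""
    return {
        "s3": file_gb * S3_STANDARD,
        "efs": file_gb * EFS_STANDARD,
        "ebs": ebs_provisioned_gb * EBS_GP2,
    }
```

With these sample rates the costs come out to roughly $0.02 (S3) < $0.30 (EFS) < $10 (EBS), matching option B.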
A data analytics company measures what the consumers watch and what advertising they’re exposed to. This real-time data is ingested into its on-premises data center and subsequently, the daily data feed is compressed into a single file and uploaded on Amazon S3 for backup. The typical compressed file size is around 2 GB.
Which of the following is the fastest way to upload the daily compressed file into S3?
A: Upload the compressed file in a single operation
B: Upload the compressed file using multipart upload
C: FTP the compressed file into an EC2 instance that runs in the same region as the S3 bucket. Then transfer the file from the EC2 instance into the S3 bucket
D: Upload the compressed file using multipart upload with S3 transfer acceleration
D: Upload the compressed file using multipart upload with S3 transfer acceleration
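Multipart upload is fastest because parts transfer in parallel, and pairing it with transfer acceleration routes those parts through AWS edge locations. A small helper for the part arithmetic (the 100MB part size is an arbitrary choice within S3's 5MB to 5GB per-part range):

```python
import math

PART_SIZE_MB = 100  # arbitrary choice; S3 allows 5 MB to 5 GB per part
MAX_PARTS = 10000   # S3's per-upload part limit

def part_count(file_size_mb):
    """Number of parts for a multipart upload; parts transfer in parallel."""
    parts = math.ceil(file_size_mb / PART_SIZE_MB)
    if parts > MAX_PARTS:
        raise ValueError("Increase the part size: S3 allows at most 10,000 parts")
    return parts
```

For the 2GB (2048MB) daily file, this splits the upload into 21 parallel parts.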