AWS Flashcards
You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database.
During the migration you can change the application code, but you must file a change request.
How would you implement the architecture on AWS in order to maximize scalability and high availability?
A. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.
B. File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP Listener and Cross-Zone Load Balancing enabled, two application servers in different AZs.
C. File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs.
D. File a change request to implement Alias Resource support in the application. Use the Route 53 Alias Resource Record to distribute load on two application servers in different AZs.
A. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.
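For context, Proxy Protocol on a classic ELB is enabled with a two-step API call. A minimal boto3 sketch, assuming a hypothetical load balancer name and back-end port:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create a ProxyProtocol policy on the classic load balancer.
elb.create_load_balancer_policy(
    LoadBalancerName="my-tcp-elb",  # hypothetical ELB name
    PolicyName="EnableProxyProtocol",
    PolicyTypeName="ProxyProtocolPolicyType",
    PolicyAttributes=[{"AttributeName": "ProxyProtocol", "AttributeValue": "true"}],
)

# Attach the policy to the back-end port the application servers listen on;
# the app can then read the client IP from the Proxy Protocol header.
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName="my-tcp-elb",
    InstancePort=8080,  # hypothetical back-end port
    PolicyNames=["EnableProxyProtocol"],
)
```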
To serve Web traffic for a popular product, your chief financial officer and IT director have purchased 10 m1.large heavy utilization Reserved Instances (RIs) evenly spread across two availability zones; Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB). After several months, the product grows even more popular and you need additional capacity. As a result, your company purchased two c3.2xlarge medium utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant capacity that’s unused. Which option is the most cost effective and uses EC2 capacity most effectively?
A. Configure ELB with two c3.2xlarge instances and use on-demand Autoscaling group for up to two additional c3.2xlarge instances. Shut off m1.large instances.
B. Configure Autoscaling group and Launch Configuration with ELB to add up to 10 more on-demand m1.large instances when triggered by Cloudwatch. Shut off c3.2xlarge instances.
C. Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin.
D. Route traffic to EC2 m1.large and c3.2xlarge instances directly using Route 53 latency based routing and health checks. Shut off ELB.
C. Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin.
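Weighted round robin is configured per record set in Route 53. A boto3 sketch with a hypothetical hosted zone and ELB DNS names; the 3:1 weight ratio is illustrative, chosen to send proportionally more traffic to the larger c3.2xlarge fleet:

```python
import boto3

r53 = boto3.client("route53")

def weighted_cname(elb_dns, weight, set_id):
    """Build an UPSERT for one weighted CNAME record."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "CNAME",
            "SetIdentifier": set_id,  # distinguishes records sharing one name
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": elb_dns}],
        },
    }

r53.change_resource_record_sets(
    HostedZoneId="Z3EXAMPLE",  # hypothetical zone ID
    ChangeBatch={"Changes": [
        weighted_cname("m1-fleet-elb-123.us-east-1.elb.amazonaws.com", 1, "m1-fleet"),
        weighted_cname("c3-fleet-elb-456.us-east-1.elb.amazonaws.com", 3, "c3-fleet"),
    ]},
)
```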
Your company produces customer-commissioned, one-of-a-kind skiing helmets, combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to head-up displays, GPS, rear-view cams and any other technical innovation they wish to embed in the helmet.
The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs and CUDA across a cluster of servers with low-latency networking.
What architecture would allow you to automate the existing process using a hybrid approach, and ensure that the architecture can support the evolution of processes over time?
A. Use Amazon Simple Workflow (SWF) to manage assessments, movement of data and metadata. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization)
B. Use Amazon Simple Workflow (SWF) to manage assessments, movement of data and metadata. Use an auto-scaling group of G2 instances in a placement group.
C. Use AWS Data Pipeline to manage movement of data and meta-data and assessments. Use auto-scaling group of C3 with SR-IOV (Single Root I/O Virtualization)
D. Use AWS Data Pipeline to manage movement of data and meta-data and assessments. Use an auto-scaling group of G2 instances in a placement group.
B. Use Amazon Simple Workflow (SWF) to manage assessments, movement of data and metadata. Use an auto-scaling group of G2 instances in a placement group.
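The placement group is what provides the low-latency networking between the GPU nodes. A minimal boto3 sketch (the AMI ID is hypothetical; in practice the instances would come from the Auto Scaling group's launch configuration rather than a direct run_instances call):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group packs instances close together for low-latency,
# high-throughput networking between the GPU nodes.
ec2.create_placement_group(GroupName="helmet-assessment-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical CUDA-enabled AMI
    InstanceType="g2.2xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "helmet-assessment-pg"},
)
```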
A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx. 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS with a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate?
A. Model the environment using CloudFormation, use an EC2 instance running Apache webserver and an open source search application, stripe multiple standard EBS volumes together to store the JPEGs and search index
B. Use S3 with reduced redundancy to store and serve the scanned files, install the commercial search application on EC2 instances and configure with auto-scaling and an Elastic Load Balancer.
C. Use a single-AZ RDS MySQL instance to store the search index and the JPEG images, use an EC2 instance to serve the website and translate user queries into SQL.
D. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.
E. Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java Container for the website on EC2 instances and use Route53 with DNS round-robin.
D. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.
A 3-tier e-commerce web application is currently deployed on-premises, and will be migrated to AWS for greater scalability and elasticity. The web tier currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirement of the application?
A. Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
B. Web servers: store read-only data in S3 and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and databases backed up weekly to Glacier using snapshots.
C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
You are designing the network infrastructure for an application server in Amazon VPC. Users will access all the application instances from the Internet, as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link.
How would you design routing to meet the above requirements?
A. Configure a single routing table with two default routes; one to the internet via an internet gateway, the other to the on-premises network via the VPN gateway. Use the routing table across all subnets in your VPC.
B. Configure two routing tables, one that has a default route via the internet gateway, and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet.
C. Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
D. Configure a single routing table with a default route via the internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
D. Configure a single routing table with a default route via the internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
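In boto3 terms this is one route table with a static default route plus BGP-propagated routes from the virtual private gateway (all resource IDs below are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Static default route: all unmatched traffic goes to the internet gateway.
ec2.create_route(
    RouteTableId="rtb-0abc1234",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0abc1234",
)

# Let the VGW inject the specific on-premises prefixes it learns over BGP
# on the Direct Connect link into the same route table.
ec2.enable_vgw_route_propagation(
    RouteTableId="rtb-0abc1234",
    GatewayId="vgw-0abc1234",
)
```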
Your company has HQ in Tokyo and branch offices all over the world, and uses logistics software with a multi-regional deployment on AWS in Japan, Europe, and the USA. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database.
In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics.
How do you build the database architecture in order to meet the requirements?
A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region.
B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region.
C. Use Direct Connect to connect all regional MySQL on EC2 with a master to the HQ region and reduce network latency for the batch process.
D. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region.
E. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region.
A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region.
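Cross-region replicas are created from the destination region, referencing the source by ARN. A sketch creating an HQ (Tokyo) replica of a hypothetical European master:

```python
import boto3

# The call is made in the destination (HQ) region.
rds = boto3.client("rds", region_name="ap-northeast-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="eu-logistics-replica-hq",
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:eu-west-1:111122223333:db:eu-logistics"  # hypothetical ARN
    ),
    DBInstanceClass="db.m3.large",  # hypothetical instance class
)
```

The hourly batch then reads each region's replica locally in Tokyo instead of querying the remote masters.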
You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3.
How should the application use AWS credentials to access the S3 bucket securely?
A. Use the AWS account access keys: the application retrieves the credentials from the source code of the application.
B. Create an IAM user for the application with permissions that allow list access to the S3 bucket; the application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket; launch the instance with the role, and retrieve the role credentials from the EC2 instance metadata.
D. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user, and retrieve the IAM user’s credentials from the EC2 instance user data.
C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket; launch the instance with the role, and retrieve the role credentials from the EC2 instance metadata.
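With a role attached, the SDK resolves credentials from the instance metadata service on its own; nothing is stored on disk or in code. A sketch of the verify-then-presign flow (bucket and key are hypothetical):

```python
import boto3
from botocore.exceptions import ClientError

# boto3 automatically picks up the role credentials from instance metadata.
s3 = boto3.client("s3")

def presign_if_exists(bucket, key, expires=300):
    """Return a time-limited download URL, or None if the object is missing."""
    try:
        s3.head_object(Bucket=bucket, Key=key)  # verify the file exists
    except ClientError:
        return None
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires
    )

url = presign_if_exists("my-private-bucket", "reports/summary.pdf")
```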
Your company hosts a social media website for storing and sharing documents. The web application allows users to upload large files while resuming and pausing the upload as needed. Currently, files are uploaded to your PHP front end backed by Elastic Load Balancing and an autoscaling fleet of Amazon Elastic Compute Cloud (EC2) instances that scale on the average of bytes received (NetworkIn). After a file has been uploaded, it is copied to the Amazon Simple Storage Service (S3). Amazon EC2 instances use an AWS Identity and Access Management (IAM) role that allows Amazon S3 uploads. Over the last six months, your user base and scale have increased significantly, forcing you to increase the Auto Scaling group’s Max parameter a few times. Your CFO is concerned about rising costs and has asked you to adjust the architecture where needed to better optimize costs.
Which architecture change could you introduce to reduce costs and still keep your web application secure and scalable?
A. Replace the Auto Scaling launch configuration to include c3.8xlarge instances: those instances can potentially yield a network throughput of 10 Gbps.
B. Re-architect your ingest pattern, have the app authenticate against your identity provider, and use your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic to directly upload the file to Amazon S3 using the given credentials and S3 prefix.
C. Re-architect your ingest pattern, have the app authenticate against your identity provider, and use your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic that uses the S3 multipart upload API to directly upload the file to Amazon S3 using the given credentials and S3 prefix.
D. Re-architect your ingest pattern, and move your web application instances into a VPC public subnet. Attach a public IP address to each EC2 instance (using the Auto Scaling launch configuration settings). Use Amazon Route 53 round robin record sets with an HTTP health check to DNS-load-balance the app requests; this approach would significantly reduce cost by bypassing Elastic Load Balancing.
C. Re-architect your ingest pattern, have the app authenticate against your identity provider, and use your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic that uses the S3 multipart upload API to directly upload the file to Amazon S3 using the given credentials and S3 prefix.
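A sketch of that broker-plus-direct-upload pattern; the bucket name and per-user prefix are hypothetical. boto3's upload_file switches to the multipart API automatically for large files, which is also what makes pause/resume possible:

```python
import json

import boto3

# Broker side (runs with its own long-term IAM user credentials):
# mint temporary credentials scoped to this user's prefix only.
sts = boto3.client("sts")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:AbortMultipartUpload",
                   "s3:ListMultipartUploadParts"],
        "Resource": "arn:aws:s3:::upload-bucket/user-42/*",  # hypothetical prefix
    }],
}
creds = sts.get_federation_token(
    Name="user-42", Policy=json.dumps(policy), DurationSeconds=3600
)["Credentials"]

# Client side: upload straight to S3 with the temporary credentials,
# bypassing the web tier entirely.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("movie.mp4", "upload-bucket", "user-42/movie.mp4")
```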
Your system recently experienced down time. During the troubleshooting process you found that a new administrator mistakenly terminated several production EC2 instances.
Which of the following strategies will help prevent a similar situation in the future?
The administrator still must be able to:
- Launch, start, stop, and terminate development resources,
- Launch and start production instances
A. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.
B. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances.
C. Leverage resource based tagging, along with an IAM user which can prevent specific users from terminating production EC2 resources.
D. Create an IAM user which is not allowed to terminate instances by leveraging production EC2 termination protection.
C. Leverage resource based tagging, along with an IAM user which can prevent specific users from terminating production EC2 resources.
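The restriction itself is an IAM condition on the terminate action. A sketch, assuming production instances carry an environment=production tag and a hypothetical user name; launch/start/stop/terminate rights for development resources would be granted in a separate allow statement:

```python
import json

import boto3

iam = boto3.client("iam")

# Explicit deny beats any allow: this user can never terminate an instance
# tagged environment=production, but other actions are unaffected.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:TerminateInstances",
        "Resource": "*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/environment": "production"}
        },
    }],
}
iam.put_user_policy(
    UserName="new-admin",  # hypothetical user
    PolicyName="DenyProdTermination",
    PolicyDocument=json.dumps(policy),
)
```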
A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC), and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) key space specific to that user.
Which two approaches can satisfy these objectives? (Select TWO.)
A. The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) Security Service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
B. The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
C. The application authenticates against IAM Security Token Service using the LDAP credentials. The application uses those temporary AWS security credentials to access the appropriate S3 bucket.
D. Develop an identity broker that authenticates against LDAP, and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
E. Develop an identity broker that authenticates against IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
B. The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
D. Develop an identity broker that authenticates against LDAP, and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
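A sketch of option B's flow, picking up after the LDAP bind has succeeded and returned the user's role name (all names below are hypothetical):

```python
import boto3

role_name = "jdoe-s3-role"  # looked up from the user's LDAP entry

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/" + role_name,
    RoleSessionName="ldap-jdoe",
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# The temporary credentials carry only the role's permissions, which are
# scoped to this user's key space.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.list_objects(Bucket="corp-docs", Prefix="users/jdoe/")
```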
You are designing a photo-sharing mobile app. The application will store all pictures in a single Amazon S3 bucket. Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3. You want to configure security to handle potentially millions of users in the most secure manner possible. What should your server-side application do when a new user registers on the photo-sharing mobile application?
A. Record the user’s information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app’s memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
B. Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3.
C. Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3.
D. Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.
E. Record the user’s information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service “AssumeRole” function. Store these credentials in the mobile app’s memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
E. Record the user’s information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service “AssumeRole” function. Store these credentials in the mobile app’s memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
You are running a successful multitier web-application on AWS and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application’s database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You also have implemented ElastiCache as a database caching layer between the application tier and database tier.
Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database.
A. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.
B. Generate the reports by querying the ElastiCache database caching tier.
C. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests.
D. Launch a RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.
D. Launch a RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.
Your website is serving on-demand training videos to your workforce. Videos are uploaded mostly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant.
How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
A. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS, S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
B. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
C. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
D. A video transcoding pipeline on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
A. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS, S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
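The archival step is a single S3 lifecycle rule. A sketch assuming the originals land under an originals/ prefix in a hypothetical bucket, while the HLS renditions stay in S3 for CloudFront to serve:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="training-videos",  # hypothetical bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-originals",
        "Filter": {"Prefix": "originals/"},  # HLS output under hls/ is untouched
        "Status": "Enabled",
        "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
    }]},
)
```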
Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account. To streamline data capture, Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named ScoreData. When a user saves their game, the progress data will be stored to the GameState S3 bucket. What is the best approach for storing data to DynamoDB and S3?
A. Use an IAM user with access credentials assigned a role providing access to the ScoreData DynamoDB table and the GameState S3 bucket for distribution with the mobile app.
B. Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile app with access to the ScoreData DynamoDB table and the GameState S3 bucket.
C. Use temporary security credentials that assume a role providing access to the ScoreData DynamoDB table and the GameState S3 bucket using web identity federation.
D. Use an EC2 instance that is launched with an EC2 role providing access to the ScoreData DynamoDB and the GameState S3 bucket that communicates with the mobile app via web services.
C. Use temporary security credentials that assume a role providing access to the ScoreData DynamoDB table and the GameState S3 bucket using web identity federation.
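A sketch of the web identity exchange: the app trades the social provider's token for temporary credentials, so no AWS keys ever ship inside the app. The role ARN, token, and table schema are hypothetical:

```python
import boto3

social_login_token = "..."  # OAuth/OpenID token returned by the social provider

# AssumeRoleWithWebIdentity needs no AWS credentials of its own; the provider
# token is the proof of identity.
sts = boto3.client("sts")
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/GamePlayerRole",  # hypothetical
    RoleSessionName="player-1234",
    WebIdentityToken=social_login_token,
    DurationSeconds=3600,
)
creds = resp["Credentials"]

dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
dynamodb.put_item(
    TableName="ScoreData",
    Item={"PlayerId": {"S": "player-1234"}, "Score": {"N": "9001"}},
)
```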
You are designing an intrusion detection/prevention (IDS/IPS) solution for a customer web application in a single VPC. You are considering the options for implementing IDS/IPS protection for traffic coming from the Internet.
Which of the following options would you consider? (Select TWO.)
A. Implement IDS/IPS agents on each instance running in VPC.
B. Implement Elastic Load Balancing with SSL listeners in front of the web applications.
C. Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic.
D. Implement a reverse proxy layer in front of web servers, and configure IDS/IPS agents on each reverse proxy server.
A. Implement IDS/IPS agents on each instance running in VPC.
D. Implement a reverse proxy layer in front of web servers, and configure IDS/IPS agents on each reverse proxy server.
Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders per day after 12 months.
Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders such as payment failure.
Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders.
How can you implement the order fulfillment process while making sure that the emails are delivered reliably?
A. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers
B. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.
C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
D. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
A. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers
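The customer notification step reduces to a plain SES call made from an activity worker. A minimal sketch (addresses are hypothetical; the Source address must be verified in SES):

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

ses.send_email(
    Source="orders@example.com",  # must be an SES-verified sender
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Order 1042: payment failed"},
        "Body": {"Text": {"Data": "Please update your payment details."}},
    },
)
```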
You’ve been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected, multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served from S3. They are using a combination of RDS and DynamoDB for their dynamic data and then archiving into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access.
Which approach provides a cost-effective, scalable mitigation to this kind of attack?
A. Recommend that they lease space at a DirectConnect partner location and establish a 1G DirectConnect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in a hardware Web Application Firewall (WAF), and then pass the traffic through the DirectConnect connection into their application running in their VPC.
B. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.
C. Add a WAF tier by creating a new ELB and an AutoScaling group of EC2 Instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.
D. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.
C. Add a WAF tier by creating a new ELB and an AutoScaling group of EC2 Instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.
You have been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives?
A. Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that EBS snapshots are performed every 15 minutes.
B. Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS snapshots are performed every 15 minutes.
C. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume.
D. Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS. Attach the volume to the instance.
E. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous, block-level replication to an identically configured instance in us-east-1b.
E. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous, block-level replication to an identically configured instance in us-east-1b.
You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic MapReduce job periodically analyzes the logs on S3 to build a usage dashboard that you share with your CIO. You recently improved the overall performance of the website using CloudFront for dynamic content delivery, with your website as the origin. After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard?
A. Change your log collection process to use CloudWatch ELB metrics as input of the Elastic MapReduce Job.
B. Turn on CloudTrail and use trail log files on S3 as input of the Elastic MapReduce job.
C. Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job.
D. Use Elastic Beanstalk “Restart App Server(s)” option to update log delivery to the Elastic MapReduce job.
E. Use Elastic Beanstalk “Rebuild Environment” option to update log delivery to the Elastic MapReduce job.
C. Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job.
You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the Web server using client certificate authentication. The solution must be resilient. Which of the following options would you consider for configuring the Web server infrastructure? Choose 2 answers
A. Configure your Web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution.
B. Configure ELB with TCP listeners on TCP/443, and place the Web servers behind it.
C. Configure your Web servers with EIPs. Place the Web servers in a Route53 Record Set, and configure health checks against all Web servers.
D. Configure ELB with HTTPS listeners, and place the Web servers behind it.
B. Configure ELB with TCP listeners on TCP/443, and place the Web servers behind it.
C. Configure your Web servers with EIPs. Place the Web servers in a Route53 Record Set, and configure health checks against all Web servers.
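A sketch of option B's pass-through listener: because the ELB forwards raw TCP on port 443, the TLS handshake, including the client certificate exchange, completes on the web servers themselves (the name and zones are hypothetical):

```python
import boto3

elb = boto3.client("elb")

elb.create_load_balancer(
    LoadBalancerName="tls-passthrough-elb",  # hypothetical name
    Listeners=[{
        "Protocol": "TCP",  # no TLS termination at the ELB
        "LoadBalancerPort": 443,
        "InstanceProtocol": "TCP",
        "InstancePort": 443,
    }],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```

An HTTPS listener, by contrast, would terminate TLS at the ELB, so the client certificate would never reach the web server.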
A large real-estate brokerage is exploring the option of adding a cost-effective, location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count. The existing mobile app has 5 million users across the US. Which one of the following architectural suggestions would you make to the customer?
A. The mobile application will send device location using SQS, EC2 instances will retrieve the relevant offers from DynamoDB. AWS Mobile Push will be used to send offers to the mobile application.
B. Use AWS DirectConnect or VPN to establish connectivity with mobile carriers. EC2 instances will receive the mobile application’s location through the carrier connection; RDS will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers to push alerts back to the mobile application.
C. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers/device providers to push alerts back to mobile application.
D. The mobile application will send device location using AWS Mobile Push, EC2 instances will retrieve the relevant offers from DynamoDB. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
A. The mobile application will send device location using SQS, EC2 instances will retrieve the relevant offers from DynamoDB. AWS Mobile Push will be used to send offers to the mobile application.
Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance backend. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? Choose 2 answers
A. Add an RDS MySQL read replica in each availability zone.
B. Deploy ElastiCache in-memory cache running in each availability zone.
C. Increase the RDS MySQL instance size and implement provisioned IOPS.
D. Implement sharding to distribute load to multiple RDS MySQL Instances.
A. Add an RDS MySQL read replica in each availability zone.
B. Deploy ElastiCache in-memory cache running in each availability zone.
A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central data warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible?
A. Replace RDS with Redshift for the batch analysis and use SNS to notify the on-premises system to update the dashboard.
B. Replace RDS with Redshift for the batch analysis and use SQS to send a message to the on-premises system to update the dashboard.
C. Create an RDS Read Replica for the batch analysis and use SNS to notify the on-premises system to update the dashboard.
D. Create an RDS Read Replica for the batch analysis and use SQS to send a message to the on-premises system to update the dashboard.
C. Create an RDS Read Replica for the batch analysis and use SNS to notify the on-premises system to update the dashboard.
A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements?
A. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch, and RDS with read replicas
B. Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch, and multi-AZ RDS
C. Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch, and RDS with read replicas
D. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch, and multi-AZ RDS
A. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch, and RDS with read replicas
You are designing a data leak prevention solution for your VPC environment. You want your VPC instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet. Which of the following options would you consider?
A. Implement security groups and configure outbound rules to only permit traffic to software depots.
B. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
C. Implement network access control lists to allow specific destinations, with an implicit deny all rule.
D. Move all your instances into private VPC subnets. Remove default routes from all routing tables and add specific routes to the software depots and distributions only.
B. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
Consider a batch processing solution that uses Simple Queue Service (SQS) to set up a message queue between EC2 instances used as batch processors. CloudWatch monitors the number of job requests (queued messages) and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost-effective and efficient manner?
A. Coordinate number of EC2 instances with number of Job requests automatically, thus improving cost effectiveness.
B. Reduce the overall time for executing Jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup.
C. Implement fault tolerance against EC2 instance failure since messages would remain in SQS and work can continue with recovery of EC2 instances. Implement fault tolerance against SQS failure by backing up messages to S3.
D. Handle high priority Jobs before lower priority Jobs by assigning a priority metadata field to SQS messages.
E. Implement message passing between EC2 instances within a batch by exchanging messages through SQS.
A. Coordinate number of EC2 instances with number of Job requests automatically, thus improving cost effectiveness.
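A sketch of the scale-out half of that loop, wiring a CloudWatch alarm on queue depth to an Auto Scaling policy (group name, queue name, and thresholds are hypothetical; a mirror-image policy with a negative adjustment would handle scale-in):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Add one batch server whenever the policy is triggered.
policy_arn = autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-workers",
    PolicyName="scale-out-on-backlog",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)["PolicyARN"]

# Trigger the policy when the queue backlog stays high for two periods.
cloudwatch.put_metric_alarm(
    AlarmName="sqs-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "batch-jobs"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=2,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy_arn],
)
```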
Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?
A. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database
B. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
C. Amazon ElastiCache to store the writes until the writes are committed to the database.
D. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
A. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database
You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances, and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100k sensors, which must be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?
A. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
B. Keep the current architecture, but upgrade RDS storage to 3TB and 10k provisioned IOPS
C. Ingest data into a DynamoDB table and move old data to a Redshift cluster
D. Add an SQS queue to the ingestion layer to buffer writes to the RDS Instance
C. Ingest data into a DynamoDB table and move old data to a Redshift cluster
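A sketch of the ingest write, assuming a hypothetical SensorData table with sensor ID as the hash key and timestamp as the range key so each sensor's readings stay together and time-ordered:

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# One ~1KB item per sensor per minute; DynamoDB absorbs the write volume
# that would overwhelm the single RDS instance.
dynamodb.put_item(
    TableName="SensorData",  # hypothetical table and schema
    Item={
        "SensorId": {"S": "sensor-000123"},
        "Timestamp": {"N": str(int(time.time()))},
        "Payload": {"B": b"..."},  # placeholder for the raw reading
    },
)
```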
You are designing Internet connectivity for your VPC. The Web servers must be available on the Internet. The application must have a highly available architecture. Which alternatives should you consider? Choose 2 answers
A. Assign EIPs to all Web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover.
B. Configure a NAT instance in your VPC. Create a default route via the NAT Instance and associate it with all subnets. Configure a DNS A record that points to the NAT Instance public IP address.
C. Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your Web servers. Configure a Route53 CNAME record to your CloudFront distribution.
D. Place all your Web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name.
E. Configure ELB with an EIP. Place all your Web servers behind ELB. Configure a Route53 A record that points to the EIP.
A. Assign EIPs to all Web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover.
D. Place all your Web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name.