Practice Exam 4 Flashcards

1
Q

An IT company has built a custom data warehousing solution for a retail organization by using Amazon Redshift. As part of the cost optimizations, the company wants to move any historical data (any data older than a year) into S3, as the daily analytical reports consume data for just the last year. However, the analysts want to retain the ability to cross-reference this historical data along with the daily reports.

The company wants to develop a solution with the LEAST amount of effort and MINIMUM cost. As a solutions architect, which option would you recommend to facilitate this use-case?

A
  • Use Redshift Spectrum to create external tables in the Redshift cluster that point to the underlying historical data in S3.
  • The analytics team can then query this historical data to cross-reference it with the daily reports from Redshift.
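For illustration, a minimal sketch of how the external schema and table could be defined through the Redshift Data API with boto3. The cluster name, database, IAM role, and S3 path are hypothetical placeholders, not values from the question.

import boto3

rsd = boto3.client("redshift-data")

# Hypothetical identifiers; the IAM role must allow Redshift to read the S3 data.
ddl_statements = [
    """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_history
    FROM DATA CATALOG DATABASE 'historical_db'
    IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftSpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
    """,
    """
    CREATE EXTERNAL TABLE spectrum_history.sales_archive (
        sale_id BIGINT,
        amount  DECIMAL(10,2),
        sale_ts TIMESTAMP
    )
    STORED AS PARQUET
    LOCATION 's3://retail-historical-data/sales/';
    """,
]

for sql in ddl_statements:
    # Each DDL statement runs asynchronously against the existing cluster.
    rsd.execute_statement(
        ClusterIdentifier="retail-dw-cluster",
        Database="analytics",
        DbUser="awsuser",
        Sql=sql,
    )

Analysts can then join spectrum_history.sales_archive with local Redshift tables in the same query.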
2
Q

An IT company provides S3 bucket access to specific users within the same account for completing project-specific work. With changing business requirements, cross-account S3 access requests are also growing every month. The company is looking for a solution that can offer user-level as well as account-level access permissions for the data stored in S3 buckets.

As a Solutions Architect, which of the following would you suggest as the MOST optimized way of controlling access for this use-case?

A

Use Amazon S3 Bucket Policies

  • Bucket policies in Amazon S3 can be used to grant or deny permissions across some or all of the objects within a single bucket.
  • Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions.
  • With bucket policies, you can grant users within your AWS Account or other AWS Accounts access to your Amazon S3 resources.
  • You can further restrict access to specific resources based on certain conditions.
  • For example, you can restrict access based on request time (Date Condition), whether the request was sent using SSL (Boolean Conditions), a requester’s IP address (IP Address Condition), or the requester’s client application (String Conditions). To identify these conditions, you use policy keys.
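As an illustration, a hedged boto3 sketch of a bucket policy that grants a second (hypothetical) account read access, restricted by an IP address condition and an SSL condition. The bucket name, account ID, and CIDR range below are placeholders.

import json
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket, account ID, and CIDR range, shown for illustration only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::project-data-bucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
                "Bool": {"aws:SecureTransport": "true"},
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="project-data-bucket", Policy=json.dumps(policy))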
3
Q

A company has moved its business-critical data to an Amazon EFS file system which will be accessed by multiple EC2 instances.

As an AWS Certified Solutions Architect Associate, which of the following would you recommend to exercise access control such that only the permitted EC2 instances can read from the EFS file system? (Select three)

A
  1. Use VPC security groups to control the network traffic to and from your file system
  2. Attach an IAM policy to your file system to control clients who can mount your file system with the required permissions
  3. Use EFS Access Points to manage application access
  • You control which EC2 instances can access your EFS file system by using VPC security group rules and AWS Identity and Access Management (IAM) policies.
  • Use VPC security groups to control the network traffic to and from your file system.
  • Attach an IAM policy to your file system to control which clients can mount your file system and with what permissions.
  • Use EFS Access Points to manage application access.
  • Control access to files and directories with POSIX-compliant user and group-level permissions.
  • Files and directories in an Amazon EFS file system support standard Unix-style read, write, and execute permissions based on the user ID and group IDs.
  • When an NFS client mounts an EFS file system without using an access point, the user ID and group ID provided by the client are trusted.
  • You can use EFS access points to override user ID and group IDs used by the NFS client.
  • When users attempt to access files and directories, Amazon EFS checks their user IDs and group IDs to verify that each user has permission to access the objects
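A minimal boto3 sketch of the IAM-policy and access-point pieces of this answer. The file system ID, role ARN, and paths are hypothetical, and the security group rules on the mount targets would be configured separately.

import json
import boto3

efs = boto3.client("efs")

FILE_SYSTEM_ID = "fs-12345678"  # placeholder

# File system policy: only the named EC2 instance role may mount and write.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/AppInstanceRole"},
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite",
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-12345678",
        }
    ],
}
efs.put_file_system_policy(FileSystemId=FILE_SYSTEM_ID, Policy=json.dumps(policy))

# Access point: confine the application to a dedicated directory and POSIX identity.
efs.create_access_point(
    FileSystemId=FILE_SYSTEM_ID,
    PosixUser={"Uid": 1001, "Gid": 1001},
    RootDirectory={
        "Path": "/app-data",
        "CreationInfo": {"OwnerUid": 1001, "OwnerGid": 1001, "Permissions": "750"},
    },
)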
4
Q

A cybersecurity company uses a fleet of EC2 instances to run a proprietary application. The infrastructure maintenance group at the company wants to be notified via an email whenever the CPU utilization for any of the EC2 instances breaches a certain threshold.

Which of the following services would you use for building a solution with the LEAST amount of development effort? (Select two)

A

Amazon SNS - Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging.

Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. Amazon CloudWatch allows you to monitor AWS cloud resources and the applications you run on AWS.

You can use CloudWatch Alarms to send an email via SNS whenever any of the EC2 instances breaches a certain threshold. Hence both these options are correct.
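For example, a hedged boto3 sketch of an alarm that notifies the team through a hypothetical SNS topic whenever average CPU utilization on one instance exceeds 80% (the topic would have the team's email address subscribed to it).

import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical instance ID and SNS topic ARN.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-i-0abcd1234efgh5678",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abcd1234efgh5678"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:infra-alerts"],
)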

5
Q

Your company is deploying a website running on Elastic Beanstalk. The website takes over 45 minutes to install and contains both static and dynamic files that must be generated during the installation process.

As a Solutions Architect, you would like to bring the time to create a new instance in your Elastic Beanstalk deployment down to less than 2 minutes. What do you recommend? (Select two)

A
  • AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
  • You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring.
  • At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.
  • When you create an AWS Elastic Beanstalk environment, you can specify an Amazon Machine Image (AMI) to use instead of the standard Elastic Beanstalk AMI included in your platform version.
  • A custom AMI can improve provisioning times when instances are launched in your environment if you need to install a lot of software that isn’t included in the standard AMIs.
  • Create a Golden AMI with the static installation components already set up - A Golden AMI is an AMI that you standardize through configuration, consistent security patching, and hardening.
  • It also contains agents you approve for logging, security, performance monitoring, etc. For the given use-case, you can have the static installation components already set up via the golden AMI.
  • Use EC2 user data to customize the dynamic installation parts at boot time - EC2 instance user data is the data that you specified in the form of a configuration script while launching your instance.
  • You can use EC2 user data to customize the dynamic installation parts at boot time, rather than installing the application itself at boot time.
6
Q

The development team at an e-commerce startup has set up multiple microservices running on EC2 instances under an Elastic Load Balancer. The team wants to route traffic to multiple back-end services based on the content of the request.

Which of the following types of load balancers would allow routing based on the content of the request?

A

An Application Load Balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model.

  • After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply and then selects a target from the target group for the rule action.
  • You can configure listener rules to route requests to different target groups based on the content of the application traffic.
  • Each target group can be an independent microservice, therefore this option is correct.
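A brief boto3 sketch of a content-based listener rule; the ARNs and the path pattern below are hypothetical placeholders.

import boto3

elbv2 = boto3.client("elbv2")

# Requests whose path starts with /orders are forwarded to the orders
# microservice target group; other rules would cover the other services.
elbv2.create_rule(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "listener/app/demo-alb/50dc6c495c0c9188/f2f7dc8efc522ab2"
    ),
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders/*"]}],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": (
                "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "targetgroup/orders-svc/73e2d6bc24d8a067"
            ),
        }
    ],
)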
7
Q

An engineering team wants to examine the feasibility of the user data feature of Amazon EC2 for an upcoming project.

Which of the following are true about the EC2 user data configuration? (Select two)

A
  • User Data is generally used to perform common automated configuration tasks and even run scripts after the instance starts.
  • When you launch an instance in Amazon EC2, you can pass two types of user data - shell scripts and cloud-init directives.
  • You can also pass this data into the launch wizard as plain text or as a file.
  • By default, scripts entered as user data are executed with root user privileges - Scripts entered as user data are executed as the root user, hence do not need the sudo command in the script.
  • Any files you create will be owned by root; if you need non-root users to have file access, you should modify the permissions accordingly in the script.
  • By default, user data runs only during the boot cycle when you first launch an instance - By default, user data scripts and cloud-init directives run only during the boot cycle when you first launch an instance.
  • You can update your configuration to ensure that your user data scripts and cloud-init directives run every time you restart your instance.
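A small boto3 sketch that launches an instance with a user-data shell script. The AMI ID and instance type are placeholders; the script runs once, as root, on first boot.

import boto3

ec2 = boto3.client("ec2")

# boto3 base64-encodes UserData automatically for run_instances.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)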
8
Q

For security purposes, a team has decided to put their instances in a private subnet. They plan to deploy a VPC endpoint to access these services. The members of the team would like to know about the only two AWS services that require a Gateway Endpoint instead of an Interface Endpoint.

As a solutions architect, which of the following services would you suggest for this requirement? (Select two)

A
  1. Amazon S3
  2. DynamoDB
  • A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
  • Instances in your VPC do not require public IP addresses to communicate with resources in the service.
  • Traffic between your VPC and the other service does not leave the Amazon network.
  • Endpoints are virtual devices.
  • They are horizontally scaled, redundant, and highly available VPC components.
  • They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
  • There are two types of VPC endpoints:
  1. Interface Endpoints - An Interface Endpoint is an Elastic Network Interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service.
  2. Gateway Endpoints - A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported: Amazon S3 and DynamoDB.
  • You must remember that only these two services use a VPC gateway endpoint.
  • The rest of the AWS services use VPC interface endpoints.
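For illustration, a boto3 sketch that creates a Gateway Endpoint for S3. The VPC ID and route table ID are placeholders, and the service name is Region-specific; DynamoDB follows the same pattern.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical VPC and route table IDs.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0abc1234def567890"],
)
# DynamoDB uses the same call with ServiceName "com.amazonaws.us-east-1.dynamodb".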
9
Q

A media agency stores its re-creatable assets on Amazon S3 buckets. The assets are accessed by a large number of users for the first few days and the frequency of access falls drastically after a week. Although the assets would be accessed only occasionally after the first week, they must continue to be immediately accessible when required. The cost of maintaining all the assets on S3 storage is turning out to be very expensive and the agency is looking at reducing costs as much as possible.

As a Solutions Architect, can you suggest a way to lower the storage costs while fulfilling the business requirements?

A

Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days - S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed.

  • Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA.
  • S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed and re-creatable data but do not require the availability and resilience of S3 Standard or S3 Standard-IA.
  • Objects must be stored for at least 30 days in S3 Standard before they can be transitioned to S3 One Zone-IA.
  • S3 One Zone-IA offers the same high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee.
  • S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.
  • You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
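A minimal boto3 sketch of such a lifecycle rule; the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# Objects transition to S3 One Zone-IA 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-assets-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-onezone-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            }
        ]
    },
)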
10
Q

The engineering team at an e-commerce company is working on cost optimizations for EC2 instances. The team wants to manage the workload using a mix of on-demand and spot instances across multiple instance types. They would like to create an Auto Scaling group with a mix of these instances.

Which of the following options would allow the engineering team to provision the instances for this use-case?

A

You can only use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost

A launch template is similar to a launch configuration, in that it specifies instance configuration information such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances.

Also, defining a launch template instead of a launch configuration allows you to have multiple versions of a template.

With launch templates, you can provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost.
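A hedged boto3 sketch of an Auto Scaling group built on a launch template with a mixed instances policy. The launch template name, subnets, instance types, and capacities are all placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Assumes a launch template named "web-template" already exists.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="mixed-fleet-asg",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-0abc1234def567890,subnet-0def5678abc901234",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-template",
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "c5.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 1,
            "OnDemandPercentageAboveBaseCapacity": 50,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)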

11
Q

Your company runs a website for evaluating coding skills. As a Solutions Architect, you’ve designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend is using an RDS PostgreSQL database. Caching is implemented using a Redis ElastiCache cluster. You would like to increase the security of your authentication to Redis from the Lambda function, leveraging a username and password combination.

As a solutions architect, which of the following options would you recommend?

A

Use Redis Auth - Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications.

  • Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store.
  • ElastiCache for Redis supports replication, high availability, and cluster sharding right out of the box.
  • IAM Auth is not supported by ElastiCache.
  • Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands, thereby improving data security.
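For illustration, a boto3 sketch that creates an AUTH-enabled Redis replication group. The identifiers and token value are placeholders; in practice the token would be generated and stored in a secrets store rather than hard-coded.

import boto3

elasticache = boto3.client("elasticache")

# In-transit encryption must be enabled for AUTH to be used.
elasticache.create_replication_group(
    ReplicationGroupId="app-cache",
    ReplicationGroupDescription="Redis cache with AUTH enabled",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    TransitEncryptionEnabled=True,
    AuthToken="replace-with-a-long-random-token",
)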
12
Q

A company manages a multi-tier social media application that runs on EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. As a solutions architect, you have been tasked to make the application more resilient to periodic spikes in request rates.

Which of the following solutions would you recommend for the given use-case? (Select two)

A

You can use Aurora replicas and CloudFront distribution to make the application more resilient to spikes in request rates.

  1. Use Aurora Replica
  2. Use CloudFront distribution in front of the Application Load Balancer

Aurora Replicas have two main purposes.

You can issue queries to them to scale the read operations for your application.

  • You typically do so by connecting to the reader endpoint of the cluster.
  • That way, Aurora can spread the load for read-only connections across as many Aurora Replicas as you have in the cluster.

Aurora Replicas also help to increase availability.

  • If the writer instance in a cluster becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as the new writer.
  • Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region.

Amazon CloudFront - is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

  • CloudFront points of presence (POPs) (edge locations) make sure that popular content can be served quickly to your viewers.
  • CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content.
  • CloudFront offers an origin failover feature to help support your data resiliency needs.
  • CloudFront is a global service that delivers your content through a worldwide network of data centers called edge locations or points of presence (POPs).
  • If your content is not already cached in an edge location, CloudFront retrieves it from an origin that you’ve identified as the source for the definitive version of the content.
13
Q

A leading social media analytics company is contemplating moving its dockerized application stack into AWS Cloud. The company is not sure about the pricing for using Elastic Container Service (ECS) with the EC2 launch type compared to the Elastic Container Service (ECS) with the Fargate launch type.

Which of the following is correct regarding the pricing for these two services?

A
  • ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used.
  • ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests
  • Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service.
  • ECS allows you to easily run, scale, and secure Docker container applications on AWS.
14
Q

A Big Data analytics company wants to set up an AWS cloud architecture that throttles requests in case of sudden traffic spikes. The company is looking for AWS services that can be used for buffering or throttling to handle such traffic variations.

Which of the following services can be used to support this requirement?

A

Throttling is the process of limiting the number of requests an authorized program can submit to a given operation in a given amount of time.

Amazon API Gateway, Amazon SQS and Amazon Kinesis -

  • To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API using the token bucket algorithm, where a token counts for a request.
  • Specifically, API Gateway sets a limit on a steady-state rate and a burst of request submissions against all APIs in your account.
  • In the token bucket algorithm, the burst is the maximum bucket size.

Amazon SQS - Amazon Simple Queue Service (SQS)

  • Is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers buffer capabilities to smooth out temporary volume spikes without losing messages or increasing latency.

Amazon Kinesis -

  • Amazon Kinesis is a fully managed, scalable service that can ingest, buffer, and process streaming data in real-time.
15
Q

The business analytics team at a company has been running ad-hoc queries on Oracle and PostgreSQL services on Amazon RDS to prepare daily reports for senior management. To facilitate the business analytics reporting, the engineering team now wants to continuously replicate this data and consolidate these databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift.

As a solutions architect, which of the following would you recommend as the MOST resource-efficient solution that requires the LEAST amount of development time without the need to manage the underlying infrastructure?

A

Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift

  • AWS Database Migration Service helps you migrate databases to AWS quickly and securely.
  • The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
  • With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.
16
Q

A media company wants a low-latency way to distribute live sports results which are delivered via a proprietary application using the UDP protocol.

As a solutions architect, which of the following solutions would you recommend such that it offers the BEST performance for this use case?

A

Use Global Accelerator to provide a low latency way to distribute live sports results

  • AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you offer to your global users.
  • AWS Global Accelerator is easy to set up, configure, and manage.
  • It provides static IP addresses that provide a fixed entry point to your applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and Availability Zones.
  • AWS Global Accelerator always routes user traffic to the optimal endpoint based on performance, reacting instantly to changes in application health, your user’s location, and policies that you configure.
  • Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP.
17
Q

An e-commerce company has copied 1 PB of data from its on-premises data center to an Amazon S3 bucket in the us-west-1 Region using an AWS Direct Connect link. The company now wants to copy the data to another S3 bucket in the us-east-1 Region. The on-premises data center does not allow the use of AWS Snowball.

As a Solutions Architect, which of the following would you recommend to accomplish this?

A

Copy data from the source bucket to the destination bucket using the aws s3 sync command

  • The aws s3 sync command uses the CopyObject API to copy objects between S3 buckets.
  • The sync command lists the source and target buckets to identify objects that are in the source bucket but that aren’t in the target bucket.
  • The command also identifies objects in the source bucket that have different LastModified dates than the objects that are in the target bucket.
  • The sync command on a versioned bucket copies only the current version of the object—previous versions aren’t copied.
  • By default, this preserves object metadata, but the access control lists (ACLs) are set to FULL_CONTROL for your AWS account, which removes any additional ACLs.
  • If the operation fails, you can run the sync command again without duplicating previously copied objects.

You can use the command like so:

aws s3 sync s3://DOC-EXAMPLE-BUCKET-SOURCE s3://DOC-EXAMPLE-BUCKET-TARGET

18
Q

A media company wants to get out of the business of owning and maintaining its own IT infrastructure. As part of this digital transformation, the media company wants to archive about 5 PB of data in its on-premises data center to durable long-term storage.

As a solutions architect, what is your recommendation to migrate this data in the MOST cost-optimal way?

A

Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices.

Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier

Snowball Edge Storage Optimized

  • Is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS.
  • It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases.
  • The data stored on the Snowball Edge device can be copied into the S3 bucket and later transitioned into AWS Glacier via a lifecycle policy.
  • You can’t directly copy data from Snowball Edge devices into AWS Glacier.
19
Q

A news network uses Amazon S3 to aggregate the raw video footage from its reporting teams across the US. The news network has recently expanded into new geographies in Europe and Asia. The technical teams at the overseas branch offices have reported huge delays in uploading large video files to the destination S3 bucket.

Which of the following are the MOST cost-effective options to improve the file upload speed into S3? (Select two)

A
  1. Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket
  2. Use multipart uploads for faster file uploads into the destination S3 bucket

Amazon S3 Transfer Acceleration

  • Enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.
  • Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations.
  • As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.

Multipart upload

  • Allows you to upload a single object as a set of parts.
  • Each part is a contiguous portion of the object’s data.
  • You can upload these object parts independently and in any order.
  • If transmission of any part fails, you can retransmit that part without affecting other parts.
  • After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object.
  • In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
  • Multipart upload provides improved throughput, therefore it facilitates faster file uploads.
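A combined boto3 sketch of both options; the bucket name, file names, and thresholds are placeholders.

import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the (hypothetical) destination bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="raw-footage-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint, switching to multipart uploads
# once the file exceeds 100 MB.
accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accel.upload_file(
    "match-highlights.mp4",
    "raw-footage-bucket",
    "footage/match-highlights.mp4",
    Config=TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=64 * 1024 * 1024,
    ),
)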
20
Q

An e-commerce company is planning to migrate their two-tier application from on-premises infrastructure to AWS Cloud. As the engineering team at the company is new to the AWS Cloud, they are planning to use the Amazon VPC console wizard to set up the networking configuration for the two-tier application having public web servers and private database servers.

Can you spot the configuration that is NOT supported by the Amazon VPC console wizard?

A

VPC with a public subnet only and AWS Site-to-Site VPN access

The Amazon VPC console wizard provides the following four configurations:

VPC with a single public subnet

  • The configuration for this scenario includes a virtual private cloud (VPC) with a single public subnet, and an internet gateway to enable communication over the internet.
  • We recommend this configuration if you need to run a single-tier, public-facing web application, such as a blog or a simple website.

VPC with public and private subnets (NAT)

  • The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet.
  • We recommend this scenario if you want to run a public-facing web application while maintaining back-end servers that aren’t publicly accessible.
  • A common example is a multi-tier website, with the web servers in a public subnet and the database servers in a private subnet.
  • You can set up security and routing so that the web servers can communicate with the database servers.

VPC with public and private subnets and AWS Site-to-Site VPN access

  • The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet, and a virtual private gateway to enable communication with your network over an IPsec VPN tunnel.
  • We recommend this scenario if you want to extend your network into the cloud and also directly access the Internet from your VPC.
  • This scenario enables you to run a multi-tiered application with a scalable web front end in a public subnet and to house your data in a private subnet that is connected to your network by an IPsec AWS Site-to-Site VPN connection.

VPC with a private subnet only and AWS Site-to-Site VPN access

  • The configuration for this scenario includes a virtual private cloud (VPC) with a single private subnet, and a virtual private gateway to enable communication with your network over an IPsec VPN tunnel.
  • There is no Internet gateway to enable communication over the Internet.
  • We recommend this scenario if you want to extend your network into the cloud using Amazon’s infrastructure without exposing your network to the Internet.
21
Q

A retail company uses AWS Cloud to manage its IT infrastructure. The company has set up “AWS Organizations” to manage several departments running their AWS accounts and using resources such as EC2 instances and RDS databases. The company wants to provide shared and centrally-managed VPCs to all departments using applications that need a high degree of interconnectivity.

As a solutions architect, which of the following options would you choose to facilitate this use-case?

A

Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations

  • VPC sharing (part of Resource Access Manager) allows multiple AWS accounts to create their application resources such as EC2 instances, RDS databases, Redshift clusters, and Lambda functions, into shared and centrally-managed Amazon Virtual Private Clouds (VPCs).
  • To set this up, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations.
  • After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them.
  • Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner.
  • You can share Amazon VPCs to leverage the implicit routing within a VPC for applications that require a high degree of interconnectivity and are within the same trust boundaries.
  • This reduces the number of VPCs that you create and manage while using separate accounts for billing and access control.
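A minimal boto3 sketch of sharing a subnet through AWS RAM from the VPC owner account. The subnet ARN and member account ID are placeholders, and sharing with AWS Organizations must already be enabled in RAM.

import boto3

ram = boto3.client("ram")

# Run from the account that owns the VPC.
ram.create_resource_share(
    name="shared-app-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234def567890"
    ],
    principals=["444455556666"],
    allowExternalPrincipals=False,
)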
22
Q

A developer needs to implement a Lambda function in AWS account A that accesses an Amazon S3 bucket in AWS account B.

As a Solutions Architect, which of the following will you recommend to meet this requirement?

A

Create an IAM role for the Lambda function that grants access to the S3 bucket. Set the IAM role as the Lambda function’s execution role. Make sure that the bucket policy also grants access to the Lambda function’s execution role

  • If the IAM role that you create for the Lambda function is in the same AWS account as the bucket, then you don’t need to grant Amazon S3 permissions on both the IAM role and the bucket policy.
  • Instead, you can grant the permissions on the IAM role and then verify that the bucket policy doesn’t explicitly deny access to the Lambda function role.
  • If the IAM role and the bucket are in different accounts, then you need to grant Amazon S3 permissions on both the IAM role and the bucket policy.
  • Therefore, this is the right way of giving access to AWS Lambda for the given use-case.
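For illustration, a hedged sketch of the bucket policy applied in account B, granting the Lambda function's execution role in account A. The role name, account IDs, and bucket name are placeholders; the execution role in account A would carry matching s3 permissions.

import json
import boto3

# Bucket policy in account B; the principal is the execution role from account A.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:role/lambda-s3-execution-role"
            },
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::account-b-bucket",
                "arn:aws:s3:::account-b-bucket/*",
            ],
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="account-b-bucket", Policy=json.dumps(bucket_policy)
)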
23
Q

A medium-sized business has a taxi dispatch application deployed on an EC2 instance. Because of an unknown bug, the application causes the instance to freeze regularly. Then, the instance has to be manually restarted via the AWS management console.

Which of the following is the MOST cost-optimal and resource-efficient way to implement an automated solution until a permanent fix is delivered by the development team?

A

Set up a CloudWatch alarm to monitor the health status of the instance. In case of an Instance Health Check failure, an EC2 Reboot CloudWatch Alarm Action can be used to reboot the instance.

  • Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances.
  • You can use the stop or terminate actions to help you save money when you no longer need an instance to be running.
  • You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
  • You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance.
  • The reboot alarm action is recommended for Instance Health Check failures (as opposed to the recover alarm action, which is suited for System Health Check failures).
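A boto3 sketch of such an alarm; the instance ID and the Region in the action ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# StatusCheckFailed_Instance tracks the instance health check; the reboot
# alarm action restarts the instance when the alarm fires.
cloudwatch.put_metric_alarm(
    AlarmName="reboot-on-instance-check-failure",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_Instance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abcd1234efgh5678"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],
)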
24
Q

A big data analytics company is using Kinesis Data Streams (KDS) to process IoT data from the field devices of an agricultural sciences company. Multiple consumer applications are using the incoming data streams and the engineers have noticed a performance lag for the data delivery speed between producers and consumers of the data streams.

As a solutions architect, which of the following would you recommend for improving the performance for the given use-case?

A

Use the Enhanced Fan-out feature of Kinesis Data Streams

Amazon Kinesis Data Streams (KDS)

  • Is a massively scalable and durable real-time data streaming service.
  • KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.
  • By default, the 2MB/second/shard output is shared between all of the applications consuming data from the stream.
  • You should use enhanced fan-out if you have multiple consumers retrieving data from a stream in parallel.
  • With enhanced fan-out, developers can register stream consumers that each receive their own 2 MB/second pipe of read throughput per shard, and this throughput automatically scales with the number of shards in a stream.
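A short boto3 sketch of registering an enhanced fan-out consumer; the stream ARN and consumer name are placeholders.

import boto3

kinesis = boto3.client("kinesis")

# Each registered consumer gets its own dedicated 2 MB/second of read
# throughput per shard.
consumer = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:111122223333:stream/iot-telemetry",
    ConsumerName="soil-moisture-analytics",
)["Consumer"]

# The consumer then reads via SubscribeToShard (typically handled by KCL 2.x,
# which manages the HTTP/2 push subscription for you).
print(consumer["ConsumerARN"], consumer["ConsumerStatus"])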
25
Q

An Electronic Design Automation (EDA) application produces massive volumes of data that can be divided into two categories. The ‘hot data’ needs to be both processed and stored quickly in a parallel and distributed fashion. The ‘cold data’ needs to be kept for reference with quick access for reads and updates at a low cost.

Which of the following AWS services is BEST suited to accelerate the aforementioned chip design process?

A

Amazon FSx for Lustre

Amazon FSx for Lustre

  • Makes it easy and cost-effective to launch and run the world’s most popular high-performance file system.
  • It is used for workloads such as machine learning, high-performance computing (HPC), video processing, and financial modeling.
  • The open-source Lustre file system is designed for applications that require fast storage – where you want your storage to keep up with your compute.
  • FSx for Lustre integrates with Amazon S3, making it easy to process data sets with the Lustre file system.
  • When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allows you to write changed data back to S3.
  • FSx for Lustre provides the ability to both process the ‘hot data’ in a parallel and distributed fashion as well as easily store the ‘cold data’ on Amazon S3.

Therefore this option is the BEST fit for the given problem statement.

26
Q

The development team at a social media company wants to handle some complicated queries such as “What are the number of likes on the videos that have been posted by friends of a user A?”.

As a solutions architect, which of the following AWS database services would you suggest as the BEST fit to handle such use cases?

A

Amazon Neptune

  • Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets.
  • The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency.
  • Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security.
  • Amazon Neptune is highly available, with read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across Availability Zones.
  • Neptune is secure with support for HTTPS encrypted client connections and encryption at rest.
  • Neptune is fully managed, so you no longer need to worry about database management tasks such as hardware provisioning, software patching, setup, configuration, or backups.
  • Amazon Neptune can quickly and easily process large sets of user-profiles and interactions to build social networking applications.
  • Neptune enables highly interactive graph queries with high throughput to bring social features into your applications.

For example, if you are building a social feed into your application, you can use Neptune to provide results that prioritize showing your users the latest updates from their family, from friends whose updates they ‘Like,’ and from friends who live close to them.

27
Q

A Big Data analytics company writes data and log files in Amazon S3 buckets. The company now wants to stream the existing data files as well as any ongoing file updates from Amazon S3 to Amazon Kinesis Data Streams.

As a Solutions Architect, which of the following would you suggest as the fastest possible way of building a solution for this requirement?

A

Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams

  • You can achieve this by using AWS Database Migration Service (AWS DMS).
  • AWS DMS enables you to seamlessly migrate data from supported sources to relational databases, data warehouses, streaming platforms, and other data stores in AWS cloud.
  • The given requirement needs the functionality to be implemented in the least possible time.
  • You can use AWS DMS for such data-processing requirements.
  • AWS DMS lets you expand the existing application to stream data from Amazon S3 into Amazon Kinesis Data Streams for real-time analytics without writing and maintaining new code.
  • AWS DMS supports specifying Amazon S3 as the source and streaming services like Kinesis and Amazon Managed Streaming for Apache Kafka (Amazon MSK) as the target.
  • AWS DMS allows migration of full and change data capture (CDC) files to these services.
  • AWS DMS performs this task out of the box without any complex configuration or code development.
  • You can also configure an AWS DMS replication instance to scale up or down depending on the workload.
  • AWS DMS supports Amazon S3 as the source and Kinesis as the target, so data stored in an S3 bucket is streamed to Kinesis.
  • Several consumers, such as AWS Lambda, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, and applications built with the Kinesis Client Library (KCL), can consume the data concurrently to perform real-time analytics on the dataset.

Each AWS service in this architecture can scale independently as needed.

28
Q

The DevOps team at a leading social media company uses AWS OpsWorks, which is a fully managed configuration management service. OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining their infrastructure.

Can you identify the configuration management tools for which OpsWorks provides managed instances? (Select two)

A
  1. Chef
  2. Puppet
  • AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.
  • Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
  • OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across your Amazon EC2 instances or on-premises compute environments.
29
Q

A financial services company recently launched an initiative to improve the security of its AWS resources and has enabled AWS Shield Advanced across multiple AWS accounts owned by the company. Upon analysis, the company has found that the costs incurred are much higher than expected.

Which of the following would you attribute as the underlying reason for the unexpectedly high costs for AWS Shield Advanced service?

A

Consolidated billing has not been enabled.

  • All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once
  • If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API.
  • You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts.
30
Q

A pharma company is working on developing a vaccine for the COVID-19 virus. The researchers at the company want to process the reference healthcare data in an in-memory database that is highly available as well as HIPAA compliant.

As a solutions architect, which of the following AWS services would you recommend for this task?

A

ElastiCache for Redis

Amazon ElastiCache for Redis

  • Is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications.
  • Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store.
  • ElastiCache for Redis supports replication, high availability, and cluster sharding right out of the box.
  • Amazon ElastiCache for Redis is also a HIPAA Eligible Service.
31
Q

An e-commerce application uses an Amazon Aurora Multi-AZ deployment for its database. While analyzing the performance metrics, the engineering team has found that the database reads are causing high I/O and adding latency to the write requests against the database.

As an AWS Certified Solutions Architect Associate, what would you recommend to separate the read requests from the write requests?

A

Set up a read replica and modify the application to use the appropriate endpoint

  • An Amazon Aurora DB cluster consists of one or more DB instances and a cluster volume that manages the data for those DB instances.
  • An Aurora cluster volume is a virtual database storage volume that spans multiple Availability Zones, with each Availability Zone having a copy of the DB cluster data.

Two types of DB instances make up an Aurora DB cluster:

Primary DB instance

  • Supports read and write operations, and performs all of the data modifications to the cluster volume.
  • Each Aurora DB cluster has one primary DB instance.

Aurora Replica

  • Connects to the same storage volume as the primary DB instance and supports only read operations.
  • Each Aurora DB cluster can have up to 15 Aurora Replicas in addition to the primary DB instance.
  • Aurora automatically fails over to an Aurora Replica in case the primary DB instance becomes unavailable.
  • You can specify the failover priority for Aurora Replicas.
  • Aurora Replicas can also offload read workloads from the primary DB instance.

Aurora Replicas have two main purposes.

  • You can issue queries to them to scale the read operations for your application.
  • You typically do so by connecting to the reader endpoint of the cluster.
  • That way, Aurora can spread the load for read-only connections across as many Aurora Replicas as you have in the cluster.
  • Aurora Replicas also help to increase availability.
  • If the writer instance in a cluster becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as the new writer.
  • While setting up a Multi-AZ deployment for Aurora, you create an Aurora replica or reader node in a different AZ.

You use the reader endpoint for read-only connections for your Aurora cluster.

  • This endpoint uses a load-balancing mechanism to help your cluster handle a query-intensive workload.
  • The reader endpoint is the endpoint that you supply to applications that do reporting or other read-only operations on the cluster.
  • The reader endpoint load-balances connections to available Aurora Replicas in an Aurora DB cluster.
32
Q

A financial services company is looking to move its on-premises IT infrastructure to AWS Cloud. The company has multiple long-term server bound licenses across the application stack and the CTO wants to continue to utilize those licenses while moving to AWS.

As a solutions architect, which of the following would you recommend as the MOST cost-effective solution?

A

Use EC2 dedicated hosts

  • You can use Dedicated Hosts to launch Amazon EC2 instances on physical servers that are dedicated for your use.
  • Dedicated Hosts give you additional visibility and control over how instances are placed on a physical server, and you can reliably use the same physical server over time.

As a result, Dedicated Hosts enable you to use your existing server-bound software licenses like Windows Server and address corporate compliance and regulatory requirements.

33
Q

The engineering team at an e-commerce company wants to migrate from SQS Standard queues to FIFO queues with batching.

As a solutions architect, which of the following steps would you have in the migration checklist? (Select three)

A
  1. Delete the existing standard queue and recreate it as a FIFO queue
  2. Make sure that the name of the FIFO queue ends with the .fifo suffix
  3. Make sure that the throughput for the target FIFO queue does not exceed 3,000 messages per second

Amazon Simple Queue Service (SQS)

  • Is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
  • SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work.
  • Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

SQS offers two types of message queues.

Standard queues

  • Offer maximum throughput, best-effort ordering, and at-least-once delivery.

SQS FIFO queues

  • Are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
  • By default, FIFO queues support up to 3,000 messages per second with batching, or up to 300 messages per second (300 send, receive, or delete operations per second) without batching.
  • Therefore, using batching you can meet a throughput requirement of up to 3,000 messages per second.
  • The name of a FIFO queue must end with the .fifo suffix.
  • The suffix counts towards the 80-character queue name limit.
  • To determine whether a queue is FIFO, you can check whether the queue name ends with the suffix.
  • If you have an existing application that uses standard queues and you want to take advantage of the ordering or exactly-once processing features of FIFO queues, you need to configure the queue and your application correctly.
  • You can’t convert an existing standard queue into a FIFO queue.
  • To make the move, you must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue.
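A small boto3 sketch of creating the FIFO queue and sending a message to it; the queue name and message fields are placeholders.

import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in .fifo; content-based deduplication is optional
# and shown here only for illustration.
response = sqs.create_queue(
    QueueName="orders-queue.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)

# Sends to a FIFO queue require a MessageGroupId (and a deduplication ID
# when content-based deduplication is disabled).
sqs.send_message(
    QueueUrl=response["QueueUrl"],
    MessageBody='{"orderId": 42}',
    MessageGroupId="customer-1001",
)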
34
Q

The engineering team at an e-commerce company has been tasked with migrating to a serverless architecture. The team wants to focus on the key points of consideration when using Lambda as a backbone for this architecture.

As a Solutions Architect, which of the following options would you identify as correct for the given requirement? (Select three)

A

By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs.

  • Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources
  • Lambda functions always operate from an AWS-owned VPC.
  • By default, your function has the full ability to make network requests to any public internet address — this includes access to any of the public AWS APIs.

For example, your function can interact with AWS DynamoDB APIs to PutItem or Query for records.

  • You should only enable your functions for VPC access when you need to interact with a private resource located in a private subnet.
  • An RDS instance is a good example.
  • Once your function is VPC-enabled, all network traffic from your function is subject to the routing rules of your VPC/Subnet.
  • If your function needs to interact with a public resource, you will need a route through a NAT gateway in a public subnet.
  • Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold
  • Since Lambda functions can scale extremely quickly, you should have controls in place to notify you when you have a spike in concurrency.
  • A good idea is to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed your threshold.
  • You should create an AWS Budget so you can monitor costs on a daily basis.
  • If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code
  • You can configure your Lambda function to pull in additional code and content in the form of layers.
  • A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies.
  • With layers, you can use libraries in your function without needing to include them in your deployment package.
  • Layers let you keep your deployment package small, which makes development easier.
  • A function can use up to 5 layers at a time.
  • You can create layers, or use layers published by AWS and other AWS customers.
  • Layers support resource-based policies for granting layer usage permissions to specific AWS accounts, AWS Organizations, or all accounts.
  • The total unzipped size of the function and all layers can’t exceed the unzipped deployment package size limit of 250 MB.
35
Q

The DevOps team at an IT company is provisioning a two-tier application in a VPC with a public subnet and a private subnet. The team wants to use either a NAT instance or a NAT gateway in the public subnet to enable instances in the private subnet to initiate outbound IPv4 traffic to the internet but needs some technical assistance in terms of the configuration options available for the NAT instance and the NAT gateway.

As a solutions architect, which of the following options would you identify as CORRECT? (Select three)

A
  1. NAT instance can be used as a bastion server
  2. Security Groups can be associated with a NAT instance
  3. NAT instance supports port forwarding

A NAT instance or a NAT Gateway can be used in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet.

36
Q

A financial services company wants to identify any sensitive data stored on its Amazon S3 buckets. The company also wants to monitor and protect all data stored on S3 against any malicious activity.

As a solutions architect, which of the following solutions would you recommend to help address the given requirements?

A

Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use Amazon Macie to identify any sensitive data stored on S3

Amazon GuardDuty

  • Offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3.
  • GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs.
  • It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.

Amazon Macie

  • Is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data on Amazon S3.
  • Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names, addresses, and credit card numbers.
  • It also gives you constant visibility of the data security and data privacy of your data stored in Amazon S3.
37
Q

The engineering team at a company wants to use Amazon SQS to decouple components of the underlying application architecture. However, the team is concerned about the VPC-bound components accessing SQS over the public internet.

As a solutions architect, which of the following solutions would you recommend to address this use-case?

A

Use VPC endpoint to access Amazon SQS

AWS customers can access Amazon Simple Queue Service (Amazon SQS) from their Amazon Virtual Private Cloud (Amazon VPC) using VPC endpoints, without using public IPs, and without needing to traverse the public internet.

VPC endpoints for Amazon SQS

  • Are powered by AWS PrivateLink, a highly available, scalable technology that enables you to privately connect your VPC to supported AWS services.
  • Amazon VPC endpoints are easy to configure.
  • They also provide reliable connectivity to Amazon SQS without requiring an internet gateway, Network Address Translation (NAT) instance, VPN connection, or AWS Direct Connect connection.
  • With VPC endpoints, the data between your Amazon VPC and Amazon SQS queue is transferred within the Amazon network, helping protect your instances from internet traffic.

AWS PrivateLink

  • Simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet.
  • AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network.
  • AWS PrivateLink makes it easy to connect services across different accounts and VPCs to significantly simplify the network architecture.
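For illustration, a boto3 sketch of an Interface Endpoint for SQS. The VPC, subnet, and security group IDs are placeholders, and the service name is Region-specific.

import boto3

ec2 = boto3.client("ec2")

# Private DNS lets existing SQS endpoints resolve to the interface endpoint.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0abc1234def567890"],
    SecurityGroupIds=["sg-0abc1234def567890"],
    PrivateDnsEnabled=True,
)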
38
Q

A company runs its EC2 servers behind an Application Load Balancer along with an Auto Scaling group. The engineers at the company want to be able to install proprietary tools on each instance and perform a pre-activation status check of these tools whenever an instance is provisioned because of a scale-out event from an auto-scaling policy.

Which of the following options can be used to enable this custom action?

A

Use the Auto Scaling group lifecycle hook to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check

An Auto Scaling group

  • Contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.

Auto Scaling group lifecycle hooks

  • Enable you to perform custom actions as the Auto Scaling group launches or terminates instances.
  • Lifecycle hooks enable you to perform custom actions by pausing instances as an Auto Scaling group launches or terminates them.
  • When an instance is paused, it remains in a wait state either until you complete the lifecycle action using the complete-lifecycle-action command or the CompleteLifecycleAction operation, or until the timeout period ends (one hour by default).

For example, you could install or configure software on newly launched instances, or download log files from an instance before it terminates.
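
A boto3 sketch of registering a launch lifecycle hook and later signalling completion once the tools are installed (the group, hook, and instance names are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Register a launch lifecycle hook. Newly launched instances pause in the
# Pending:Wait state until the hook is completed or the timeout expires.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="install-proprietary-tools",
    AutoScalingGroupName="web-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=900,        # seconds the instance stays in the wait state
    DefaultResult="ABANDON",     # terminate the instance if the hook times out
)

# After the bootstrap script installs the tools and the pre-activation status
# check passes, tell the Auto Scaling group to continue with the launch.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="install-proprietary-tools",
    AutoScalingGroupName="web-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",
)
```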

39
Q

A leading online gaming company is migrating its flagship application to AWS Cloud for delivering its online games to users across the world. The company would like to use a Network Load Balancer (NLB) to handle millions of requests per second. The engineering team has provisioned multiple instances in a public subnet and specified these instance IDs as the targets for the NLB.

As a solutions architect, can you help the engineering team understand the correct routing mechanism for these target instances?

A

Traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance

A Network Load Balancer

  • Functions at the fourth layer of the Open Systems Interconnection (OSI) model.
  • It can handle millions of requests per second.
  • After the load balancer receives a connection request, it selects a target from the target group for the default rule.
  • It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.

Request Routing and IP Addresses

  • If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance.
  • The load balancer rewrites the destination IP address from the data packet before forwarding it to the target instance.
  • If you specify targets using IP addresses, you can route traffic to an instance using any private IP address from one or more network interfaces.
  • This enables multiple applications on an instance to use the same port.
  • Note that each network interface can have its own security group.
  • The load balancer rewrites the destination IP address before forwarding it to the target.
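
For illustration, a boto3 sketch of creating an NLB target group that targets instance IDs, so traffic is routed to each instance's primary private IP (the VPC and instance IDs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# With TargetType="instance", the NLB routes to the primary private IP on each
# instance's primary network interface; use TargetType="ip" to target any
# private IP of any network interface instead.
tg = elbv2.create_target_group(
    Name="game-servers",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}, {"Id": "i-0fedcba9876543210"}],
)
```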
40
Q

A retail company wants to roll out and test a blue-green deployment for its global application in the next 48 hours. Most of the customers use mobile phones which are prone to DNS caching. The company has only two days left for the annual Thanksgiving sale to commence.

As a Solutions Architect, which of the following options would you recommend to test the deployment on as many users as possible in the given time frame?

A

Blue/green deployment is a technique for releasing applications by shifting traffic between two identical environments running different versions of the application:

  • “Blue” is the currently running version
  • “Green” is the new version.
  • This type of deployment allows you to test features in the green environment without impacting the currently running version of your application.
  • When you’re satisfied that the green version is working properly, you can gradually reroute the traffic from the old blue environment to the new green environment.

Blue/green deployments can mitigate common risks associated with deploying software, such as downtime and limited rollback capability.

  • Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment
  • AWS Global Accelerator is a network layer service that directs traffic to optimal endpoints over the AWS global network, improving the availability and performance of your internet applications.
  • It provides two static anycast IP addresses that act as a fixed entry point to your application endpoints, such as your Application Load Balancers, Network Load Balancers, Elastic IP addresses, or Amazon EC2 instances, in a single AWS Region or in multiple AWS Regions.
  • AWS Global Accelerator uses endpoint weights to determine the proportion of traffic that is directed to endpoints in an endpoint group, and traffic dials to control the percentage of traffic that is directed to an endpoint group (an AWS region where your application is deployed).
  • While relying on the DNS service is a great option for blue/green deployments, it may not fit use-cases that require a fast and controlled transition of the traffic.
  • Some client devices and internet resolvers cache DNS answers for long periods; this DNS feature improves the efficiency of the DNS service as it reduces the DNS traffic across the Internet, and serves as a resiliency technique by preventing authoritative name-server overloads.
  • The downside of this in blue/green deployments is that you don’t know how long it will take before all of your users receive updated IP addresses when you update a record, change your routing preference or when there is an application failure.

With AWS Global Accelerator

  • You can shift traffic gradually or all at once between the blue and the green environments and vice-versa without being subject to DNS caching on client devices and internet resolvers; changes to traffic dials and endpoint weights take effect within seconds.
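
As a rough sketch of this idea (all ARNs below are placeholders), a boto3 call could shift roughly 10% of traffic to the green load balancer by adjusting endpoint weights within one endpoint group:

```python
import boto3

# The Global Accelerator control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Endpoint weights (0-255) set the traffic split between the blue and green
# load balancers in the same endpoint group; changes propagate within seconds
# and are not affected by client-side DNS caching.
ga.update_endpoint_group(
    EndpointGroupArn=(
        "arn:aws:globalaccelerator::111122223333:accelerator/EXAMPLE"
        "/listener/EXAMPLE/endpoint-group/EXAMPLE"
    ),
    EndpointConfigurations=[
        {"EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333"
                       ":loadbalancer/app/blue-alb/EXAMPLE",
         "Weight": 230},   # ~90% of traffic stays on blue
        {"EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333"
                       ":loadbalancer/app/green-alb/EXAMPLE",
         "Weight": 25},    # ~10% of traffic tests green
    ],
)
```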
41
Q

A financial services firm uses a high-frequency trading system and wants to write the log files into Amazon S3. The system will also read these log files in parallel on a near real-time basis. The engineering team wants to address any data discrepancies that might arise when the trading system overwrites an existing log file and then tries to read that specific log file.

Which of the following options BEST describes the capabilities of Amazon S3 relevant to this scenario?

A

A process replaces an existing object and immediately tries to read it. Amazon S3 always returns the latest version of the object

  • Amazon S3 delivers strong read-after-write consistency automatically, without changes to performance or availability, without sacrificing regional isolation for applications, and at no additional cost.
  • After a successful write of a new object or an overwrite of an existing object, any subsequent read request immediately receives the latest version of the object.
  • S3 also provides strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with any changes reflected.
  • Strong read-after-write consistency helps when you need to immediately read an object after a write.
  • For example, this helps when you often read and list objects immediately after writing them.

To summarize, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are strongly consistent. What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket.
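
A small boto3 sketch of the overwrite-then-read behavior described above (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example-trading-logs", "logs/2023-11-23/trades.log"  # placeholders

# Overwrite an existing log file...
s3.put_object(Bucket=bucket, Key=key, Body=b"latest trade data")

# ...and read it back immediately. Because S3 provides strong read-after-write
# consistency, this GET is guaranteed to return the bytes just written.
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
assert body == b"latest trade data"
```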

42
Q

A company has noticed that its application performance has deteriorated after a new Auto Scaling group was deployed a few days back. Upon investigation, the team found that the Launch Configuration selected for the Auto Scaling group uses an incorrect instance type that is not optimized to handle the application workload.

As a solutions architect, what would you recommend to provide a long term resolution for this issue?

A
  1. Create a new launch configuration to use the correct instance type.
  2. Modify the Auto Scaling group to use this new launch configuration.
  3. Delete the old launch configuration as it is no longer needed
  • A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances.
  • When you create a launch configuration, you specify information for the instances, including the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping.

It is not possible to modify a launch configuration once it is created.

  • The correct option is to create a new launch configuration to use the correct instance type.
  • Then modify the Auto Scaling group to use this new launch configuration.
  • Lastly, to clean up, delete the old launch configuration as it is no longer needed.
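
A boto3 sketch of those three steps (the launch configuration names, AMI ID, instance type, and security group are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# 1. Create a new launch configuration with the correct instance type.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-lc-v2",
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.xlarge",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# 2. Point the Auto Scaling group at the new launch configuration.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    LaunchConfigurationName="app-lc-v2",
)

# 3. Delete the old launch configuration once nothing references it.
autoscaling.delete_launch_configuration(LaunchConfigurationName="app-lc-v1")
```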
43
Q

A multi-national retail company has multiple business divisions, with each division having its own AWS account. The engineering team at the company would like to debug and trace data across these AWS accounts and visualize it in a centralized account.

As a Solutions Architect, which of the following solutions would you suggest for the given use-case?

A

X-Ray

AWS X-Ray

  • Helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture.
  • With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors.
  • X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.
  • You can use X-Ray to collect data across AWS Accounts.
  • The X-Ray agent can assume a role to publish data into an account different from the one in which it is running.
  • This enables you to publish data from various components of your application into a central account.
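
As a rough sketch of the cross-account idea only (the role ARN, service name, and IDs are placeholders; in practice the X-Ray SDK or daemon generates and publishes segments for you), a component could assume a role in the central account and publish a trace segment there:

```python
import json
import time
import boto3

# Assume a role in the central monitoring account (placeholder ARN).
creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::111122223333:role/central-xray-publisher",
    RoleSessionName="xray-cross-account",
)["Credentials"]

# X-Ray client using the assumed-role credentials, so the segment lands in
# the central account rather than the account the code runs in.
xray = boto3.client(
    "xray",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

segment = {
    "name": "orders-service",                       # placeholder service name
    "id": "70de5b6f19ff9a0a",                       # 16-hex-digit segment id
    "trace_id": "1-581cf771-a006649127e371903a2de979",
    "start_time": time.time() - 0.25,
    "end_time": time.time(),
}
xray.put_trace_segments(TraceSegmentDocuments=[json.dumps(segment)])
```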
44
Q

A startup’s cloud infrastructure consists of a few Amazon EC2 instances, Amazon RDS instances and Amazon S3 storage. A year into their business operations, the startup is incurring costs that seem too high for their business requirements.

Which of the following options represents a valid cost-optimization solution?

A

Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations

AWS Cost Explorer

  • Helps you identify under-utilized EC2 instances that may be downsized on an instance-by-instance basis within the same instance family.
  • It also helps you understand the potential impact on your AWS bill by taking into account your Reserved Instances and Savings Plans.

AWS Compute Optimizer

  • Recommends optimal AWS Compute resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.
  • Compute Optimizer helps you choose the optimal Amazon EC2 instance types, including those that are part of an Amazon EC2 Auto Scaling group, based on your utilization data.
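
For illustration, a boto3 sketch of pulling both reports, assuming the account has Cost Explorer and Compute Optimizer enabled:

```python
import boto3

# The Cost Explorer API is served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")
optimizer = boto3.client("compute-optimizer")

# Rightsizing report: idle and under-utilized EC2 instances.
rightsizing = ce.get_rightsizing_recommendation(Service="AmazonEC2")
for rec in rightsizing.get("RightsizingRecommendations", []):
    print(rec["CurrentInstance"]["ResourceId"], rec["RightsizingType"])

# Compute Optimizer: ML-based instance-type recommendations.
for rec in optimizer.get_ec2_instance_recommendations()["instanceRecommendations"]:
    options = [o["instanceType"] for o in rec["recommendationOptions"]]
    print(rec["instanceArn"], rec["finding"], options)
```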