Practice Exam 4 Flashcards
An IT company has built a custom data warehousing solution for a retail organization by using Amazon Redshift. As part of the cost optimizations, the company wants to move any historical data (any data older than a year) into S3, as the daily analytical reports consume data from just the last year. However, the analysts want to retain the ability to cross-reference this historical data along with the daily reports.
The company wants to develop a solution with the LEAST amount of effort and MINIMUM cost. As a solutions architect, which option would you recommend to facilitate this use-case?
- Use Redshift Spectrum to create Redshift cluster tables pointing to the underlying historical data in S3.
- The analytics team can then query this historical data to cross-reference it with the daily reports from Redshift.
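As a minimal sketch of what this could look like with the Redshift Data API via boto3 (the cluster identifier, database, IAM role ARN, and Glue database name are all hypothetical), you register an external schema whose tables point at the historical data in S3:

```python
import boto3

# Hypothetical identifiers - replace with your own cluster, role, and catalog database.
client = boto3.client("redshift-data", region_name="us-east-1")

# Register an external schema backed by the AWS Glue Data Catalog; external tables
# defined under it point at the historical data already exported to S3.
sql = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_history
FROM DATA CATALOG
DATABASE 'history_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""

client.execute_statement(
    ClusterIdentifier="retail-dw-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=sql,
)
```

Once external tables are defined over the S3 prefixes, analysts can join them with local Redshift tables in a single query, which is what enables the cross-referencing without loading the historical data back into the cluster.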
An IT company provides S3 bucket access to specific users within the same account for completing project-specific work. With changing business requirements, cross-account S3 access requests are also growing every month. The company is looking for a solution that can offer user-level as well as account-level access permissions for the data stored in S3 buckets.
As a Solutions Architect, which of the following would you suggest as the MOST optimized way of controlling access for this use-case?
Use Amazon S3 Bucket Policies
- Bucket policies in Amazon S3 can be used to grant or deny permissions across some or all of the objects within a single bucket.
- Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions.
- With bucket policies, you can grant users within your AWS Account or other AWS Accounts access to your Amazon S3 resources.
- You can further restrict access to specific resources based on certain conditions.
- For example, you can restrict access based on request time (Date Condition),
- whether the request was sent using SSL (Boolean Conditions),
- a requester’s IP address (IP Address Condition), or
- based on the requester’s client application (String Conditions). To identify these conditions, you use policy keys.
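As an illustrative sketch (the bucket name, account ID, user name, and CIDR range are hypothetical), a bucket policy that combines cross-account access with IP address and SSL conditions could be applied with boto3 like this:

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Grant read access to a user in another AWS account (cross-account).
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/partner-analyst"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-project-bucket/*",
            # IP Address condition restricts where the requests may come from.
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        },
        {
            # Deny any request that is not sent over SSL (Boolean condition).
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-project-bucket",
                "arn:aws:s3:::example-project-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
    ],
}

s3.put_bucket_policy(Bucket="example-project-bucket", Policy=json.dumps(policy))
```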
A company has moved its business-critical data to an Amazon EFS file system, which will be accessed by multiple EC2 instances.
As an AWS Certified Solutions Architect Associate, which of the following would you recommend to exercise access control such that only the permitted EC2 instances can read from the EFS file system? (Select three)
- Use VPC security groups to control the network traffic to and from your file system
- Attach an IAM policy to your file system to control clients who can mount your file system with the required permissions
- Use EFS Access Points to manage application access
- You control which EC2 instances can access your EFS file system by using VPC security group rules and AWS Identity and Access Management (IAM) policies.
- Use VPC security groups to control the network traffic to and from your file system.
- Attach an IAM policy to your file system to control which clients can mount your file system and with what permissions.
- Use EFS Access Points to manage application access.
- Control access to files and directories with POSIX-compliant user and group-level permissions.
- Files and directories in an Amazon EFS file system support standard Unix-style read, write, and execute permissions based on the user ID and group IDs.
- When an NFS client mounts an EFS file system without using an access point, the user ID and group ID provided by the client are trusted.
- You can use EFS access points to override the user ID and group ID used by the NFS client.
- When users attempt to access files and directories, Amazon EFS checks their user IDs and group IDs to verify that each user has permission to access the objects
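For example, an access point that forces a specific POSIX identity and root directory could be created with boto3 as follows (the file system ID, UID/GID, path, and permissions are hypothetical):

```python
import boto3

efs = boto3.client("efs")

# The access point overrides whatever UID/GID the NFS client presents,
# so every request through it is evaluated as this POSIX user.
response = efs.create_access_point(
    FileSystemId="fs-0123456789abcdef0",       # hypothetical file system ID
    PosixUser={"Uid": 1001, "Gid": 1001},
    RootDirectory={
        "Path": "/app-data",
        "CreationInfo": {
            "OwnerUid": 1001,
            "OwnerGid": 1001,
            "Permissions": "750",              # POSIX permissions for the directory
        },
    },
    Tags=[{"Key": "Name", "Value": "app-access-point"}],
)
print(response["AccessPointArn"])
```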
A cybersecurity company uses a fleet of EC2 instances to run a proprietary application. The infrastructure maintenance group at the company wants to be notified via an email whenever the CPU utilization for any of the EC2 instances breaches a certain threshold.
Which of the following services would you use for building a solution with the LEAST amount of development effort? (Select two)
Amazon SNS - Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging.
Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. Amazon CloudWatch allows you to monitor AWS cloud resources and the applications you run on AWS.
You can use CloudWatch Alarms to send an email via SNS whenever any of the EC2 instances breaches a certain threshold. Hence both these options are correct.
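A minimal sketch with boto3 (the topic name, email address, instance ID, and threshold are all hypothetical):

```python
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Create a topic and subscribe the maintenance group's email address to it.
topic_arn = sns.create_topic(Name="ec2-cpu-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="infra-team@example.com")

# The alarm fires when average CPU stays above 80% for two 5-minute periods
# and notifies the SNS topic, which in turn emails the subscribers.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-i-0abc1234def567890",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc1234def567890"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)
```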
Your company is deploying a website running on Elastic Beanstalk. The website takes over 45 minutes to install and contains both static and dynamic files that must be generated during the installation process.
As a Solutions Architect, you would like to bring the time to create a new instance in your Elastic Beanstalk deployment down to less than 2 minutes. What do you recommend? (Select two)
- AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
- You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring.
- At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.
- When you create an AWS Elastic Beanstalk environment, you can specify an Amazon Machine Image (AMI) to use instead of the standard Elastic Beanstalk AMI included in your platform version.
- A custom AMI can improve provisioning times when instances are launched in your environment if you need to install a lot of software that isn’t included in the standard AMIs.
- Create a Golden AMI with the static installation components already set up - A Golden AMI is an AMI that you standardize through configuration, consistent security patching, and hardening.
- It also contains agents you approve for logging, security, performance monitoring, etc. For the given use-case, you can have the static installation components already set up via the golden AMI.
- Use EC2 user data to customize the dynamic installation parts at boot time - EC2 instance user data is the data that you specify in the form of a configuration script while launching your instance.
- You can use EC2 user data to customize the dynamic installation parts at boot time, rather than installing the application itself at boot time.
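One way this could look with boto3 (the application name, environment name, golden AMI ID, and solution stack name are hypothetical; valid stack names can be listed with list_available_solution_stacks). The ImageId option in the aws:autoscaling:launchconfiguration namespace is what points the environment at the custom AMI:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_environment(
    ApplicationName="coding-site",                 # hypothetical application
    EnvironmentName="coding-site-prod",
    # Example placeholder; pick a real stack from list_available_solution_stacks().
    SolutionStackName="64bit Amazon Linux 2 v5.8.0 running Node.js 18",
    OptionSettings=[
        {
            # Launch instances from the golden AMI that already contains the
            # static installation components, instead of the stock platform AMI.
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "ImageId",
            "Value": "ami-0abcd1234example",       # hypothetical golden AMI
        }
    ],
)
```

The dynamic parts of the installation can then be handled by a short user data script at boot, keeping instance creation time well under the 2-minute target.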
The development team at an e-commerce startup has set up multiple microservices running on EC2 instances under an Elastic Load Balancer. The team wants to route traffic to multiple back-end services based on the content of the request.
Which of the following types of load balancers would allow routing based on the content of the request?
An Application Load Balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model.
- After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply and then selects a target from the target group for the rule action.
- You can configure listener rules to route requests to different target groups based on the content of the application traffic.
- Each target group can be an independent microservice, therefore this option is correct.
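A sketch of a content-based routing rule with boto3 (the listener ARN, target group ARN, path pattern, and priority are hypothetical):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route any request whose path starts with /orders/ to the orders microservice's
# target group; other listener rules can send /payments/* elsewhere, and so on.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "listener/app/demo-alb/1234567890abcdef/abcdef1234567890",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders/*"]}],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                              "targetgroup/orders-svc/0123456789abcdef",
        }
    ],
)
```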
An engineering team wants to examine the feasibility of the user data feature of Amazon EC2 for an upcoming project.
Which of the following are true about the EC2 user data configuration? (Select two)
- User Data is generally used to perform common automated configuration tasks and even run scripts after the instance starts.
- When you launch an instance in Amazon EC2, you can pass two types of user data - shell scripts and cloud-init directives.
- You can also pass this data into the launch wizard as plain text or as a file.
- By default, scripts entered as user data are executed with root user privileges - Scripts entered as user data are executed as the root user, so they do not need the sudo command in the script.
- Any files you create will be owned by root; if you need non-root users to have file access, you should modify the permissions accordingly in the script.
- By default, user data runs only during the boot cycle when you first launch an instance - By default, user data scripts and cloud-init directives run only during the boot cycle when you first launch an instance.
- You can update your configuration to ensure that your user data scripts and cloud-init directives run every time you restart your instance.
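For illustration, a user data shell script can be passed directly to run_instances with boto3 (the AMI ID, instance type, and script contents are hypothetical); it runs as root during the first boot cycle, so no sudo is needed:

```python
import boto3

ec2 = boto3.client("ec2")

# Runs once, as root, during the first boot cycle of the instance.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0abcd1234example",   # hypothetical Amazon Linux AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes this for you
)
```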
For security purposes, a team has decided to put their instances in a private subnet. They plan to deploy VPC endpoints so that these instances can access AWS services privately. The members of the team would like to know about the only two AWS services that require a Gateway Endpoint instead of an Interface Endpoint.
As a solutions architect, which of the following services would you suggest for this requirement? (Select two)
- Amazon S3
- DynamoDB
- A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
- Instances in your VPC do not require public IP addresses to communicate with resources in the service.
- Traffic between your VPC and the other service does not leave the Amazon network.
- Endpoints are virtual devices.
- They are horizontally scaled, redundant, and highly available VPC components.
- They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
- There are two types of VPC endpoints:
- Interface Endpoints - An Interface Endpoint is an Elastic Network Interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service.
- Gateway Endpoints - A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported: Amazon S3 and DynamoDB.
- You must remember that only these two services use a VPC gateway endpoint.
- The rest of the AWS services use VPC interface endpoints.
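A gateway endpoint for S3 could be created like this with boto3 (the VPC ID, route table ID, and Region are hypothetical); DynamoDB works the same way with its own service name:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoints are added as targets in the route tables you specify,
# so instances in those subnets reach S3 without leaving the Amazon network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",   # use ...us-east-1.dynamodb for DynamoDB
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```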
A media agency stores its re-creatable assets in Amazon S3 buckets. The assets are accessed by a large number of users for the first few days, and the frequency of access drops drastically after a week. Although the assets are accessed only occasionally after the first week, they must remain immediately accessible when required. The cost of maintaining all the assets on S3 storage is turning out to be very expensive, and the agency is looking at reducing costs as much as possible.
As a Solutions Architect, can you suggest a way to lower the storage costs while fulfilling the business requirements?
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days - S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed.
- Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA.
- S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed and re-creatable data but do not require the availability and resilience of S3 Standard or S3 Standard-IA.
- Objects must remain in S3 Standard for a minimum of 30 days before a lifecycle policy can transition them to S3 One Zone-IA.
- S3 One Zone-IA offers the same high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee.
- S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.
- You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
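A sketch of such a lifecycle rule with boto3 (the bucket name and rule ID are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to S3 One Zone-IA 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-assets-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-onezone-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # empty prefix applies to the whole bucket
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            }
        ]
    },
)
```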
The engineering team at an e-commerce company is working on cost optimizations for EC2 instances. The team wants to manage the workload using a mix of on-demand and spot instances across multiple instance types. They would like to create an Auto Scaling group with a mix of these instances.
Which of the following options would allow the engineering team to provision the instances for this use-case?
Only a launch template lets you provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost.
A launch template is similar to a launch configuration, in that it specifies instance configuration information such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances.
Also, defining a launch template instead of a launch configuration allows you to have multiple versions of a template.
With launch templates, you can provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost.
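As an illustrative sketch (the launch template ID, subnets, instance types, and the On-Demand/Spot split are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="mixed-fleet-asg",
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0123456789abcdef0",
                "Version": "$Latest",
            },
            # The instance types the group is allowed to launch.
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "c5.large"},
            ],
        },
        "InstancesDistribution": {
            # Keep 2 On-Demand instances as a base, then run half of the
            # remaining capacity On-Demand and half on Spot.
            "OnDemandBaseCapacity": 2,
            "OnDemandPercentageAboveBaseCapacity": 50,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```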
Your company runs a website for evaluating coding skills. As a Solutions Architect, you’ve designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend is using an RDS PostgreSQL database. Caching is implemented using a Redis ElastiCache cluster. You would like to increase the security of your authentication to Redis from the Lambda function, leveraging a username and password combination.
As a solutions architect, which of the following options would you recommend?
Use Redis Auth - Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications.
- Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store.
- ElastiCache for Redis supports replication, high availability, and cluster sharding right out of the box.
- IAM Auth is not supported by ElastiCache.
- Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands, thereby improving data security.
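For example, a Redis AUTH token can be set when the replication group is created (the identifiers, node type, and token value are hypothetical); an auth token requires in-transit encryption to be enabled:

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="coding-site-cache",
    ReplicationGroupDescription="Redis cache with AUTH enabled",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    TransitEncryptionEnabled=True,    # required when AuthToken is used
    AuthToken="a-long-random-token-from-secrets-manager",   # hypothetical token value
)
```

The Lambda function then supplies the same token via the Redis AUTH command when it connects to the cluster.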
A company manages a multi-tier social media application that runs on EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. As a solutions architect, you have been tasked to make the application more resilient to periodic spikes in request rates.
Which of the following solutions would you recommend for the given use-case? (Select two)
You can use Aurora Replicas and a CloudFront distribution to make the application more resilient to spikes in request rates.
- Use Aurora Replica
- Use CloudFront distribution in front of the Application Load Balancer
Aurora Replicas have two main purposes.
You can issue queries to them to scale the read operations for your application.
- You typically do so by connecting to the reader endpoint of the cluster.
- That way, Aurora can spread the load for read-only connections across as many Aurora Replicas as you have in the cluster.
Aurora Replicas also help to increase availability.
- If the writer instance in a cluster becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as the new writer.
- Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region.
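Adding a reader to an existing Aurora cluster is just a matter of creating a DB instance inside that cluster; a boto3 sketch with hypothetical identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora Replica (reader) in the existing cluster; Aurora spreads
# read-only connections made through the cluster's reader endpoint across it.
rds.create_db_instance(
    DBInstanceIdentifier="social-app-reader-1",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="social-app-cluster",   # hypothetical existing cluster
)
```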
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
- CloudFront points of presence (POPs) (edge locations) make sure that popular content can be served quickly to your viewers.
- CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content.
- CloudFront offers an origin failover feature to help support your data resiliency needs.
- CloudFront is a global service that delivers your content through a worldwide network of data centers called edge locations or points of presence (POPs).
- If your content is not already cached in an edge location, CloudFront retrieves it from an origin that you’ve identified as the source for the definitive version of the content.
A leading social media analytics company is contemplating moving its Dockerized application stack into the AWS Cloud. The company is not sure about the pricing for using Elastic Container Service (ECS) with the EC2 launch type compared to the Elastic Container Service (ECS) with the Fargate launch type.
Which of the following is correct regarding the pricing for these two services?
- ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used.
- ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests.
- Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service.
- ECS allows you to easily run, scale, and secure Docker container applications on AWS.
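The Fargate pricing dimensions show up directly in the task definition, where the containerized application requests its vCPU and memory; a boto3 sketch with hypothetical names and values:

```python
import boto3

ecs = boto3.client("ecs")

# For the Fargate launch type, you pay for the task-level cpu and memory
# requested here, not for any underlying EC2 instances or EBS volumes.
ecs.register_task_definition(
    family="analytics-web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",        # 0.25 vCPU
    memory="512",     # 512 MiB
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/analytics-web:latest",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
)
```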
A Big Data analytics company wants to set up an AWS cloud architecture that throttles requests in case of sudden traffic spikes. The company is looking for AWS services that can be used for buffering or throttling to handle such traffic variations.
Which of the following services can be used to support this requirement?
Throttling is the process of limiting the number of requests an authorized program can submit to a given operation in a given amount of time.
Amazon API Gateway, Amazon SQS and Amazon Kinesis -
- To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API using the token bucket algorithm, where a token counts for a request.
- Specifically, API Gateway sets a limit on a steady-state rate and a burst of request submissions against all APIs in your account.
- In the token bucket algorithm, the burst is the maximum bucket size.
Amazon SQS - Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers buffer capabilities to smooth out temporary volume spikes without losing messages or increasing latency.
Amazon Kinesis -
- Amazon Kinesis is a fully managed, scalable service that can ingest, buffer, and process streaming data in real-time.
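Stage-level throttling limits can be applied to every method of a deployed REST API with a patch operation; a boto3 sketch (the API ID, stage name, and limits are hypothetical):

```python
import boto3

apigateway = boto3.client("apigateway")

# Cap the steady-state rate at 100 requests/second with a burst (maximum token
# bucket size) of 200 for every method in the prod stage.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",            # hypothetical REST API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "100"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "200"},
    ],
)
```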
The business analytics team at a company has been running ad-hoc queries on Oracle and PostgreSQL services on Amazon RDS to prepare daily reports for senior management. To facilitate the business analytics reporting, the engineering team now wants to continuously replicate this data and consolidate these databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift.
As a solutions architect, which of the following would you recommend as the MOST resource-efficient solution that requires the LEAST amount of development time without the need to manage the underlying infrastructure?
Use AWS Database Migration Service to replicate the data from the databases into Amazon Redshift
- AWS Database Migration Service helps you migrate databases to AWS quickly and securely.
- The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
- With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.
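Once the source and target endpoints and a replication instance exist, the continuous replication itself is a single task; a boto3 sketch with hypothetical ARNs and a catch-all table mapping:

```python
import json
import boto3

dms = boto3.client("dms")

# full-load-and-cdc copies the existing data and then keeps streaming changes,
# which is what continuously replicates the RDS databases into Redshift.
dms.create_replication_task(
    ReplicationTaskIdentifier="rds-to-redshift",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",     # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",     # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",   # hypothetical
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }
        ]
    }),
)
```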
A media company wants a low-latency way to distribute live sports results, which are delivered via a proprietary application using the UDP protocol.
As a solutions architect, which of the following solutions would you recommend such that it offers the BEST performance for this use case?
Use Global Accelerator to provide a low latency way to distribute live sports results
- AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you offer to your global users.
- AWS Global Accelerator is easy to set up, configure, and manage.
- It provides static IP addresses that provide a fixed entry point to your applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and Availability Zones.
- AWS Global Accelerator always routes user traffic to the optimal endpoint based on performance, reacting instantly to changes in application health, your user’s location, and policies that you configure.
- Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP.
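A sketch of the UDP setup with boto3 (the accelerator name, port, endpoint group Region, and NLB ARN are hypothetical); the Global Accelerator API is called through the us-west-2 Region regardless of where your endpoints live:

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Two static anycast IP addresses are allocated for the accelerator.
accelerator = ga.create_accelerator(
    Name="sports-results", IpAddressType="IPV4", Enabled=True
)

# A UDP listener for the proprietary application's traffic.
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 4000, "ToPort": 4000}],
)

# Route the traffic to a Network Load Balancer endpoint in one Region.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                          "loadbalancer/net/sports-nlb/0123456789abcdef",
            "Weight": 128,
        }
    ],
)
```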
An e-commerce company has copied 1 PB of data from its on-premises data center to an Amazon S3 bucket in the us-west-1 Region using an AWS Direct Connect link. The company now wants to copy the data to another S3 bucket in the us-east-1 Region. The on-premises data center does not allow the use of AWS Snowball.
As a Solutions Architect, which of the following would you recommend to accomplish this?
Copy data from the source bucket to the destination bucket using the aws s3 sync command
- The aws s3 sync command uses the CopyObject API to copy objects between S3 buckets.
- The sync command lists the source and target buckets to identify objects that are in the source bucket but that aren’t in the target bucket.
- The command also identifies objects in the source bucket that have different LastModified dates than the objects that are in the target bucket.
- The sync command on a versioned bucket copies only the current version of the object—previous versions aren’t copied.
- By default, this preserves object metadata, but the access control lists (ACLs) are set to FULL_CONTROL for your AWS account, which removes any additional ACLs.
- If the operation fails, you can run the sync command again without duplicating previously copied objects.
You can use the command like so:
aws s3 sync s3://DOC-EXAMPLE-BUCKET-SOURCE s3://DOC-EXAMPLE-BUCKET-TARGET