Practice Exam Questions Flashcards

1
Q

A company needs to ensure that its AWS cloud environment is protected from DDoS attacks. What AWS service would be the most appropriate for this requirement?

A

AWS Shield provides DDoS protection and is specifically designed to safeguard AWS applications against DDoS attacks.

2
Q

What service is primarily used for filtering traffic to applications?

A

AWS WAF is primarily used for filtering traffic to applications

3
Q

What service is a threat detection service that continuously monitors for malicious activity?

A

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity

4
Q

What service is used for security assessment and compliance of applications?

A

Amazon Inspector is used for security assessment and compliance of applications

5
Q

An application requires a database solution that can automatically scale to meet sudden increases in demand while maintaining consistent performance. What two AWS services would best meet these requirements and why?

A

Amazon DynamoDB with on-demand capacity can automatically adjust to varying loads, and Amazon ElastiCache enhances database performance through caching, handling spikes efficiently.
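
As a quick boto3 sketch of the DynamoDB half of this answer (the table name and key schema are made-up placeholders), on-demand mode is selected at table creation:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table name and key schema, for illustration only.
dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity: no read/write capacity to provision
)
```

With PAY_PER_REQUEST billing, no ProvisionedThroughput is specified; DynamoDB scales request handling automatically and bills per request.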

6
Q

What AWS service offers the most cost-effective solution for storing infrequently accessed data over a long period?

A

Amazon S3 Glacier is designed for long-term storage with infrequent access, making it the most cost-effective.

7
Q

A Solutions Architect is designing a web application that experiences seasonal traffic. To optimize costs, what combination of AWS services should be used for compute resources and why?

A

Amazon EC2 Reserved Instances provide a significant discount for predictable usage, ideal for expected seasonal traffic. Amazon EC2 Spot Instances can offer cost savings for flexible, non-critical components of the workload.

8
Q

An organization wants to secure its AWS infrastructure by ensuring that access keys are not embedded in code or left in accessible locations. What AWS service or feature should be used to achieve this best practice?

A

AWS Identity and Access Management (IAM) allows for the creation of roles and temporary credentials, reducing the need to store access keys within code.
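
As a hedged sketch of what this looks like in practice (the role ARN and session name are placeholders), an application can rely on an instance profile or on AWS STS for short-lived credentials instead of embedding access keys:

```python
import boto3

# On an EC2 instance with an attached IAM instance profile, the SDK obtains
# temporary credentials from the instance metadata service automatically --
# no access keys appear in code or in configuration files.
s3 = boto3.client("s3")
s3.list_buckets()

# Alternatively, assume a role explicitly to get short-lived credentials.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/app-role",  # placeholder ARN
    RoleSessionName="temporary-app-session",
)["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```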

9
Q

What AWS service is used for managing encryption keys?

A

AWS Key Management Service (KMS)

10
Q

What AWS service is used for tracking resource configurations and changes?

A

AWS Config

11
Q

What AWS service is used for monitoring and logging?

A

Amazon CloudWatch

12
Q

A Solutions Architect is designing a cost-optimized architecture for a workload that has predictable, consistent performance requirements. What EC2 purchasing option would be the most cost-effective?

A

Reserved instances offer significant savings for consistent, predictable workloads.

13
Q

Spot Instances are best for what type of workloads?

A

flexible, interruptible workloads

14
Q

Dedicated Hosts are typically used for what reason?

A

Compliance requirements

15
Q

A Solutions Architect needs to design a system that can handle sudden, unpredictable spikes in web traffic. What AWS service combination is the best for this requirement?

A

ASG with ELB. Auto Scaling dynamically adjusts EC2 capacity, while Elastic Load Balancing distributes traffic, effectively managing sudden spikes.
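
For illustration only (the group name and target value are assumptions), a target tracking scaling policy like the following lets the Auto Scaling group add or remove instances as traffic spikes, while the load balancer spreads requests across whatever instances are in service:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The Auto Scaling group is assumed to already exist and to be attached to an
# ELB target group; "web-asg" and the 60% CPU target are placeholders.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # keep average CPU around 60%
    },
)
```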

16
Q

A Solutions Architect needs to design a secure communication channel for transferring sensitive data between an Amazon EC2 instance and an S3 bucket. What combination of actions should be taken to ensure data security and why?

A

Implement server-side encryption with AWS KMS and restrict S3 bucket access with IAM roles. Server-side encryption with KMS encrypts the data at rest in S3, enhancing security. Using IAM roles to restrict bucket access ensures that only authorized EC2 instances can access the data.
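
A minimal sketch of the upload side (bucket, object key, and KMS key alias are placeholders); the EC2 instance is assumed to use an IAM role rather than stored keys:

```python
import boto3

s3 = boto3.client("s3")  # credentials come from the instance's IAM role

with open("summary.csv", "rb") as data:
    s3.put_object(
        Bucket="sensitive-data-bucket",     # placeholder bucket
        Key="reports/2024/summary.csv",     # placeholder key
        Body=data,
        ServerSideEncryption="aws:kms",     # SSE-KMS encryption at rest
        SSEKMSKeyId="alias/app-data-key",   # placeholder KMS key alias
    )
```

The bucket policy or the role's IAM policy can then limit s3:GetObject and s3:PutObject on this bucket to that role only.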

17
Q

A Solutions Architect needs to improve the data processing speed of a large-scale analytics application. The current setup uses multiple EC2 instances accessing data stored in Amazon S3. What service should be implemented to enhance the processing speed without significant changes to the existing infrastructure?

A

Amazon ElastiCache can significantly improve the data processing speed by caching frequently accessed data, reducing the need to fetch it repeatedly from Amazon S3.

18
Q

What AWS service is a fully managed NoSQL database service?

A

Amazon DynamoDB

19
Q

What is the main purpose of AWS IAM?

A

AWS IAM manages users and their levels of access to the AWS console.

20
Q

What does Amazon S3 stand for?

A

Amazon Simple Storage Service.

21
Q

What is the use of Amazon EC2?

A

EC2 provides scalable computing capacity in the AWS cloud.

22
Q

What does Auto Scaling do in AWS?

A

Auto Scaling helps you ensure that you have the correct number of EC2 instances available.

23
Q

A social photo-sharing company uses Amazon Simple Storage Service (Amazon S3) to store the images uploaded by the users. These images are kept encrypted in Amazon S3 by using AWS Key Management Service (AWS KMS), and the company manages its own AWS KMS keys for encryption. A member of the DevOps team accidentally deleted the AWS KMS key a day ago, thereby rendering the users’ photo data unrecoverable. You have been contacted by the company to consult them on possible solutions to this crisis.

As a solutions architect, what steps would you recommend to solve this issue?

A

As the AWS KMS key was deleted a day ago, it must be in the ‘pending deletion’ status and hence you can just cancel the KMS key deletion and recover the key

AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses hardware security modules that have been validated under FIPS 140-2.

Deleting an AWS KMS key in AWS Key Management Service (AWS KMS) is destructive and potentially dangerous. Therefore, AWS KMS enforces a waiting period. To delete a KMS key in AWS KMS, you schedule key deletion. You can set the waiting period from a minimum of 7 days up to a maximum of 30 days. The default waiting period is 30 days. During the waiting period, the KMS key status and key state is Pending deletion. To recover the KMS key, you can cancel key deletion before the waiting period ends. After the waiting period ends, you cannot cancel key deletion, and AWS KMS deletes the KMS key.
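
A minimal boto3 sketch of the recovery (the key ID is a placeholder); it only works while the key is still in the Pending deletion state:

```python
import boto3

kms = boto3.client("kms")

KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Cancel the scheduled deletion; valid only during the waiting period.
kms.cancel_key_deletion(KeyId=KEY_ID)

# The key comes back in the Disabled state, so re-enable it before use.
kms.enable_key(KeyId=KEY_ID)
```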

24
Q

A media agency stores its re-creatable assets in Amazon Simple Storage Service (Amazon S3) buckets. The assets are accessed by a large number of users for the first few days, and the frequency of access falls drastically after a week. Although the assets would be accessed only occasionally after the first week, they must continue to be immediately accessible when required. The cost of maintaining all the assets on Amazon S3 storage is turning out to be very expensive, and the agency is looking at reducing costs as much as possible.

As an AWS Certified Solutions Architect – Associate, can you suggest a way to lower the storage costs while fulfilling the business requirements?

A

Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

Amazon S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), Amazon S3 One Zone-IA stores data in a single Availability Zone (AZ) and costs 20% less than Amazon S3 Standard-IA. Amazon S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed and re-creatable data but do not require the availability and resilience of Amazon S3 Standard or Amazon S3 Standard-IA. The minimum storage duration is 30 days before you can transition objects from Amazon S3 Standard to Amazon S3 One Zone-IA.
Amazon S3 One Zone-IA offers the same high durability, high throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee. S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across Amazon S3 Standard, Amazon S3 Intelligent-Tiering, Amazon S3 Standard-IA, and Amazon S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
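
As an illustrative sketch (the bucket name and rule ID are placeholders), such a lifecycle transition can be configured with boto3 as follows:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="media-assets-bucket",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-onezone-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects in the bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "ONEZONE_IA"}
                ],
            }
        ]
    },
)
```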

25
Q

A gaming company is looking at improving the availability and performance of its global flagship application which utilizes User Datagram Protocol and needs to support fast regional failover in case an AWS Region goes down. The company wants to continue using its own custom Domain Name System (DNS) service.

What AWS service represents the best solution for this use-case?

A

AWS Global Accelerator

AWS Global Accelerator utilizes the Amazon global network, allowing you to improve the performance of your applications by lowering first-byte latency (the round-trip time for a packet to go from a client to your endpoint and back again) and jitter (the variation of latency), and increasing throughput (the amount of data transferred per unit of time) as compared to the public internet.

AWS Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.

26
Q

A software engineering intern at an e-commerce company is documenting the process flow to provision Amazon EC2 instances via the Amazon EC2 API. These instances are to be used for an internal application that processes Human Resources payroll data. He wants to highlight those volume types that cannot be used as a boot volume.

Can you help the intern by identifying those storage volume types that CANNOT be used as boot volumes while creating the instances?

A

Cold HDD (sc1)
Throughput Optimized HDD (st1)

The Amazon EBS volume types fall into two categories:
Solid state drive (SSD) backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS.
Hard disk drive (HDD) backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS.
Throughput Optimized HDD (st1) and Cold HDD (sc1) volume types CANNOT be used as a boot volume, so these two options are correct.

27
Q

A media company runs a photo-sharing web application that is accessed across three different countries. The application is deployed on several Amazon Elastic Compute Cloud (Amazon EC2) instances running behind an Application Load Balancer. With new government regulations, the company has been asked to block access from two countries and allow access only from the home country of the company.

What configuration should be used to meet this changed requirement?

A

Configure AWS Web Application Firewall (AWS WAF) on the Application Load Balancer in an Amazon Virtual Private Cloud (Amazon VPC)

You can use AWS WAF with your Application Load Balancer to allow or block requests based on the rules in a web access control list (web ACL). Geographic (Geo) Match Conditions in AWS WAF allows you to use AWS WAF to restrict application access based on the geographic location of your viewers. With geo match conditions you can choose the countries from which AWS WAF should allow access.
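
As a rough boto3 sketch of such a setup (the web ACL name, the allowed country, and the load balancer ARN are placeholders), a geo match rule allows the home country and the default action blocks everything else:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # REGIONAL scope for an ALB

web_acl = wafv2.create_web_acl(
    Name="home-country-only",        # placeholder name
    Scope="REGIONAL",
    DefaultAction={"Block": {}},     # block anything not explicitly allowed
    Rules=[
        {
            "Name": "allow-home-country",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}},  # home country
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "allow-home-country",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "home-country-only",
    },
)

# Attach the web ACL to the Application Load Balancer (ARN is a placeholder).
wafv2.associate_web_acl(
    WebACLArn=web_acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
)
```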

28
Q

The DevOps team at an e-commerce company wants to perform some maintenance work on a specific Amazon EC2 instance that is part of an Auto Scaling group using a step scaling policy. The team is facing a maintenance challenge - every time the team deploys a maintenance patch, the instance health check status shows as out of service for a few minutes. This causes the Auto Scaling group to provision another replacement instance immediately.

As a solutions architect, which are the MOST time/resource efficient steps that you would recommend so that the maintenance work can be completed at the earliest? (Select two)

A

Put the instance into the Standby state and then update the instance by applying the maintenance patch. Once the instance is ready, you can exit the Standby state and then return the instance to service - You can put an instance that is in the InService state into the Standby state, update some software or troubleshoot the instance, and then return the instance to service. Instances that are on standby are still part of the Auto Scaling group, but they do not actively handle application traffic.
Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can manually set the instance’s health status back to healthy and activate the ReplaceUnhealthy process type again - The ReplaceUnhealthy process terminates instances that are marked as unhealthy and then creates new instances to replace them. While this process is suspended, Amazon EC2 Auto Scaling stops replacing instances that are marked as unhealthy, although instances that fail EC2 or Elastic Load Balancing health checks are still marked as unhealthy. As soon as you resume the ReplaceUnhealthy process, Amazon EC2 Auto Scaling replaces instances that were marked unhealthy while this process was suspended.
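
A hedged boto3 sketch of both approaches (the group name and instance ID are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

GROUP, INSTANCE = "web-asg", "i-0123456789abcdef0"  # placeholders

# Option 1: move the instance to Standby, patch it, then return it to service.
autoscaling.enter_standby(
    AutoScalingGroupName=GROUP,
    InstanceIds=[INSTANCE],
    ShouldDecrementDesiredCapacity=True,  # avoid launching a replacement
)
# ... apply the maintenance patch ...
autoscaling.exit_standby(AutoScalingGroupName=GROUP, InstanceIds=[INSTANCE])

# Option 2: suspend ReplaceUnhealthy, patch, mark healthy, then resume.
autoscaling.suspend_processes(
    AutoScalingGroupName=GROUP, ScalingProcesses=["ReplaceUnhealthy"]
)
# ... apply the maintenance patch ...
autoscaling.set_instance_health(InstanceId=INSTANCE, HealthStatus="Healthy")
autoscaling.resume_processes(
    AutoScalingGroupName=GROUP, ScalingProcesses=["ReplaceUnhealthy"]
)
```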

29
Q

A new DevOps engineer has just joined a development team and wants to understand the replication capabilities for Amazon RDS Multi-AZ deployment as well as Amazon RDS Read-replicas.

What correctly summarizes these capabilities for the given database?

A

Multi-AZ follows synchronous replication and spans at least two Availability Zones (AZs) within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone (AZ), Cross-AZ, or Cross-Region

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Multi-AZ spans at least two Availability Zones (AZs) within a single region.
Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engines’ native asynchronous replication to update the read replica whenever there is a change to the source DB instance.
Amazon RDS replicates all databases in the source DB instance. Read replicas can be within an Availability Zone (AZ), Cross-AZ, or Cross-Region.

30
Q

An audit department generates and accesses the audit reports only twice in a financial year. The department uses AWS Step Functions to orchestrate the report creating process that has failover and retry scenarios built into the solution. The underlying data to create these audit reports is stored on Amazon S3, runs into hundreds of Terabytes and should be available with millisecond latency.

As an AWS Certified Solutions Architect – Associate, what is the MOST cost-effective storage class that you would recommend to be used for this use-case?

A

Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

Since the data is accessed only twice in a financial year but needs rapid access when required, the most cost-effective storage class for this use-case is Amazon S3 Standard-IA. The S3 Standard-IA storage class is for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA matches the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and per-GB retrieval fee. Amazon S3 Standard-IA is designed for 99.9% availability, compared to the 99.99% availability of Amazon S3 Standard. However, the report creation process has failover and retry scenarios built into the workflow, so if the data is temporarily unavailable owing to the 99.9% availability of Amazon S3 Standard-IA, the job will automatically be retried until the data is successfully retrieved. Therefore this is the correct option.

31
Q

A technology blogger wants to write a review on the comparative pricing for various storage types available on AWS Cloud. The blogger has created a test file of size 1 gigabyte with some random data. Next, he copies this test file into the Amazon S3 Standard storage class, provisions an Amazon EBS volume (General Purpose SSD (gp2)) with 100 gigabytes of provisioned storage and copies the test file into the Amazon EBS volume, and lastly copies the test file into an Amazon EFS Standard Storage file system. At the end of the month, he analyzes the bill for the costs incurred on the respective storage types for the test file.

What is the correct order of the storage charges incurred for the test file on these three storage types?

A

Cost of test file storage on Amazon S3 Standard < Cost of test file storage on Amazon EFS < Cost of test file storage on Amazon EBS
Amazon EFS charges only for the storage that is actually used. The Amazon EFS Standard Storage pricing is $0.30 per GB-month, so the monthly cost of storing the 1 GB test file on EFS is $0.30.
For Amazon EBS General Purpose SSD (gp2) volumes, the charges are $0.10 per GB-month of provisioned storage. Therefore, for the 100 GB of provisioned storage in this use-case, the monthly cost on EBS is $0.10 × 100 = $10. This cost is irrespective of how much storage the test file actually consumes.
For S3 Standard storage, the pricing is $0.023 per GB per month. Therefore, the monthly storage cost on S3 for the test file is $0.023.
Therefore this is the correct option.

32
Q

A retail company uses Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon API Gateway, Amazon RDS, Elastic Load Balancer and Amazon CloudFront services. To improve the security of these services, the Risk Advisory group has suggested a feasibility check for using the Amazon GuardDuty service.

What would you identify as data sources supported by Amazon GuardDuty?

A

VPC Flow Logs, Domain Name System (DNS) logs, AWS CloudTrail events

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS. The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
Amazon GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.
With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain. By integrating with Amazon EventBridge Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event management and workflow systems.

33
Q

The development team at an e-commerce startup has set up multiple microservices running on Amazon EC2 instances under an Application Load Balancer. The team wants to route traffic to multiple back-end services based on the URL path of the request, so that requests for https://www.example.com/orders go to a specific microservice and requests for https://www.example.com/products go to another microservice.

What features of Application Load Balancers can be used for this use-case?

A

Path-based Routing

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and AWS Lambda functions.
If your application is composed of several individual services, an Application Load Balancer can route a request to a service based on the content of the request. Here are the different types -
Host-based Routing:
You can route a client request based on the Host field of the HTTP header allowing you to route to multiple domains from the same load balancer.
Path-based Routing:
You can route a client request based on the URL path of the request (see the sketch after this list).
HTTP header-based routing:
You can route a client request based on the value of any standard or custom HTTP header.
HTTP method-based routing:
You can route a client request based on any standard or custom HTTP method.
Query string parameter-based routing:
You can route a client request based on the query string or query parameters.
Source IP address CIDR-based routing:
You can route a client request based on source IP address CIDR from where the request originates.
Path-based Routing Overview:
You can use path conditions to define rules that route requests based on the URL in the request (also known as path-based routing).
The path pattern is applied only to the path of the URL, not to its query parameters.
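
As an illustration (listener and target group ARNs are placeholders), the two path-based rules for this use-case could be created with boto3 like so:

```python
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"  # placeholder

# Requests for /orders* go to the orders microservice's target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders-svc/111"}],
)

# Requests for /products* go to the products microservice's target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/products*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/products-svc/222"}],
)
```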

34
Q

An ivy-league university is assisting NASA to find potential landing sites for exploration vehicles of unmanned missions to our neighboring planets. The university uses High Performance Computing (HPC) driven application architecture to identify these landing sites.

What Amazon EC2 instance topologies should this application be deployed on?

A

The Amazon EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low network latency and high network throughput

The key thing to understand in this question is that HPC workloads need the low-latency network performance necessary for the tightly coupled node-to-node communication typical of HPC applications. Cluster placement groups pack instances close together inside an Availability Zone and are recommended for applications that benefit from low network latency, high network throughput, or both. Therefore this option is the correct answer.
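
For illustration (the group name, AMI, and instance type are placeholders), launching the HPC nodes into a cluster placement group looks roughly like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the cluster placement group, then launch the instances into it.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.18xlarge",       # placeholder instance type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},  # pack instances close together in one AZ
)
```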

35
Q

One of the biggest football leagues in Europe has granted the distribution rights for live streaming its matches in the USA to a Silicon Valley-based streaming services company. As per the terms of distribution, the company must make sure that only users from the USA are able to live stream the matches on their platform. Users from other countries in the world must be denied access to these live-streamed matches.

What options would allow the company to enforce these streaming restrictions? (Select two)

A

Use Amazon Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights
Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an ELB load balancer in the Frankfurt region. You can also use geolocation routing to restrict the distribution of content to only the locations in which you have distribution rights.
Use georestriction to prevent users in specific geographic locations from accessing content that you’re distributing through an Amazon CloudFront web distribution
You can use georestriction, also known as geo-blocking, to prevent users in specific geographic locations from accessing content that you’re distributing through an Amazon CloudFront web distribution. When a user requests your content, Amazon CloudFront typically serves the requested content regardless of where the user is located. If you need to prevent users in specific countries from accessing your content, you can use the CloudFront geo restriction feature to do one of the following: allow your users to access your content only if they’re in one of the countries on a whitelist of approved countries, or prevent your users from accessing your content if they’re in one of the countries on a blacklist of banned countries. So this option is also correct.
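
A minimal sketch of the CloudFront option with boto3 (the distribution ID is a placeholder); the usual pattern is to fetch the current distribution config, set the geo restriction, and push the update back with the returned ETag:

```python
import boto3

cloudfront = boto3.client("cloudfront")

DIST_ID = "E1ABCDEF2GHIJK"  # placeholder distribution ID

resp = cloudfront.get_distribution_config(Id=DIST_ID)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Allow viewers only from the USA; everyone else is blocked at the edge.
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",
        "Quantity": 1,
        "Items": ["US"],
    }
}

cloudfront.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=etag)
```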

36
Q

A healthcare startup needs to enforce compliance and regulatory guidelines for objects stored in Amazon S3. One of the key requirements is to provide adequate protection against accidental deletion of objects.

As a solutions architect, what are your recommendations to address these guidelines? (Select two) ?

A

Enable versioning on the Amazon S3 bucket
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite.
Enable multi-factor authentication (MFA) delete on the Amazon S3 bucket
To provide additional protection, multi-factor authentication (MFA) delete can be enabled. MFA delete requires secondary authentication to take place before objects can be permanently deleted from an Amazon S3 bucket. Hence, this is the correct option.
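
An illustrative boto3 sketch (bucket name, MFA device ARN, and token are placeholders); note that MFA delete can only be enabled by the bucket owner’s root account via the API or CLI:

```python
import boto3

s3 = boto3.client("s3")

# Turning on versioning alone requires no MFA.
s3.put_bucket_versioning(
    Bucket="patient-records-bucket",  # placeholder bucket
    VersioningConfiguration={"Status": "Enabled"},
)

# Enabling MFA delete requires the MFA device serial number plus a current token.
s3.put_bucket_versioning(
    Bucket="patient-records-bucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # placeholder
)
```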

37
Q

A telecom company operates thousands of hardware devices like switches, routers, cables, etc. The real-time status data for these devices must be fed into a communications application for notifications. Simultaneously, another analytics application needs to read the same real-time status data and analyze all the connecting lines that may go down because of any device failures.

As an AWS Certified Solutions Architect – Associate, what solution would you suggest, so that both the applications can consume the real-time status data concurrently?

A

Amazon Kinesis Data Streams

Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering).

AWS recommends Amazon Kinesis Data Streams for use cases with requirements that are similar to the following:

Routing related records to the same record processor (as in streaming MapReduce). For example, counting and aggregation are simpler when all records for a given key are routed to the same record processor.
Ordering of records. For example, you want to transfer log data from the application host to the processing/archival host while maintaining the order of log statements.
Ability for multiple applications to consume the same stream concurrently. For example, you have one application that updates a real-time dashboard and another that archives data to Amazon Redshift. You want both applications to consume data from the same stream concurrently and independently.
Ability to consume records in the same order a few hours later. For example, you have a billing application and an audit application that runs a few hours behind the billing application. Because Amazon Kinesis Data Streams stores data for up to 365 days, you can run the audit application up to 365 days behind the billing application.
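
A rough sketch of producing to and consuming from such a stream with boto3 (the stream name, device ID, and payload are placeholders); each consuming application reads the same stream independently with its own iterator or its own KCL consumer:

```python
import boto3, json

kinesis = boto3.client("kinesis")

# Each device publishes its status keyed by device ID, so all records for one
# device land on the same shard and stay in order.
kinesis.put_record(
    StreamName="device-status",  # placeholder stream
    PartitionKey="switch-042",   # placeholder device ID
    Data=json.dumps({"device": "switch-042", "state": "UP"}).encode(),
)

# Both the notification app and the analytics app can read the same records.
shard_id = kinesis.describe_stream(StreamName="device-status")[
    "StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="device-status",
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",  # start from the oldest available record
)["ShardIterator"]
records = kinesis.get_records(ShardIterator=iterator)["Records"]
```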

38
Q

A gaming company uses Amazon Aurora as its primary database service. The company has now deployed 5 multi-AZ read replicas to increase the read throughput and for use as failover targets. The replicas have been assigned the following failover priority tiers, with the corresponding instance sizes given in parentheses: tier-1 (16 terabytes), tier-1 (32 terabytes), tier-10 (16 terabytes), tier-15 (16 terabytes), tier-15 (32 terabytes).

In the event of a failover, which read replica will Amazon Aurora promote?

A

Tier-1 (32 terabytes)

Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).

For Amazon Aurora, each Read Replica is associated with a priority tier (0-15). In the event of a failover, Amazon Aurora will promote the Read Replica that has the highest priority (the lowest numbered tier). If two or more Aurora Replicas share the same priority, then Amazon RDS promotes the replica that is largest in size. If two or more Aurora Replicas share the same priority and size, then Amazon Aurora promotes an arbitrary replica in the same promotion tier.

Therefore, for this problem statement, the Tier-1 (32 terabytes) replica will be promoted.
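
For illustration (the cluster and instance identifiers and the instance class are placeholders), the promotion tier is set per replica when the instance is created, e.g. with boto3:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="aurora-replica-1",    # placeholder identifier
    DBClusterIdentifier="game-aurora-cluster",  # placeholder cluster
    DBInstanceClass="db.r6g.2xlarge",           # placeholder instance class
    Engine="aurora-mysql",
    PromotionTier=1,  # 0-15; on failover the lowest tier is promoted first,
                      # with ties broken by the largest instance size
)
```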

39
Q

A financial services company recently launched an initiative to improve the security of its AWS resources and it had enabled AWS Shield Advanced across multiple AWS accounts owned by the company. Upon analysis, the company has found that the costs incurred are much higher than expected.

What would you attribute as the underlying reason for the unexpectedly high costs for AWS Shield Advanced service?

A

Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once

If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API. You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts.

40
Q

A leading video streaming service delivers billions of hours of content from Amazon Simple Storage Service (Amazon S3) to customers around the world. Amazon S3 also serves as the data lake for its big data analytics solution. The data lake has a staging zone where intermediary query results are kept only for 24 hours. These results are also heavily referenced by other parts of the analytics pipeline.

What is the MOST cost-effective strategy for storing this intermediary query data?

A

Store the intermediary query results in Amazon S3 Standard storage class

Amazon S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Because it delivers low latency and high throughput, S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics. As there is no minimum storage duration charge and no retrieval fee (remember that intermediary query results are heavily referenced by other parts of the analytics pipeline), this is the MOST cost-effective storage class amongst the given options.
