Route 53 + Adv. S3 Flashcards

1
Q

What does DNS stand for?

What is the primary function of the Domain Name System (DNS)?

How does DNS facilitate internet communication?

Why is DNS an essential component of the internet?

A

DNS stands for the Domain Name System. The primary function of DNS is to translate human-readable domain names, like www.example.com, into IP addresses, which are numerical identifiers used by computers to locate each other on the internet. DNS serves as a distributed directory that helps browsers, applications, and devices find the correct IP address associated with a given domain, allowing seamless internet communication.

DNS is like a magic helper that makes sure your computer knows where to go on the internet when you type in your favorite website names!

DNS is crucial for the internet because it simplifies how we interact with websites. Without DNS, we’d have to remember and use long strings of numbers instead of easy-to-recall names, making the internet much less user-friendly.
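
To see this lookup in action, here is a tiny Python sketch (the domain name is just an example) that asks the operating system's resolver, and behind it DNS, for an IP address:

```python
import socket

# Ask the OS resolver (and, behind it, DNS) for the IPv4 address of a hostname.
ip = socket.gethostbyname("www.example.com")
print(ip)  # prints the IPv4 address the DNS lookup returned
```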

2
Q

What are some key terminologies associated with DNS?

  • Domain name
  • IP address
  • Name Server
  • DNS Resolver
  • Zone
  • Top level domain
  • Second level domain
  • Domain registrar
  • Domain records
A
  • Domain Name: A human-readable name associated with an IP address, such as www.example.com.
  • IP Address: A numerical identifier for a device on a network, facilitating communication over the internet.
  • Name Server: A server that holds DNS records and provides information about a specific domain.
  • DNS Resolver: The part of the DNS that receives domain name queries from clients and seeks the corresponding IP addresses.
  • Zone: A portion of the DNS namespace managed by a specific authority.
  • Top level domain: .com, .us, .in, .gov, …
  • Second level domain: amazon.com, google.com,…
  • Domain registrar: Amazon Route 53, GoDaddy,…
  • Domain records: A, AAAA, CNAME, …

Understanding these terms is crucial as they collectively define how DNS works. The domain names help us navigate the internet, and DNS servers ensure that our devices can find the correct IP addresses associated with those names.

3
Q

What is the name of Amazon’s scalable domain name system (DNS) service?

What functionalities does Amazon Route 53 provide in the context of internet domain management?

How does Route 53 contribute to the efficient routing of internet traffic?

Why is Amazon Route 53 an essential service for website owners and businesses?

A

Amazon Route 53 is a scalable domain name system (DNS) web service provided by Amazon Web Services.
It offers various functionalities such as domain registration, DNS routing, and health checking of resources.
Route 53 plays a crucial role in efficiently routing internet traffic by translating human-readable domain names into IP addresses, directing users to the correct web servers hosting the requested content. (URL Resolver)

Amazon Route 53 is essential for website owners and businesses because it ensures that people can easily find their websites on the internet.
It’s like having a trustworthy guide that ensures visitors always reach the right place without any confusion.

4
Q

What are Amazon Route 53 records?

How do Route 53 records contribute to domain management?

Why is understanding Route 53 records important for configuring and optimizing internet domains?

A

Amazon Route 53 records are configurations that define how the DNS (Domain Name System) should handle requests for a domain.
These records provide crucial information about the domain’s behavior, such as:
* Domain/subdomain name (e.g., example.com)
* Record type (e.g., A or AAAA)
* Value (e.g., 12.65.46.78)
* Routing policy (how Route 53 responds to queries)
* TTL (amount of time the record is cached at DNS resolvers)

Record Types:
* A - maps a hostname to IPv4
* AAAA - maps a hostname to IPv6
* CNAME - maps a hostname to another hostname. (You can’t create a CNAME at the top of a DNS namespace, i.e., the zone apex)
* NS - name servers for the hosted zone; they control how traffic is routed for a domain.
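
A minimal boto3 sketch of creating an A record with the fields listed above (the hosted zone ID, domain, and IP are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",      # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",             # create the record, or update it if it exists
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",                # A record: hostname -> IPv4
                "TTL": 300,                 # seconds resolvers may cache the answer
                "ResourceRecords": [{"Value": "12.65.46.78"}],
            },
        }],
    },
)
```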

5
Q

Route 53 Hosted Zones

How do Route 53 Hosted Zones contribute to domain management?

What is the relationship between domain and hosted zone in Route 53?

Why are Route 53 Hosted Zones a critical component in configuring DNS settings?

A

Route 53 Hosted Zones are containers for DNS records, allowing users to manage the DNS settings for their domains.

  • Public Hosted Zones - Records that specify how to route traffic on the internet (public domain names)
  • Private Hosted Zones - Records that specify how you route traffic within one or more VPCs (private domain names)

Z - Zone Management: Zones in Route 53 help you manage your domain’s DNS settings and configurations.

O - Organized Records: Within a hosted zone, you organize and maintain DNS records that specify how your domain should function.

N - Navigation Control: Control over the navigation of traffic to and from your domain.

E - Effective Routing Policies: Routing policies within hosted zones to implement effective traffic routing strategies

S - Scalability and Adaptability: At the domain’s infrastructure level, allowing you to easily scale resources and update configurations as your needs evolve.

Route 53 Hosted Zones are super important because they hold the special instructions that guide the internet to the right places.
Understanding and configuring these zones accurately ensures that websites and online services work perfectly, just like they should!
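
A hedged boto3 sketch of creating a public hosted zone and listing the records Route 53 seeds it with (the domain name is a placeholder; passing a VPC instead would create a private hosted zone):

```python
import uuid
import boto3

route53 = boto3.client("route53")

# Create a public hosted zone for a placeholder domain.
zone = route53.create_hosted_zone(
    Name="example.com",
    CallerReference=str(uuid.uuid4()),   # must be unique per request
)
zone_id = zone["HostedZone"]["Id"]

# Route 53 automatically creates NS and SOA records for the new zone.
for record in route53.list_resource_record_sets(HostedZoneId=zone_id)["ResourceRecordSets"]:
    print(record["Type"], record["Name"])
```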

6
Q

What are CNAME and Alias records in Amazon Route 53?

CNAME -> Canonical Names
How do CNAME and Alias records differ in their functionality?

In what scenarios would you choose CNAME over Alias, or vice versa?

Why is understanding the distinction between CNAME and Alias important in DNS configuration?

A

CNAME and Alias records in Amazon Route 53 serve similar purposes by allowing one domain to point to another. However, they differ in their usage:
* CNAME Record: Used for creating aliases from one hostname to another.
1. CNAME records can’t be used for the root domain (apex domain).
2. Can only be used for non-root domains (e.g., something.mydomain.com)

  • Alias Record: Functions similarly to a CNAME but is specific to AWS. Alias records can be used for the root domain and are often preferred when pointing to AWS resources like an S3 bucket, a CloudFront distribution, or an Elastic Load Balancer (ELB).
    1. Points a hostname to an AWS resource
    2. Works for both root and non-root domains
    3. Free of charge
    4. Has native health check capability
    5. Always of type A/AAAA for AWS resources (IPv4/IPv6)
    6. You can’t set the TTL; Route 53 manages it.

CNAME is like a good guide for most places, but Alias is the superhero guide that can go everywhere, even to the main entrance!

Note on Alias: you can’t set an Alias record for an EC2 DNS name.

Knowing when to use CNAME or Alias is like choosing the right guide for the right adventure.
Understanding their differences ensures that your DNS settings work perfectly, directing internet traffic accurately.
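
A hedged boto3 sketch of an Alias record at the zone apex pointing to an ELB (the hosted zone ID and the ELB's canonical hosted zone ID are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",       # your hosted zone (placeholder)
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "mycoolcompany.com",     # works at the zone apex, unlike CNAME
            "Type": "A",                     # Alias records are type A/AAAA; no TTL field
            "AliasTarget": {
                # The ELB's own canonical hosted zone ID (placeholder value).
                "HostedZoneId": "ZELBEXAMPLE",
                "DNSName": "my-elb-1234567890.us-west-2.elb.amazonaws.com",
                "EvaluateTargetHealth": True,   # native health check capability
            },
        },
    }]},
)
```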

7
Q

What are Routing Policies in Amazon Route 53?

How do Routing Policies control the distribution of traffic for a domain in Route 53?

Can you name a few types of Routing Policies supported by Route 53?

Why is understanding Routing Policies crucial for optimizing the performance and availability of web applications?

A

Routing Policies in Amazon Route 53 are configurations that determine how traffic is distributed among different resources, such as web servers or endpoints. They play a crucial role in controlling the flow of internet traffic for a domain. Several types of Routing Policies are supported, each serving different purposes, including Simple Routing, Weighted Routing, Latency-Based Routing, Failover Routing, Geolocation Routing, and Multivalue Answer Routing.

Routing Policies are like magical guides that make sure everyone reaches the right places on the internet:
* Simple
* Weighted
* Failover
* Latency based
* Geolocation
* Multi-Value Answer

Knowing about Routing Policies is like having special maps for the internet that make sure people always reach their destinations quickly and efficiently. Understanding these policies helps ensure that websites and applications run smoothly and are always available to users.

Unlike an ELB, Route 53 (DNS) does not route any traffic; it only responds to DNS queries.

8
Q

What is the Weighted Routing Policy in Amazon Route 53?

How does the Weighted Routing Policy control the distribution of internet traffic?

When might you choose to use the Weighted Routing Policy in Route 53?

Why is the Weighted Routing Policy important for optimizing resource utilization in web applications?

A

The Weighted Routing Policy in Amazon Route 53 is a method of distributing internet traffic among multiple resources based on assigned weights.
Each resource (like an endpoint or server) is assigned a relative weight, and Route 53 directs traffic in proportion to those weights (a resource’s share = its weight / the sum of all weights).
This allows for controlled testing of new versions or gradual migration of traffic between resources.

The Weighted Routing Policy is important because it lets you control how much traffic goes to different places, allowing for careful testing or gradual changes without overwhelming any particular resource.
This is crucial for optimizing the performance and reliability of web applications during updates or changes.
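
A hedged boto3 sketch of two weighted records for the same name, sending roughly 80% / 20% of responses to each (zone ID, hostname, and IPs are placeholders):

```python
import boto3

route53 = boto3.client("route53")

changes = []
for set_id, weight, ip in [("blue", 80, "10.0.1.10"), ("green", 20, "10.0.2.10")]:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": set_id,   # distinguishes records sharing the same name/type
            "Weight": weight,          # traffic share = weight / sum of all weights
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": changes},
)
```

Setting a record’s weight to 0 stops sending traffic to that resource, which is handy for draining traffic during a migration.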

9
Q

What are LB and Weighted Routing Policies in the context of AWS?

How do Load Balancers and Weighted Routing Policies differ in distributing internet traffic?

LB - Load Balancer

In what scenarios would you choose to use a Load Balancer vs. the Weighted Routing Policy?

Why is understanding the distinction between Load Balancers and Weighted Routing Policies important for optimizing the performance of web applications?

A

Load Balancers:
* Load balancers distribute internet traffic across multiple servers or resources based on factors like server health, ensuring even load distribution for improved performance and reliability.
* They are beneficial for scenarios where high availability and fault tolerance are essential.

Weighted Routing Policies:
* Weighted Routing Policies in Amazon Route 53 distribute traffic based on assigned weights to different resources.
* This allows controlled testing, gradual migration, or specific distribution percentages for different resources.
* Weighted Routing is useful when you want to direct a specific percentage of traffic to different endpoints.

Understanding when to use Load Balancers or Weighted Routing Policies is crucial for optimizing how internet traffic is distributed.
Load Balancers ensure even distribution for reliability, while Weighted Routing allows you to control and test different scenarios based on assigned weights.

10
Q

What is the Latency-Based Routing Policy in Amazon Route 53?

How does the Latency-Based Routing Policy control the distribution of internet traffic?

In what scenarios would you choose to use this Routing Policy ?

Why is the Latency-Based Routing Policy important for optimizing the performance of web applications?

A

The Latency-Based Routing Policy in Amazon Route 53 directs internet traffic based on the lowest network latency for end-users.
Route 53 uses measured latency between users and AWS Regions, and answers DNS queries with the record for the Region that gives the user the lowest latency.
This policy is beneficial for optimizing the performance of web applications by minimizing the delay between users and resources.
It can also be associated with health checks.

Especially helpful when low latency for users is the priority.

Latency is based on traffic between users & AWS regions.
The Latency-Based Routing Policy is essential for making sure internet traffic takes the quickest path to resources, providing users with a smooth and responsive experience.
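
A hedged boto3 sketch with one latency record per AWS Region (zone ID and IPs are placeholders):

```python
import boto3

route53 = boto3.client("route53")

records = [
    {"Name": "app.example.com", "Type": "A", "SetIdentifier": "us-west-1",
     "Region": "us-west-1",                      # Route 53 compares user<->Region latency
     "TTL": 60, "ResourceRecords": [{"Value": "10.0.1.10"}]},
    {"Name": "app.example.com", "Type": "A", "SetIdentifier": "eu-west-2",
     "Region": "eu-west-2",
     "TTL": 60, "ResourceRecords": [{"Value": "10.0.2.10"}]},
]

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": r} for r in records]},
)
```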

11
Q

What are Route 53 Health Checks?

How do Route 53 Health Checks contribute to the management of internet resources?

What is the purpose of conducting health checks for resources in Route 53?

Why are Route 53 Health Checks important for ensuring the reliability and availability of web applications?

A

Route 53 Health Checks are a feature in Amazon Route 53 that monitors the health and performance of internet resources, such as servers or endpoints.
If a resource fails a health check, Route 53 can automatically reroute traffic away from the unhealthy resource, helping maintain high availability and reliability.
- Health check requests come from multiple health checkers located all over the world.
- A 2XX or 3XX response from the endpoint is considered healthy.
- The endpoint is reported healthy if more than 18% of the health checkers report it as healthy.
- A calculated health check can combine the results of up to 256 child health checks.
- The health checkers live on the public internet, so they can only check public endpoints directly.
- To health-check private endpoints, combine a CloudWatch metric and alarm with a health check that monitors that CloudWatch alarm.

HTTP health checks are only for public resources for DNS failovers

Route 53 Health Checks are super important because they ensure that internet resources are in good shape. If something isn’t right, they can quickly guide internet traffic away from the problem, making sure websites and applications are always available and working well.
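
A hedged boto3 sketch of an HTTPS health check against a public endpoint (domain and path are placeholders):

```python
import uuid
import boto3

route53 = boto3.client("route53")

check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),       # must be unique per request
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "app.example.com",
        "Port": 443,
        "ResourcePath": "/health",           # a 2XX/3XX response counts as healthy
        "RequestInterval": 30,               # seconds between checks from each checker
        "FailureThreshold": 3,               # consecutive failures before "unhealthy"
    },
)
print(check["HealthCheck"]["Id"])            # attach this ID to records for DNS failover
```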

12
Q

What is the Failover Routing Policy in Amazon Route 53?

How does the Failover Routing Policy control the distribution of internet traffic?

In what scenarios would you choose to use the Failover Routing Policy in Route 53?

Why is the Failover Routing Policy important for ensuring the availability and reliability of web applications?

A

The Failover Routing Policy in Amazon Route 53 is a configuration that directs internet traffic to a designated resource (such as a backup server) when the primary resource becomes unhealthy.
It allows users to set up a primary resource and a backup resource, ensuring that if the primary resource fails a health check, traffic is automatically redirected to the backup.
This policy is crucial for maintaining high availability and minimizing downtime.

Failover Routing is like a magical switch; it ensures there is a backup when something fails.

The Failover Routing Policy is important because it acts like a safety net for internet resources. If the main resource isn’t working properly, it quickly switches to a backup, ensuring that websites and applications are always available and reliable.
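
A hedged boto3 sketch of a PRIMARY/SECONDARY pair; the primary record carries a health check ID so Route 53 knows when to fail over (all IDs and IPs are placeholders):

```python
import boto3

route53 = boto3.client("route53")

records = [
    {"Name": "app.example.com", "Type": "A", "SetIdentifier": "primary",
     "Failover": "PRIMARY",
     "HealthCheckId": "11111111-2222-3333-4444-555555555555",   # placeholder
     "TTL": 60, "ResourceRecords": [{"Value": "10.0.1.10"}]},
    {"Name": "app.example.com", "Type": "A", "SetIdentifier": "secondary",
     "Failover": "SECONDARY",      # served only while the primary is unhealthy
     "TTL": 60, "ResourceRecords": [{"Value": "10.0.2.10"}]},
]

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": r} for r in records]},
)
```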

13
Q

What is the Geolocation Routing Policy in Amazon Route 53?

Why is the Geolocation Routing Policy important for optimizing the performance of web applications?

How does Geolocation Routing Policy control the distribution of traffic?

In what scenarios would you choose to use the Geolocation Routing Policy in Route 53?

A

The Geolocation Routing Policy in Amazon Route 53 directs internet traffic based on the geographical location of the user.
It allows users to define specific routing configurations for different regions or countries.
This policy is useful when tailoring the user experience or directing traffic to region-specific resources based on the geographical location of the user.

Routing is based on the user’s location, determined from their IP address.

The Geolocation Routing Policy is important because it lets websites and applications provide a customized experience based on where users are in the world.
This ensures that users get the fastest and most relevant content, optimizing their overall online experience.
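
A hedged boto3 sketch: users whose IP geolocates to the US get one answer, everyone else falls back to the default record (zone ID and IPs are placeholders):

```python
import boto3

route53 = boto3.client("route53")

records = [
    {"Name": "app.example.com", "Type": "A", "SetIdentifier": "us-users",
     "GeoLocation": {"CountryCode": "US"},
     "TTL": 60, "ResourceRecords": [{"Value": "10.0.1.10"}]},
    {"Name": "app.example.com", "Type": "A", "SetIdentifier": "default",
     "GeoLocation": {"CountryCode": "*"},      # catch-all for unmatched locations
     "TTL": 60, "ResourceRecords": [{"Value": "10.0.2.10"}]},
]

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": r} for r in records]},
)
```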

14
Q

What is the Geoproximity Routing Policy in Amazon Route 53?

How does the Geoproximity Routing Policy control the distribution of internet traffic?

In what scenarios would you choose to use Geoproximity Routing Policy

Why is the Geoproximity Routing Policy important for optimizing the performance of web applications?

A

The Geoproximity Routing Policy in Amazon Route 53 directs internet traffic based on the physical proximity of the user to AWS resources.
It uses geographic locations of the user and the AWS resources to determine the best routing.
This policy is particularly useful when optimizing performance by directing users to the nearest resources or data centers, reducing latency and improving overall user experience.
- Route traffic to your resources based on the geographic location of users and resources.
- Ability to shift more traffic to resources based on the defined bias.
- Resources can be AWS resources or non-AWS resources.
- You must use Route 53 traffic flow (advanced) to use this feature.

The Geoproximity Routing Policy is important because it makes sure that internet traffic is directed to the nearest resources, reducing delays and making websites and applications load faster.
This is crucial for providing users with an optimal and enjoyable online experience.

15
Q

What is the Multi-Value Routing Policy in Amazon Route 53?

How does the Multi-Value Routing Policy control the distribution of internet traffic?

In what scenarios would you choose Multi-Value Routing Policy?

Why is the Multi-Value Routing Policy important for optimizing the performance of web applications?

A

The Multi-Value Routing Policy in Amazon Route 53 allows users to configure multiple values for a DNS record, such as:
- IP addresses.
It responds to DNS queries with a random selection of values, distributing traffic across all the configured resources.
This policy is useful when optimizing performance by providing fault tolerance and load balancing across multiple resources.
- Used when routing traffic to multiple resources.
- Route 53 returns multiple values/resources.
- Can be associated with health checks.
- Up to 8 healthy records are returned for each multi-value query.
- Multi-value routing is not a substitute for having an ELB.

The Multi-Value Routing Policy is important because it helps distribute internet traffic randomly across multiple resources.
This not only ensures fault tolerance but also provides a balanced load on different servers or endpoints, optimizing the overall performance of web applications.
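
A hedged boto3 sketch of three multi-value records for the same name; adding a HealthCheckId to each would keep unhealthy IPs out of the answers (zone ID and IPs are placeholders):

```python
import boto3

route53 = boto3.client("route53")

changes = []
for i, ip in enumerate(["10.0.1.10", "10.0.2.10", "10.0.3.10"], start=1):
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": f"server-{i}",
            "MultiValueAnswer": True,    # Route 53 returns up to 8 healthy values per query
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": changes},
)
```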

16
Q

What is the distinction between a Domain Registrar and a DNS Service?

How do Domain Registrars and DNS Services differ in their functions?
In what scenarios would you typically engage with a Domain Registrar, and when would you use a DNS Service?

Why is understanding the roles of Domain Registrars and DNS Services important for managing a website’s online presence?

A

Domain Registrar: A Domain Registrar is a service that allows individuals or businesses to register and purchase domain names.
It is responsible for maintaining the registration of the domain, including the contact information of the domain owner, and ensuring the domain remains active as long as it is renewed.

DNS Service: A DNS (Domain Name System) Service, on the other hand, translates human-readable domain names into IP addresses, directing internet traffic to the appropriate servers or resources.
It manages the DNS records associated with a domain, such as A records, CNAME records, and others.

You buy or register your domain name with a domain registrar for an annual fee. The registrar usually provides a DNS service to manage your DNS records, but you can also use a different DNS service (e.g., register with GoDaddy and manage records with Route 53).

Understanding the roles of Domain Registrars and DNS Services is crucial for managing a website’s online presence. While a Domain Registrar ensures you own and can use a specific domain name, a DNS Service ensures that when people type in that domain, they are directed to the right place on the internet.

17
Q

You have purchased mycoolcompany.com on Amazon Route 53 Registrar and would like the domain to point to your Elastic Load Balancer my-elb-1234567890.us-west-2.elb.amazonaws.com. Which Route 53 Record type must you use here?
- A. CNAME
- B. Alias

A

Alias is the correct answer

You can’t create a CNAME record that has the same name as the top node of the DNS namespace (the zone apex), in our case “mycoolcompany.com”.

Both CNAME and Alias records are used for aliasing domain names. Alias records are a provider-specific type that adds functionality, especially in cloud environments. If you’re using a cloud DNS service like AWS Route 53, you might prefer Alias records for certain use cases; otherwise, CNAME records are more widely supported and used in general DNS configurations.

18
Q

You have updated a Route 53 Record’s myapp.mydomain.com value to point to a new Elastic Load Balancer, but it looks like users are still redirected to the old ELB. What is a possible cause for this behavior?
1. Because of the Alias record
2. Because of the CNAME record
3. Because of the TTL
4. Because of Route 53 Health Checks

A
  3. Because of the TTL

Each DNS record has a TTL (Time To Live) which orders clients for how long to cache these values and not overload the DNS Resolver with DNS requests. The TTL value should be set to strike a balance between how long the value should be cached vs. how many requests should go to the DNS Resolver.

19
Q

You have an application that’s hosted in two different AWS Regions us-west-1 and eu-west-2. You want your users to get the best possible user experience by minimizing the response time from application servers to your users. Which Route 53 Routing Policy should you choose?
1. Multi Value
2. Weighted
3. Latency
4. Geolocation

A
  3. Latency

The Latency Routing Policy will evaluate the latency between your users and AWS Regions, and help them get a DNS answer that minimizes their latency (i.e., response time).

20
Q

You have purchased a domain on GoDaddy and would like to use Route 53 as the DNS Service Provider. What should you do to make this work?
1. Request for a domain transfer
2. Create a private hosted zone and update the 3rd party registrar NS records.
3. Create a public hosted zone and update the route 53 NS records.
4. Create a public hosted zone and update the 3rd party registrar NS records.

A
  4. Create a public hosted zone and update the 3rd party registrar NS records.

Public Hosted Zones are meant to be used for people requesting your website through the Internet. Finally, NS records must be updated on the 3rd party Registrar.

21
Q

Which of the following are NOT valid Route 53 Health Checks?
1. Health Checks that monitor an SQS queue.
2. Health Checks that monitor an endpoint.
3. Health Checks that monitor other health checks.
4. Health Checks that monitor CloudWatch Alarms.

A
  1. Health Checks that monitor an SQS queue.
22
Q

What are Amazon S3 lifecycle rules?

How does Amazon S3 lifecycle management work?

Explain the key components of Amazon S3 lifecycle rules.

Summarize the purpose of Amazon S3 lifecycle rules.

A

Amazon S3 lifecycle rules allow you to define actions that should be taken on objects over time. This includes transitioning objects between storage classes and setting expiration policies.

Real world Use-Case: Imagine you have a set of log files. You can use S3 lifecycle rules to automatically move older files to cheaper storage classes or delete them after a certain period.

Key components: transition actions, expiration actions, and the ability to scope rules with filters (prefix, object tags, or object size).

Amazon S3 lifecycle rules help manage storage costs and optimize performance by automating the movement and deletion of objects over time.
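
A hedged boto3 sketch of the log-file use case above (bucket name, prefix, and day counts are placeholders):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},                       # rule scope
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},     # cheaper, infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},         # archive tier
            ],
            "Expiration": {"Days": 365},                         # delete after a year
        }],
    },
)
```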

23
Q

What is S3 Requester Pays?

How does S3 Requester Pays affect data transfer costs?

Explain the implications of enabling S3 Requester Pays on a bucket.

Summarize the key aspect of S3 Requester Pays.

A

Answer: S3 Requester Pays is a feature in Amazon S3 that allows the bucket owner to configure a bucket so that the requester, rather than the bucket owner, pays the data transfer and request costs.
Real world Use-Case: Suppose you have a public dataset, and you want external users to bear the cost of accessing that data. Enabling S3 Requester Pays ensures that those accessing the data pay for the associated costs.
Explaining to a kid: Imagine you have a treasure box, and your friend wants to look inside. With S3 Requester Pays, your friend brings their own coins to pay for the privilege of checking out the treasures in your box.

The requester of the data pays the request and data transfer costs; the bucket owner still pays for storage.

S3 Requester Pays shifts the cost burden of data transfer and requests from the bucket owner to the requester, providing flexibility in managing costs for publicly accessible buckets.
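
A hedged boto3 sketch: the owner enables Requester Pays, and requesters must then acknowledge the charges on every request (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Bucket owner enables Requester Pays on the bucket.
s3.put_bucket_request_payment(
    Bucket="my-public-dataset",
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# A requester must explicitly accept the charges, or the request is rejected.
obj = s3.get_object(Bucket="my-public-dataset", Key="data.csv", RequestPayer="requester")
```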

24
Q

What are S3 event notifications?

How does S3 event notifications work in Amazon S3?

Explain the purpose of configuring S3 event notifications for a bucket.

Summarize the key feature of S3 event notifications.

A

Answer: S3 event notifications are a feature in Amazon S3 that enable you to receive notifications when certain events occur in your S3 bucket, such as the creation, deletion, or modification of objects.
Real world Use-Case: Consider a scenario where you want to trigger a workflow whenever a new file is uploaded to your S3 bucket. S3 event notifications allow you to automatically receive a notification when such an event occurs.
Explaining to a kid: Think of it like having a magical messenger who tells you whenever something new is added, removed, or changed in your treasure chest (S3 bucket).

  • For S3 event notifications to work, the destination needs a resource (access) policy that allows S3 to publish to it:
    1. SNS resource (access) policy
    2. SQS resource (access) policy
    3. Lambda resource policy

S3 event notifications enhance automation by allowing you to react to changes in your S3 bucket, enabling you to trigger workflows, notifications, or other actions in response to specific events.
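
A hedged boto3 sketch sending ObjectCreated events for an images/ prefix to an SQS queue; as noted above, the queue's resource policy must allow S3 to send messages (bucket name and queue ARN are placeholders):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:new-images",   # placeholder
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": "images/"}]}},
        }],
    },
)
```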

25
Q

What is S3 Event Notification with Amazon EventBridge?

How does S3 Event Notification integrate with Amazon EventBridge?

Explain the benefits of using Amazon EventBridge for S3 event notifications.

Summarize the key integration aspect of S3 Event Notification with Amazon EventBridge.

A

Answer: S3 Event Notification with Amazon EventBridge is a feature that allows you to route events from Amazon S3 to Amazon EventBridge, providing a way to integrate S3 events with event-driven architectures.
Real world Use-Case: Imagine you have a serverless application, and you want to process S3 events using AWS Lambda. By connecting S3 Event Notification to Amazon EventBridge, you can seamlessly trigger serverless functions in response to S3 events.
Using Amazon EventBridge for S3 event notifications offers advantages such as simplifying event-driven architectures, decoupling components, and enabling seamless integration with various AWS services.
Explaining to a kid: Think of it like having a magical bridge that connects your treasure chest (S3 bucket) to a magical city (EventBridge). Whenever something new happens in your chest, the magical city knows about it instantly and deploys a soldier to perform an action.

EventBridge can target many more destinations and also has advanced filtering options (JSON rules).

The integration of S3 Event Notification with Amazon EventBridge enhances the ability to build scalable and event-driven applications, allowing you to easily connect S3 events with a variety of AWS services and third-party applications.
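
A hedged boto3 sketch: turn on EventBridge delivery for the bucket, then let an EventBridge rule with a JSON pattern fan the events out to targets (bucket name and rule name are placeholders):

```python
import boto3

s3 = boto3.client("s3")
events = boto3.client("events")

# Send all S3 events for this bucket to the default EventBridge event bus.
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

# Advanced filtering happens in the EventBridge rule, not in S3.
events.put_rule(
    Name="my-bucket-object-created",
    EventPattern='{"source": ["aws.s3"], "detail-type": ["Object Created"],'
                 ' "detail": {"bucket": {"name": ["my-bucket"]}}}',
)
```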

26
Q

What is S3 baseline performance?

How does Amazon S3 ensure baseline performance for object storage?

Explain the factors that contribute to S3 baseline performance.

Summarize the key considerations for understanding S3 baseline performance.

A

Answer: S3 baseline performance refers to the fundamental level of performance provided by Amazon S3 for object storage, ensuring reliable and consistent access to stored data.
Real world Use-Case: Consider a scenario where you need to retrieve data quickly and consistently from your S3 bucket. The baseline performance of S3 ensures that your access times remain reliable, regardless of the load on the system.
1. Multi-part upload:
- Recommended for files > 100 MB.
- Required for files > 5 GB.
- Helps parallelize uploads (speeds up transfers).
2. S3 Transfer Acceleration:
- Increases transfer speed by sending the file to an AWS edge location, which forwards the data to the S3 bucket in the target Region.
- Compatible with multi-part upload.

Clarifier: Factors contributing to S3 baseline performance include low-latency access, high throughput, and the ability to scale to accommodate varying workloads, providing a solid foundation for object storage.

Key techniques: multi-part upload, S3 Transfer Acceleration, and S3 byte-range fetches (fetch specific byte ranges in parallel to speed up downloads or retrieve only part of a file).

Understanding S3 baseline performance is crucial for designing applications that rely on consistent and reliable access to stored objects, ensuring optimal performance for a wide range of use cases.
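
A hedged boto3 sketch of these techniques (bucket, key, and file names are placeholders; thresholds are illustrative):

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

s3 = boto3.client("s3")

# 1. Multi-part upload: boto3 splits the file and uploads the parts in parallel
#    once the file size crosses the threshold.
transfer_cfg = TransferConfig(multipart_threshold=100 * 1024 * 1024,   # 100 MB
                              multipart_chunksize=16 * 1024 * 1024)
s3.upload_file("backup.tar", "my-bucket", "backups/backup.tar", Config=transfer_cfg)

# 2. Transfer Acceleration: enable it on the bucket, then use the accelerate endpoint
#    so data enters AWS at the nearest edge location.
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# 3. Byte-range fetch: download only part of an object (or several parts in parallel).
first_mb = s3.get_object(Bucket="my-bucket", Key="backups/backup.tar",
                         Range="bytes=0-1048575")
```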

27
Q

What are S3 Select and Glacier Select?

How do S3 Select and Glacier Select enhance data retrieval in Amazon S3 and Amazon Glacier?

Explain the key features and use cases of S3 Select and Glacier Select.

Summarize the significance of S3 Select and Glacier Select in object storage.

A

Answer: S3 Select is a feature in Amazon S3 that allows you to retrieve only a subset of data from an object using SQL expressions, reducing the amount of data transferred and improving query performance.
Glacier Select is a similar feature in Amazon Glacier, providing the ability to perform SQL queries on data stored in Glacier archives without the need to restore the entire archive.
Real world Use-Case: Imagine you have a large CSV file in your S3 bucket, and you only need specific columns or rows. S3 Select enables you to retrieve only the relevant data without downloading the entire file.

Clarifier: S3 Select and Glacier Select are useful for optimizing data retrieval by allowing you to filter and transform data directly within Amazon S3 and Glacier, reducing the need for unnecessary data transfer and improving query efficiency.

S3 Select and Glacier Select provide powerful querying capabilities for object storage, enabling more efficient and cost-effective data retrieval by processing and filtering data directly within the storage service.
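
A hedged boto3 sketch of S3 Select filtering a CSV server-side so only the matching rows are transferred (bucket, key, and column names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="my-bucket",
    Key="logs/2024-01.csv",
    ExpressionType="SQL",
    Expression="SELECT s.ip, s.status FROM S3Object s WHERE s.status = '500'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "NONE"},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; Records events carry the filtered rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```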

28
Q

What are S3 Batch Operations?

How does S3 Batch Operations simplify managing large-scale object operations in Amazon S3?

Explain the key features and capabilities of S3 Batch Operations.

Summarize the purpose and benefits of using S3 Batch Operations.

A

Answer: S3 Batch Operations is a feature in Amazon S3 that allows you to perform large-scale batch operations on objects, such as copying, deleting, or updating metadata, making it easier to manage vast amounts of data in your S3 buckets.
Real world Use-Case: Consider a scenario where you need to apply a new access control policy to a large set of objects in your S3 bucket. S3 Batch Operations enables you to make these changes efficiently across multiple objects.

Clarifier: S3 Batch Operations supports various operations, including copy, delete, and tag changes, and it allows you to filter objects based on specific criteria, making it a versatile tool for managing large amounts of data in your S3 buckets.

S3 Batch Operations simplifies the process of managing massive datasets in Amazon S3 by providing a scalable and efficient way to perform bulk operations on objects, saving time and resources for users dealing with large-scale object management tasks.

29
Q

What is MFA Delete?

How is MFA Delete defined in the context of AWS?

Explain the purpose and characteristics of MFA Delete.

Summarize the significance of using MFA Delete in AWS.

A

Answer: MFA Delete is a security feature in AWS that requires multi-factor authentication (MFA) to be enabled for certain sensitive operations, such as deleting versioned objects in Amazon S3 buckets. It adds an additional layer of protection by requiring a valid MFA code in addition to regular credentials to perform specified actions.

Real world Use-Case: Consider a scenario where a user wants to delete versioned objects in an S3 bucket. MFA Delete ensures that, in addition to regular credentials, the user must provide a valid MFA code, adding an extra layer of security to prevent accidental or unauthorized deletions.

MFA Delete is particularly important for securing critical operations involving the deletion of versioned objects. It helps mitigate the risk of accidental or malicious deletions by requiring an additional authentication factor.

Only the bucket owner (root account) can enable/disable MFA Delete, and versioning must be enabled on the bucket.

MFA Delete is a crucial security feature in AWS, enhancing protection for sensitive operations. By enforcing multi-factor authentication for specified actions, MFA Delete adds an extra layer of security, reducing the risk of unauthorized deletions and providing an additional safeguard for critical operations on the AWS cloud.
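
A hedged boto3 sketch: MFA Delete is toggled through the bucket's versioning configuration, and the MFA argument is the device ARN followed by the current code (all values are placeholders; this call must be made with root-account credentials):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-bucket",
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",   # "<device-arn> <code>"
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)

# Permanently deleting a specific object version then also requires the MFA argument:
# s3.delete_object(Bucket="my-bucket", Key="report.pdf", VersionId="...", MFA="<device-arn> <code>")
```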

30
Q

What are S3 Access Logs?

How are S3 Access Logs defined in the context of Amazon S3?

Explain the purpose and characteristics of S3 Access Logs.

Summarize the significance of using S3 Access Logs in AWS.

A

Answer: S3 Access Logs are log files generated by Amazon S3 that record detailed information about requests made to a specific S3 bucket. These logs capture details such as the requester’s IP address, the time of the request, the requested resource, and the action performed (e.g., GET, PUT). S3 Access Logs are valuable for monitoring and auditing access to S3 buckets.

Real world Use-Case: Consider a scenario where an organization wants to track who is accessing their S3 bucket and what actions are being performed. S3 Access Logs provide detailed records of all requests, aiding in monitoring and auditing for security and compliance purposes.

S3 Access Logs offer visibility into access patterns for S3 buckets, enabling users to analyze and monitor activities for security, compliance, and optimization purposes.

The use of S3 Access Logs is essential for understanding and monitoring access to S3 buckets. By analyzing these logs, users can gain insights into access patterns, detect anomalies, and ensure security and compliance with regard to S3 bucket activities on the AWS cloud.
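
A hedged boto3 sketch enabling server access logging into a separate logging bucket (bucket names and prefix are placeholders; never log a bucket into itself, or it would record its own log writes in a loop):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="my-bucket",                                  # bucket being monitored
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-logging-bucket",         # must be a different bucket
            "TargetPrefix": "access-logs/my-bucket/",
        },
    },
)
```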

31
Q

What are Amazon S3 Pre-Signed URLs?

How are Amazon S3 Pre-Signed URLs defined in the context of Amazon S3?

Explain the purpose and characteristics of Amazon S3 Pre-Signed URLs.

Summarize the significance of using Amazon S3 Pre-Signed URLs in AWS.

A

Answer: Amazon S3 Pre-Signed URLs are URLs with a time-limited token that grants temporary access to a specific S3 object. These URLs are generated by the S3 bucket owner and provide temporary, controlled access to the object without requiring the requester to have AWS security credentials.

Real world Use-Case: Consider a scenario where a user needs to share a private file stored in an S3 bucket for a limited time. Generating a Pre-Signed URL allows the user to share the URL without exposing AWS credentials, and access is limited to the specified time period.

Amazon S3 Pre-Signed URLs are a secure and flexible way to grant temporary access to private S3 objects. They are particularly useful for scenarios where temporary access needs to be provided to users or applications without sharing long-term AWS credentials.

The use of Amazon S3 Pre-Signed URLs is a key security feature in AWS, providing a secure method to share temporary access to private S3 objects.
By generating time-limited URLs, users can control and restrict access to S3 objects without exposing AWS credentials.
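
A hedged boto3 sketch generating a one-hour download link for a private object (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "report.pdf"},
    ExpiresIn=3600,   # seconds the link stays valid
)
print(url)  # whoever holds this URL acts with the permissions of the signer, until expiry
```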

32
Q

What is S3 Glacier Vault Lock?

How is S3 Glacier Vault Lock defined in the context of Amazon S3 Glacier?

Explain the purpose and characteristics of S3 Glacier Vault Lock.

Summarize the significance of using S3 Glacier Vault Lock in AWS.

A

Answer: S3 Glacier Vault Lock is a feature in Amazon S3 Glacier that allows users to enforce compliance controls on their Glacier vaults. With Vault Lock, users can set policies to lock the vault for a specific retention period, during which data cannot be deleted. This ensures that data is retained for regulatory and compliance requirements.

Real world Use-Case: Consider a scenario where an organization needs to comply with data retention regulations. By using S3 Glacier Vault Lock, the organization can enforce a policy that prevents the deletion of data in the Glacier vault for a specified period, meeting compliance requirements.

S3 Glacier Vault Lock is a crucial feature for organizations with strict compliance needs. It provides a mechanism to enforce data retention policies on Glacier vaults, ensuring that data is preserved for the required duration.

- Once the vault lock policy is locked, it can no longer be changed or deleted (a write-once-read-many, WORM, model). Note: it is the related S3 Object Lock feature (for S3 buckets) that requires versioning to be enabled.

The use of S3 Glacier Vault Lock is significant for organizations that must adhere to data retention regulations. By setting and enforcing retention policies on Glacier vaults, users can maintain compliance, prevent accidental data deletions, and ensure the preservation of data for the required retention period on the AWS cloud.
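
A hedged boto3 sketch of the two-step lock: initiate returns a lock ID, and completing the lock makes the policy immutable (account ID, vault name, and retention period are placeholders):

```python
import json
import boto3

glacier = boto3.client("glacier")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-archive-deletion-for-365-days",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:111122223333:vaults/compliance-vault",
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
    }],
}

start = glacier.initiate_vault_lock(
    accountId="-",                                   # "-" means the caller's own account
    vaultName="compliance-vault",
    policy={"Policy": json.dumps(policy)},
)
glacier.complete_vault_lock(accountId="-", vaultName="compliance-vault",
                            lockId=start["lockId"])
```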

33
Q

What is an S3 Access Point?

How is an S3 Access Point defined in the context of Amazon S3?

Explain the purpose and characteristics of an S3 Access Point.

Summarize the significance of using an S3 Access Point in AWS.

A

Answer: An S3 Access Point is a unique hostname that customers can create to access their Amazon S3 buckets. It simplifies managing data access at scale by providing a dedicated endpoint with specific access policies and network configurations. S3 Access Points make it easier to manage data access for applications and reduce the risk of data leakage.

Real world Use-Case: Consider a scenario where an organization wants to grant controlled and secure access to specific S3 buckets for different applications. Creating S3 Access Points allows them to tailor access policies and configurations, ensuring secure and simplified data access.

S3 Access Points streamline data access management by providing dedicated endpoints with customized access policies. They enable users to create distinct access points for different use cases, improving security and reducing the complexity of data access.

The use of S3 Access Points is significant for organizations managing large-scale data access. By creating dedicated access points, users can implement specific access policies and configurations, enhancing security and simplifying data access for applications on the AWS cloud.
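
A hedged boto3 sketch creating an access point for one application; its own policy and (optionally) a VPC restriction then govern access through that endpoint (account ID, names, and bucket are placeholders):

```python
import boto3

s3control = boto3.client("s3control")

ap = s3control.create_access_point(
    AccountId="111122223333",
    Name="analytics-ap",
    Bucket="my-data-bucket",
    # For private, VPC-only access you could also pass:
    # VpcConfiguration={"VpcId": "vpc-0abc123"},
)
print(ap["AccessPointArn"])   # applications target this ARN/alias instead of the bucket
```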

34
Q

What is S3 Object Lambda?

How is S3 Object Lambda defined in the context of Amazon S3?

Explain the purpose and characteristics of S3 Object Lambda.

Summarize the significance of using S3 Object Lambda in AWS.

A

Answer: S3 Object Lambda is a feature in Amazon S3 that allows users to automatically transform data as it is retrieved from an S3 bucket. It enables the use of custom code (AWS Lambda functions) to modify or augment the content of S3 objects in real-time, providing a dynamic and serverless way to process data on-the-fly during retrieval.

Real world Use-Case: Imagine a scenario where images stored in an S3 bucket need to be dynamically resized based on the device requesting them. S3 Object Lambda can be used to apply a Lambda function that performs real-time image resizing, delivering optimized content tailored to the requesting device.

S3 Object Lambda provides a serverless mechanism to apply custom transformations to S3 objects at the time of retrieval. By using AWS Lambda functions, users can dynamically process and modify content based on specific requirements.

The use of S3 Object Lambda is significant for scenarios where dynamic and real-time processing of S3 object content is required. By leveraging Lambda functions, users can apply custom transformations, augmentations, or filtering to S3 objects, enhancing flexibility and responsiveness in data retrieval on the AWS cloud.
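
A hedged sketch of the Lambda function behind an S3 Object Lambda Access Point, assuming the standard event shape: fetch the original object through the presigned inputS3Url, transform it, and return it with write_get_object_response (the transformation here is a trivial placeholder):

```python
import urllib.request

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    ctx = event["getObjectContext"]

    # Fetch the original object via the presigned URL S3 provides in the event.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read()

    transformed = original.upper()        # placeholder transformation

    # Return the transformed bytes to the caller of the Object Lambda Access Point.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=transformed,
    )
    return {"statusCode": 200}
```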

35
Q

Route 53

Facts

A
  1. Domain registration is one of its many functionalities.
  2. Hosted Zones: managed name servers that AWS provides.
  3. Global service: operates as a single global AWS service backed by a single database.
  4. Globally resilient: can tolerate the failure of one or more Regions and continue functioning.