General notes 1 Flashcards
SNS vs SQS
SNS
* publish/subscriber pattern
* uses a push mechanism to immediately deliver messages to subscribers
* 1 message sent out to multiple consumers via topics
* suited to real-time apps
* does NOT persist messages; it delivers to subscribers that are present and then deletes them
SQS
* queueing system
* messages delivered through a long-polling (pull) mechanism
* 1 message usually consumed by 1 consumer
* suited to message-processing use cases
* messages persist (from 1 minute to 14 days); however, you should delete them from the queue after they are consumed.
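A minimal boto3 sketch contrasting the two delivery models; the topic ARN and queue URL are hypothetical placeholders.

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# SNS: publish once; the topic pushes the message to every subscriber.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:example-topic",  # hypothetical ARN
    Message="order created",
)

# SQS: a consumer long-polls (pull) and must delete the message itself.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical URL
resp = sqs.receive_message(
    QueueUrl=queue_url,
    WaitTimeSeconds=20,       # long polling
    MaxNumberOfMessages=1,
)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```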
OSI Model 7 layers
- Layer 7 = application (HTTP, FTP, SMTP)
- Layer 6 = presentation (TLS, SSL)
- Layer 5 = session (sockets)
- Layer 4 = transport (TCP, UDP)
- Layer 3 = network (IP, ICMP, IGMP, IPsec)
- Layer 2 = data link (Ethernet, Wi-Fi)
- Layer 1 = physical (fiber)
More info - https://twitter.com/alexxubyte/status/1752001717699592287
Where can Amazon S3 publish events to such as create/delete operations on S3 data?
Amazon S3 supports the following destinations where it can publish events:
– Amazon Simple Notification Service (Amazon SNS) topic (publishes to multiple subscribers of the topic, message is automatically deleted when published)
– Amazon Simple Queue Service (Amazon SQS) queue (one recipient, pull mechanism using long polling, message persists unless deleted/expires)
– AWS Lambda
Take note that Amazon S3 event notifications are designed to be delivered at least once and to one destination only. You cannot attach two or more SNS topics or SQS queues to a single S3 event notification.
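A minimal sketch of wiring S3 event notifications to an SQS queue with boto3; the bucket name and queue ARN are hypothetical.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:example-queue",
                "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
            }
        ]
        # "TopicConfigurations" (SNS) and "LambdaFunctionConfigurations" (Lambda)
        # are the other supported destination types.
    },
)
```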
API Gateway - handling traffic spikes globally
Amazon API Gateway provides throttling at multiple levels including global and by a service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds.
Amazon API Gateway tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response.
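A hedged sketch of setting the stage-level throttling limits mentioned above (1,000 rps rate, 2,000 burst) with boto3; the REST API ID and stage name are hypothetical.

```python
import boto3

apigw = boto3.client("apigateway")
apigw.update_stage(
    restApiId="abc123",   # hypothetical REST API ID
    stageName="prod",
    patchOperations=[
        # "/*/*" applies the setting to all resources and methods in the stage
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "1000"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "2000"},
    ],
)
```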
Aurora Auto Scaling
Aurora Auto Scaling is particularly useful for businesses that have fluctuating workloads. It ensures that your database cluster scales up or down as needed without manual intervention. This feature saves time and resources, allowing businesses to focus on other aspects of their operations. Aurora Auto Scaling is also cost-effective, as it helps minimize unnecessary expenses associated with overprovisioning or underprovisioning database resources.
For a workload with a surge in read traffic during peak periods, Aurora Auto Scaling dynamically manages resources by adding or removing Aurora Replicas as demand changes. This dynamic management ensures that you pay for the extra resources only when they are genuinely required.
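A minimal sketch of enabling Aurora replica auto scaling through Application Auto Scaling with boto3; the cluster identifier, capacity limits, and target value are hypothetical.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Step 1: register the Aurora cluster's replica count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:example-aurora-cluster",      # hypothetical cluster ID
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Step 2: attach a target-tracking policy on average reader CPU utilization.
autoscaling.put_scaling_policy(
    PolicyName="aurora-read-scaling",
    ServiceNamespace="rds",
    ResourceId="cluster:example-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```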
Amazon EFS and NFS operating system support
Amazon EFS (Elastic File System) supports Linux only; it does not support Windows.
NFS (Network File System), the protocol EFS exposes, is mainly used with Linux.
How to handle bursts in traffic within seconds
When the requirement is to handle a burst of traffic within seconds, use AWS Lambda, because Lambda functions can absorb reasonable bursts of traffic for approximately 15 to 30 minutes.
Lambda can scale faster than the regular Auto Scaling feature of Amazon EC2, Amazon Elastic Beanstalk, or Amazon ECS. This is because AWS Lambda is more lightweight than other compute services. Under the hood, Lambda can run your code on thousands of available AWS-managed EC2 instances (that could already be running) within seconds to accommodate traffic. This is faster than the Auto Scaling process of launching new EC2 instances, which can take a few minutes or so. An alternative is to overprovision your compute capacity, but that incurs significant costs. The best option given the requirements is a combination of AWS Lambda and Amazon API Gateway.
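A minimal sketch of the Lambda side of this combination: a handler behind an API Gateway proxy integration that simply echoes the request path, purely for illustration.

```python
import json

def lambda_handler(event, context):
    # With a proxy integration, API Gateway passes the HTTP request details in `event`.
    path = event.get("path", "/")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"handled {path}"}),
    }
```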
IAM (Identity & Access Management) database authentication
You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance. Instead, you use an authentication token.
An authentication token is a unique string of characters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don’t need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication.
IAM database authentication provides the following benefits:
* Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
* You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
* For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
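A hedged sketch of connecting to a MySQL DB instance with an IAM authentication token; the endpoint, username, and CA bundle path are hypothetical, and the pymysql package is assumed to be installed.

```python
import boto3
import pymysql

rds = boto3.client("rds")

# Generate a short-lived (15-minute) authentication token instead of using a password.
token = rds.generate_db_auth_token(
    DBHostname="example-db.abcdefg.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    Port=3306,
    DBUsername="iam_db_user",
)

# The token is passed in place of a password; SSL is required for IAM auth.
conn = pymysql.connect(
    host="example-db.abcdefg.us-east-1.rds.amazonaws.com",
    port=3306,
    user="iam_db_user",
    password=token,
    ssl={"ca": "/path/to/rds-ca-bundle.pem"},  # hypothetical CA bundle path
)
```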
Metrics not supported in CloudWatch for EC2, which require creating a custom metric
Custom metrics in CloudWatch:
* Memory utilization
* Disk swap utilization
* Disk space utilization
* Page file utilization
* Log collection
EC2 supports the following metrics out of the box:
* CPU Utilization
* Disk Reads activity
* Network packets out
* etc…
Note: Enhanced Monitoring is a feature of Amazon RDS, not EC2
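A minimal sketch of publishing a custom memory-utilization metric for an EC2 instance with boto3; the namespace, instance ID, and value are hypothetical (in practice the CloudWatch agent usually collects this for you).

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[
        {
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": 72.5,
            "Unit": "Percent",
        }
    ],
)
```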
Cardinality in databases
In the context of a database, cardinality is a measure of the uniqueness of values in the data. Low cardinality means few unique values; high cardinality means many unique values.
Remember that the more distinct partition key values your workload accesses, the more those requests are spread across the partitioned space. Conversely, the fewer distinct partition key values there are, the less evenly requests are spread across the partitioned space, which effectively slows performance.
High cardinality is better for performance.
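A minimal sketch illustrating a high-cardinality partition key choice in DynamoDB (a per-user ID rather than a low-cardinality value such as country); the table and attribute names are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},   # many unique values
        {"AttributeName": "order_ts", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},      # high-cardinality partition key
        {"AttributeName": "order_ts", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```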
AWS Resource Access Manager (RAM)
To share resources across multiple AWS accounts, the recommended combination is:
* Consolidate all of the company’s accounts using AWS Organizations.
* Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with your AWS accounts.
AWS Resource Access Manager (RAM) is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. You can share AWS Transit Gateways, subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules with RAM.
Many organizations use multiple accounts to create administrative or billing isolation, and limit the impact of errors. RAM eliminates the need to create duplicate resources in multiple accounts, reducing the operational overhead of managing those resources in every single account you own. You can create resources centrally in a multi-account environment, and use RAM to share those resources across accounts in three simple steps: create a Resource Share, specify resources, and specify accounts. RAM is available to you at no additional charge.
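A hedged sketch of those three steps (create a resource share, specify resources, specify accounts) with boto3; the subnet ARN and member account ID are hypothetical.

```python
import boto3

ram = boto3.client("ram")
ram.create_resource_share(
    name="shared-network",
    resourceArns=[
        # hypothetical subnet to share with other accounts
        "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-0123456789abcdef0"
    ],
    principals=["210987654321"],     # hypothetical member account ID
    allowExternalPrincipals=False,   # restrict sharing to accounts in the AWS Organization
)
```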
AWS Shield & AWS Shield Advanced
AWS Shield = DDoS protection
AWS WAF = SQL injection, XSS protection.
AWS Shield
AWS Shield is a managed distributed denial of service (DDoS) protection service that safeguards applications running on AWS. It provides dynamic detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
AWS Shield Advanced
For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced. In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.
AWS Shield Advanced also gives you 24×7 access to the AWS DDoS Response Team (DRT) and protection against DDoS-related spikes in your Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 charges.
Note: Even though AWS WAF can help you block common attack patterns to your VPC such as SQL injection or cross-site scripting, this is still not enough to withstand DDoS attacks. It is better to use AWS Shield in this scenario.
Amazon Macie
Amazon Macie = PII / sensitive data security service
Amazon Macie is an ML-powered security service that helps you prevent data loss by automatically discovering, classifying, and protecting sensitive data stored in Amazon S3. Amazon Macie uses machine learning to recognize sensitive data such as personally identifiable information (PII) or intellectual property, assigns a business value, and provides visibility into where this data is stored and how it is being used in your organization.
AWS Directory Service AD Connector
Use AWS Directory Service AD Connector for integration with Active Directory.
Note: AWS Directory Service Simple AD just provides a subset of features offered by AWS Managed Microsoft AD
S3 object lock
S3 Object Lock provides two retention modes:
* Governance mode - users with specific IAM permissions can overwrite/delete a protected object version during the retention period
* Compliance mode - no user can overwrite/delete a protected object version during the retention period
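A minimal sketch of uploading an object with a compliance-mode retention period; the bucket (which must have Object Lock enabled), key, and retention date are hypothetical.

```python
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-locked-bucket",
    Key="reports/2024-audit.pdf",
    Body=b"...report bytes...",
    ObjectLockMode="COMPLIANCE",  # or "GOVERNANCE"
    ObjectLockRetainUntilDate=datetime(2026, 1, 1, tzinfo=timezone.utc),
)
```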