Messaging Flashcards
SQS Message Retention Period
Default is 4 days; configurable from 1 minute up to 14 days.
attribute: MessageRetentionPeriod
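A minimal sketch of setting the retention period on an existing queue, assuming the AWS SDK for Java v2; the queue URL is a placeholder.

import java.util.Map;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;
import software.amazon.awssdk.services.sqs.model.SetQueueAttributesRequest;

public class SetRetention {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            sqs.setQueueAttributes(SetQueueAttributesRequest.builder()
                .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue") // placeholder
                // 14 days expressed in seconds (valid range: 60 to 1209600)
                .attributes(Map.of(QueueAttributeName.MESSAGE_RETENTION_PERIOD, "1209600"))
                .build());
        }
    }
}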
SNS Max Subscribers and Max Topics?
12.5 million subscribers per topic. 100K topics per account.
Kinesis Data Expiry
Default is 24 hours. Extended retention goes up to 7 days. Long-term retention goes up to 365 days.
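A minimal sketch of extending a stream's retention, assuming the AWS SDK for Java v2; the stream name is a placeholder.

import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.IncreaseStreamRetentionPeriodRequest;

public class ExtendKinesisRetention {
    public static void main(String[] args) {
        try (KinesisClient kinesis = KinesisClient.create()) {
            kinesis.increaseStreamRetentionPeriod(IncreaseStreamRetentionPeriodRequest.builder()
                .streamName("my-stream")       // placeholder stream name
                .retentionPeriodHours(168)     // 7 days; the maximum is 8760 hours (365 days)
                .build());
        }
    }
}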
Max SQS message size
256 KB.
Use the SQS Extended Client Library for Java to handle messages larger than 256 KB. It stores the payload in S3 and sends only a reference to it through SQS.
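A minimal sketch of sending an oversized message, assuming the 2.x amazon-sqs-java-extended-client-lib (which wraps the SDK v2 SqsClient); the bucket name and queue URL are placeholders.

import com.amazon.sqs.javamessaging.AmazonSQSExtendedClient;
import com.amazon.sqs.javamessaging.ExtendedClientConfiguration;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class LargePayloadExample {
    public static void main(String[] args) {
        // Payloads over the threshold are written to this bucket; SQS carries only a pointer.
        ExtendedClientConfiguration config = new ExtendedClientConfiguration()
            .withPayloadSupportEnabled(S3Client.create(), "my-large-payload-bucket"); // placeholder bucket

        SqsClient sqsExtended = new AmazonSQSExtendedClient(SqsClient.create(), config);
        sqsExtended.sendMessage(SendMessageRequest.builder()
            .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue") // placeholder
            .messageBody("x".repeat(300 * 1024)) // ~300 KB body, above the 256 KB limit
            .build());
    }
}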
Max SQS messages
Unlimited.
Max of 120K in-flight messages for a standard queue and 20K for a FIFO queue.
SQS Visibility Timeout
Default 30 seconds. Configurable from 0 seconds up to 12 hours.
attribute: VisibilityTimeout
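A minimal sketch of extending the visibility timeout on a message that needs more processing time, assuming the AWS SDK for Java v2; the queue URL is a placeholder.

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.ChangeMessageVisibilityRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class ExtendVisibility {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder

        try (SqsClient sqs = SqsClient.create()) {
            for (Message m : sqs.receiveMessage(ReceiveMessageRequest.builder()
                    .queueUrl(queueUrl).maxNumberOfMessages(1).build()).messages()) {
                // Hide this message for another 10 minutes while it is being processed.
                sqs.changeMessageVisibility(ChangeMessageVisibilityRequest.builder()
                    .queueUrl(queueUrl)
                    .receiptHandle(m.receiptHandle())
                    .visibilityTimeout(600)
                    .build());
            }
        }
    }
}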
SQS Delivery Delay
Default 0 seconds. Maximum 15 minutes (900 seconds).
attribute: DelaySeconds
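A minimal sketch of a per-message delay (which overrides the queue-level DelaySeconds on standard queues), assuming the AWS SDK for Java v2; the queue URL is a placeholder.

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class DelayedSend {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            sqs.sendMessage(SendMessageRequest.builder()
                .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue") // placeholder
                .messageBody("hello")
                .delaySeconds(900) // maximum delay: 900 seconds = 15 minutes
                .build());
        }
    }
}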
How to Scale Standard SQS Queue
Add more consumers.
Can you change the SQS queue type after creation?
NO
How to remove a queue and its contents?
How to just empty the queue?
DeleteQueue empties and removes the queue; deletion can take up to 60 seconds.
PurgeQueue just empties it.
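A minimal sketch showing both calls, assuming the AWS SDK for Java v2; the queue URL is a placeholder.

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteQueueRequest;
import software.amazon.awssdk.services.sqs.model.PurgeQueueRequest;

public class PurgeOrDelete {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder

        try (SqsClient sqs = SqsClient.create()) {
            // Empty the queue but keep it.
            sqs.purgeQueue(PurgeQueueRequest.builder().queueUrl(queueUrl).build());

            // Or delete the queue and everything still in it.
            sqs.deleteQueue(DeleteQueueRequest.builder().queueUrl(queueUrl).build());
        }
    }
}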
Elastic Beanstalk Updates - Rolling vs Rolling with Additional Batches
A rolling update takes down old instances in small batches and replaces them with the new version, so users may sometimes hit the old version and sometimes the new one during the rollout. No additional instances are spun up, so cost stays the same, but the deployment takes longer and there is a minor performance hit while capacity is reduced.
Rolling with Additional Batches is similar, but it spins up an extra batch of new instances (more instances, more cost), so full capacity, and therefore performance, is maintained throughout the deployment.
In SQS, what's the best way to handle highly variable traffic spikes?
Use a backlog-per-instance metric with a target tracking scaling policy.
While you can use a CloudWatch SQS metric such as ApproximateNumberOfMessagesVisible, it doesn't account for how many consumers are running, so backlog per instance = visible messages / running instances (see the sketch below).
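A minimal sketch of computing backlog per instance and publishing it as a custom CloudWatch metric for the target tracking policy to scale on, assuming the AWS SDK for Java v2; the queue URL, namespace, metric name, and instance count are placeholders (in practice the instance count comes from the Auto Scaling group).

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.MetricDatum;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricDataRequest;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.GetQueueAttributesRequest;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

public class BacklogPerInstance {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder
        int runningInstances = 4; // placeholder; read this from the Auto Scaling group

        try (SqsClient sqs = SqsClient.create(); CloudWatchClient cw = CloudWatchClient.create()) {
            // The queue attribute ApproximateNumberOfMessages corresponds to the
            // CloudWatch metric ApproximateNumberOfMessagesVisible.
            String visible = sqs.getQueueAttributes(GetQueueAttributesRequest.builder()
                    .queueUrl(queueUrl)
                    .attributeNames(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES)
                    .build())
                .attributes().get(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES);

            double backlogPerInstance = Double.parseDouble(visible) / runningInstances;

            // Publish the custom metric that the target tracking policy scales on.
            cw.putMetricData(PutMetricDataRequest.builder()
                .namespace("MyApp/SQS")
                .metricData(MetricDatum.builder()
                    .metricName("BacklogPerInstance")
                    .value(backlogPerInstance)
                    .build())
                .build());
        }
    }
}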
What options do you have to address the issue of ProvisionedThroughputExceeded for Kinesis?
1) Increase the number of shards within your data streams to provide enough capacity
2) Configure the data producer to retry with an exponential backoff (see the sketch after this list)
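A minimal sketch of option 2, assuming the AWS SDK for Java v2; the stream name, partition key, and retry limits are placeholders.

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.ProvisionedThroughputExceededException;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;

public class BackoffProducer {
    public static void main(String[] args) throws InterruptedException {
        try (KinesisClient kinesis = KinesisClient.create()) {
            putWithBackoff(kinesis, "my-stream", "user-42", "{\"event\":\"click\"}");
        }
    }

    static void putWithBackoff(KinesisClient kinesis, String stream, String key, String payload)
            throws InterruptedException {
        long delayMs = 100; // initial backoff
        for (int attempt = 1; attempt <= 5; attempt++) {
            try {
                kinesis.putRecord(PutRecordRequest.builder()
                    .streamName(stream)
                    .partitionKey(key)
                    .data(SdkBytes.fromUtf8String(payload))
                    .build());
                return; // success
            } catch (ProvisionedThroughputExceededException e) {
                Thread.sleep(delayMs); // wait, then retry with a doubled delay
                delayMs *= 2;
            }
        }
        throw new IllegalStateException("Gave up after repeated throttling");
    }
}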
How long does Kinesis keep data for?
Default 24 hours. Max 365 days.
Kinesis Data Streams or Kinesis Data Firehose?
Kinesis Data Streams are used where an unbounded stream of data needs to be worked on in real time. Data is stored for 24 hours by default but can be retained for up to 365 days. Requires custom consumer code.
Kinesis Data Firehose delivery streams are used when data needs to be delivered to a storage destination, such as S3.
Firehose handles loading data streams directly into AWS products for processing (see the sketch after this list).
Fully managed; delivers to S3, Splunk, Redshift, and Elasticsearch/OpenSearch
Serverless data transformations with Lambda
Near real time (lowest buffer time is 1 minute)
Automated Scaling
No data storage
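A minimal sketch of sending a record to an existing delivery stream that is already configured to deliver to S3, assuming the AWS SDK for Java v2; the delivery stream name is a placeholder.

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.firehose.FirehoseClient;
import software.amazon.awssdk.services.firehose.model.PutRecordRequest;
import software.amazon.awssdk.services.firehose.model.Record;

public class FirehosePut {
    public static void main(String[] args) {
        try (FirehoseClient firehose = FirehoseClient.create()) {
            firehose.putRecord(PutRecordRequest.builder()
                .deliveryStreamName("my-delivery-stream") // placeholder
                .record(Record.builder()
                    .data(SdkBytes.fromUtf8String("{\"event\":\"click\"}\n"))
                    .build())
                .build());
        }
    }
}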