Lambda Flashcards
We are looking to roll out a new version of our Lambda function to production using CodeDeploy. We are confident in our testing, but we don't want a big-bang approach. What three methods can we use to deploy a Lambda function using CodeDeploy, and which one would we NOT use?
Linear traffic shift: shift traffic to the new version in equal increments, e.g. 10% every 1, 2, 3 or 10 minutes (such as Linear10PercentEvery10Minutes)
Canary: switch X% of traffic to the new version, wait 5 or 30 minutes, then switch the rest (such as Canary10Percent30Minutes)
All at once: switch everything immediately.
We probably wouldn't use all-at-once in this case.
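The traffic math behind these strategies can be sketched in a few lines of Python. Note that `linear_shift` and `canary_shift` are illustrative helpers, not part of any AWS SDK, and the assumption that the first linear increment happens at time zero is mine:

```python
def linear_shift(step_pct, interval_min, elapsed_min):
    """Traffic (%) on the new version under a linear shift.

    e.g. Linear10PercentEvery10Minutes: step_pct=10, interval_min=10.
    Assumes the first increment happens immediately at deployment start.
    """
    steps = elapsed_min // interval_min + 1
    return min(100, steps * step_pct)

def canary_shift(canary_pct, bake_min, elapsed_min):
    """Traffic (%) on the new version under a canary shift.

    e.g. Canary10Percent30Minutes: canary_pct=10, bake_min=30.
    A fixed slice bakes for the whole interval, then everything flips.
    """
    return canary_pct if elapsed_min < bake_min else 100
```

For example, a Linear10PercentEvery10Minutes deployment sits at 50% after 45 minutes, while a Canary10Percent30Minutes deployment is still at 10% and only jumps to 100% once the 30-minute bake completes.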
What is a Lambda destination? Does it apply to synchronous or asynchronous invocations, and which services can be used as a destination (4)? How does this differ from a DLQ, and which is recommended?
Lambda destinations let you send the result of an asynchronous invocation, whether success or failure, to another service. For instance, we could send notification of successful processing to one destination and notification of a failure to another.
The result can be sent to SQS, SNS, another Lambda function, or an Amazon EventBridge event bus.
This differs from a DLQ, which can only send failed events to SNS or SQS. Destinations are the preferred method for async invocations. They can't be used for event source mappings, as those invocations are synchronous.
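A hedged sketch of wiring up both destinations with boto3: `build_destination_config` simply assembles the kwargs for `put_function_event_invoke_config`, and the ARNs in the usage comment are placeholders:

```python
def build_destination_config(function_name, on_success_arn, on_failure_arn):
    # Kwargs for boto3's lambda_client.put_function_event_invoke_config().
    # Each destination ARN may point at SQS, SNS, another Lambda function
    # or an EventBridge event bus.
    return {
        "FunctionName": function_name,
        "DestinationConfig": {
            "OnSuccess": {"Destination": on_success_arn},
            "OnFailure": {"Destination": on_failure_arn},
        },
    }

# The real call (requires AWS credentials), sketched only:
# import boto3
# boto3.client("lambda").put_function_event_invoke_config(
#     **build_destination_config(
#         "my-fn",
#         "arn:aws:sns:eu-west-1:111122223333:success-topic",   # placeholder
#         "arn:aws:sqs:eu-west-1:111122223333:failure-queue"))  # placeholder
```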
Does lambda have an out of the box caching feature?
No. API Gateway can provide caching in front of Lambda.
We have a heavily CPU-bound function, to which we have allocated 2.5 GB of RAM. This has helped somewhat, but not as much as expected. Why is this? Is there a 'magic number' that we need to be aware of?
When you increase RAM past 1,792 MB the function gets more than one vCPU. To benefit from this, the code will need to be multi-threaded (or, in Python, multi-process).
If I have a Lambda function consuming a STANDARD SQS queue, at what rate PER MINUTE will Lambda scale out to process the queue, and is there a LIMIT? In what order will messages be processed?
Messages are processed with best-effort ordering, and Lambda scales out to work through the queue as quickly as possible, adding up to 60 more concurrent invocations per minute, up to 1,000 concurrent executions.
If I want to do a rolling deployment of updated Lambda functions in my environment using CodePipeline, would I use CodeBuild or CodeDeploy, and why?
You configure a rolling deployment by using AWS CodeDeploy and AWS SAM. CodeDeploy is a service that automates application deployments to Amazon compute platforms such as Amazon EC2 and AWS Lambda.
Where would I be able to find data on how many times my lambda function has been called, the duration of the execution and how many concurrent executions there have been?
Lambda's CloudWatch metrics contain this data: Invocations, Duration and ConcurrentExecutions.
Your Lambda function is invoked asynchronously and some events fail to be processed after 3 retries. You'd like to collect and analyse these events later on. What should you do?
- Create a DLQ and send to SNS
- Create a DLQ and send to SQS
Create a DLQ and send to SQS. We won't use SNS in this case because we want to hold the messages for some days so we have time to consume them.
I have a batch process that currently executes nightly on an EC2 instance between midnight and 01:30. We’re looking to save costs on this execution and lambda has been mooted as a solution. Is this appropriate?
No, Lambda functions have a maximum timeout of 15 minutes, so a 90-minute batch cannot run on Lambda.
I need to pass sensitive data to my Lambda function to allow connectivity to a database. What is the MOST secure way of achieving this?
You would pass KMS-encrypted values to your function using environment variables and decrypt them at runtime.
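A minimal sketch of decrypting such a variable at runtime. The injectable `kms_client` parameter exists purely for testability, and `DB_PASSWORD` is a hypothetical variable name:

```python
import base64
import os

def decrypt_env(name, kms_client=None):
    """Decrypt a base64-encoded, KMS-encrypted environment variable."""
    if kms_client is None:
        import boto3  # available in the Lambda runtime
        kms_client = boto3.client("kms")
    ciphertext = base64.b64decode(os.environ[name])
    return kms_client.decrypt(CiphertextBlob=ciphertext)["Plaintext"].decode()

# Inside the function you would call e.g. decrypt_env("DB_PASSWORD") once,
# ideally caching the plaintext for the life of the execution context so the
# KMS call does not repeat on every invocation.
```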
By default, for an asynchronous Lambda invocation, how many times will Lambda retry a failed function, and what is the delay between each attempt?
2 times, with a 1 minute and then a 2 minute delay
What is the maximum memory that can be allocated to a Lambda function (GB)? Does increasing RAM have any impact on CPU or network? What's the default timeout for Lambda?
3 GB (3,008 MB). Assigning more RAM to a function also increases CPU and network bandwidth proportionally.
The default timeout is 3 seconds.
If an asynchronous Lambda invocation exceeds its concurrency limits, over what time period will it retry, and what is the maximum interval between retries?
The function will retry automatically for up to 6 hours, using exponential backoff from 1 second up to a maximum of 5 minutes between attempts. If this fails, the event can go to a DLQ.
I have a VPC with a public and a private subnet. My private subnet routes 0.0.0.0/0 traffic to a NAT gateway in my public subnet, which in turn routes to an IGW. When I deploy my Lambda function, where would I deploy it to allow access to the internet: the public or the private subnet?
You would deploy your function in the PRIVATE subnet. Traffic then gets routed to the NAT and on to the IGW. This is the only way a VPC-attached Lambda function can reach the public internet.
What is the function of the “Execution Context” for Lambda?
The execution context is a temporary runtime environment for initialising external dependencies in your Lambda code (such as DB connections). The context is temporary, but it persists for a while in anticipation of another invocation. Code placed above the def handler runs in this context and is reused across warm invocations.
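For example, in Python (`make_connection` is a hypothetical stand-in for a real DB client call such as `pymysql.connect`):

```python
import os

def make_connection():
    # Stand-in for an expensive call such as pymysql.connect(...)
    return {"host": os.environ.get("DB_HOST", "localhost"), "open": True}

# Runs once per execution context (at cold start) and is then reused across
# warm invocations; this is the code "above the def handler".
db_conn = make_connection()

def handler(event, context):
    # Reuses db_conn instead of reconnecting on every invocation.
    return {"statusCode": 200, "db_open": db_conn["open"]}
```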
What is the total size for all environment variables in Lambda?
The total size of all environment variables can’t exceed 4 KB.
If we are using Lambda event source mapping configured with an SQS queue, and something goes wrong while the Lambda function is processing data, where would we set up the DLQ, on SQS or Lambda, and why?
The DLQ would need to be set up on the SQS side, as event source mapping is a synchronous pattern; setting up a DLQ on Lambda won't work, since that only applies to the async pattern.
I have an application which makes heavy use of Lambda. It uses three interaction channels: one for public user access via an Application Load Balancer, one via API Gateway for B2B functions, and one for internal applications via the SDK. Currently there is no reserved concurrency set on any of my functions. What risks do I have under this setup?
Lambda allows 1,000 concurrent executions for all functions in your account. If reserved concurrency is not set, a sudden spike in load (such as over the public interface) can use up the whole 1,000-execution limit, and the other Lambda functions will then be throttled.
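A back-of-the-envelope sketch of carving reservations out of the account limit; `plan_concurrency` is an illustrative helper, the function names are hypothetical, and 1,000 is the default account limit:

```python
def plan_concurrency(account_limit, reservations):
    """Return the concurrency left for functions with no reservation.

    reservations: {function_name: reserved_concurrency}
    """
    reserved = sum(reservations.values())
    if reserved > account_limit:
        raise ValueError("reservations exceed the account limit")
    return account_limit - reserved

# Reserving capacity per channel stops a public-traffic spike from starving
# the API Gateway and SDK channels:
unreserved = plan_concurrency(1000, {"public-alb": 600, "b2b-api": 200})

# The real setting is applied per function with boto3, e.g.:
# lambda_client.put_function_concurrency(
#     FunctionName="public-alb", ReservedConcurrentExecutions=600)
```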
We are implementing X-Ray in our lambda function. How would we pass the X-Ray Daemon IP address and port to our lambda function so we don’t need to hard code it?
We would use environment variables to pass X-Ray config to our function: AWS_XRAY_DAEMON_ADDRESS.
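A small sketch of reading that variable in the function; the fallback default is an assumption for local runs, not something Lambda guarantees:

```python
import os

def xray_daemon_endpoint(default="127.0.0.1:2000"):
    # Lambda sets AWS_XRAY_DAEMON_ADDRESS as "host:port" when active tracing
    # is enabled; the default here is only for running outside Lambda.
    addr = os.environ.get("AWS_XRAY_DAEMON_ADDRESS", default)
    host, port = addr.rsplit(":", 1)
    return host, int(port)
```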
Assume we have an event source configured for a Kinesis stream consisting of 10 shards. If my Lambda function returns an error when processing a batch, what will happen to that batch, and how will in-order processing be ensured? If I am using a DLQ, what two services could I send the notification to?
By default, if your function returns an error, the entire batch is reprocessed until the function succeeds, or the items in the batch expire. To ensure in-order processing, processing for the affected shard is paused until the error is resolved. Notifications can be sent to SQS or SNS.
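A minimal handler sketch showing the all-or-nothing batch behaviour: any raised exception fails the whole batch, which Lambda then re-sends for that shard. The `"id"` field is a hypothetical payload key:

```python
import base64
import json

def handler(event, context):
    processed = []
    for record in event["Records"]:
        # Kinesis record data arrives base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if "id" not in payload:
            # Raising fails the whole batch; by default Lambda re-sends all
            # of these records until success or expiry, pausing the shard.
            raise ValueError("bad record")
        processed.append(payload["id"])
    return {"processed": processed}
```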
What sort of policy allows an ALB to invoke a lambda function - Resource or Role based?
A resource-based policy grants permission for an ALB to invoke a Lambda function.
I have a Lambda function set up with an event mapping to a DynamoDB stream. I want to find out how far my function is lagging behind in processing versus the amount of data being fed into DynamoDB. What built-in feature of Lambda event mappings could I use to find this, and where would I look? (Hint: it's something to do with the iterator.)
Lambda's metrics include IteratorAge, which indicates how far behind you are in reading from the stream.
When you deploy a lambda function behind an ALB, where is the function registered with respect to the ALB? (Hint: think in terms of scaling)
The Lambda function must be registered in a target group for the ALB.
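An ALB invokes the function through that Lambda target group, and the response must follow the ALB contract (statusCode, headers, string body). A minimal sketch; the `name` query parameter is a hypothetical example:

```python
def handler(event, context):
    # ALB sends queryStringParameters (possibly absent) along with the request.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    # The ALB expects this exact response shape from a Lambda target.
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": f"hello {name}",
    }
```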
You are uploading your lambda bundle to production. This contains your function code and all required dependencies. How big can the zip file be, and what is the maximum allowable size for your code and dependencies when uncompressed?
50MB Compressed, 250MB uncompressed.
In terms of security - what sort of IAM policy is used when:
- A Lambda Function Calls an AWS Service
- An AWS service calls a lambda function?
Hint: Think in terms of roles and policies
When a Lambda function calls an AWS service, it uses an execution role.
When a service calls a Lambda function, it uses a resource-based policy.
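Tying this back to the ALB card: granting the invoke permission adds a statement to the function's resource-based policy. A hedged sketch that just builds the kwargs for boto3's `add_permission`; the statement ID is an arbitrary label:

```python
def alb_invoke_permission(function_name, target_group_arn):
    # Kwargs for lambda_client.add_permission(...), granting the Elastic Load
    # Balancing service principal the right to invoke this function from the
    # given target group. This edits the function's resource-based policy.
    return {
        "FunctionName": function_name,
        "StatementId": "allow-alb-invoke",  # arbitrary unique label
        "Action": "lambda:InvokeFunction",
        "Principal": "elasticloadbalancing.amazonaws.com",
        "SourceArn": target_group_arn,
    }

# Sketched usage (requires credentials):
# import boto3
# boto3.client("lambda").add_permission(
#     **alb_invoke_permission("my-fn", "arn:aws:elasticloadbalancing:..."))
```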