AWS Compute Services - Lambda Flashcards
Lambda Overview
A serverless compute service
Lambda executes your code only when needed and scales automatically
Lambda functions are stateless - no affinity to the underlying infrastructure
you choose the amount of memory you want to allocate to your functions and lambda allocates proportional CPU power, network bandwidth, and disk I/O
Lambda is SOC, HIPAA, PCI, ISO compliant
Natively supports the following languages:
Node.js
Java
C#
Go
Python
Ruby
Powershell
you can also provide your own custom runtime
components of a lambda application:
function - a script or program that runs in lambda. Lambda passes invocation events to your function. the function processes an event and returns a response
runtimes - lambda runtimes allow functions in different languages to run in the same base execution environment. the runtime sits between the lambda service and your function code, relaying invocation events, context information, and responses between the two
layers - lambda layers are a distribution mechanism for libraries, custom runtimes, and other function dependencies. layers let you manage your in-development function code independently from the unchanging code and resources that it uses
event source - an AWS service or a custom service that triggers your function and executes its logic
downstream resources - an AWS service that your lambda function calls once it is triggered
log streams - while lambda automatically monitors your function invocations and reports metrics to cloudwatch, you can annotate your function code with custom logging statements that allow you to analyze the execution flow and performance of your lambda function
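a minimal handler sketch, assuming the Python 3 runtime: lambda passes the invocation event and a context object to the function, and the return value becomes the response. the event shape here is only illustrative.

import json

def lambda_handler(event, context):
    # Lambda passes the invocation event (usually a dict) and a context
    # object with runtime information (request ID, remaining time, etc.).
    print(json.dumps(event))   # written to the function's CloudWatch log stream
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "processed"}),
    }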
Lambda Overview
Lambda functions
you upload your application code in the form of one or more lambda functions. lambda stores code in S3 and encrypts it at rest
to create a lambda function, you first package your code and dependencies in a deployment package. then you upload the deployment package to create your lambda function
after your lambda function is in production, lambda automatically monitors functions on your behalf, reporting metrics through cloudwatch
configure basic function settings including the description, memory usage, execution timeout, and role that the function will use to execute your code.
environment variables are always encrypted at rest, and can be encrypted in transit as well
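a sketch of adjusting these basic settings with boto3; the function name and variable values are placeholders, not anything defined in these notes.

import boto3

lambda_client = boto3.client("lambda")

# Update memory, timeout, and environment variables on an existing function.
# "my-function" and the variable values below are illustrative placeholders.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    MemorySize=512,    # MB; CPU, network, and disk I/O scale proportionally
    Timeout=30,        # seconds
    Environment={"Variables": {"STAGE": "prod", "TABLE_NAME": "orders"}},
)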
versions and aliases are secondary resources that you can create to manage function deployment and invocation
A layer is a zip archive that contains libraries, a custom runtime, or other dependencies. Use layers to manage your function’s dependencies independently and keep your deployment package small
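a sketch of publishing a layer version from a zip archive already uploaded to S3; the layer name, bucket, and key are assumptions.

import boto3

lambda_client = boto3.client("lambda")

# Publish a new layer version from a zip stored in S3. For Python, the archive
# conventionally places packages under a top-level python/ directory.
response = lambda_client.publish_layer_version(
    LayerName="shared-deps",                                        # placeholder name
    Content={"S3Bucket": "my-bucket", "S3Key": "layers/shared-deps.zip"},
    CompatibleRuntimes=["python3.9"],
)
print(response["LayerVersionArn"])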
you can configure a function to mount an EFS file system to a local directory. with EFS, your function code can access and modify shared resources securely and at high concurrency.
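a sketch of function code using a mounted EFS path; the local mount path (/mnt/shared) and file name must match whatever you configured on the function and are assumptions here.

import os

# The local mount path must match the one configured on the function;
# /mnt/shared is an illustrative example.
SHARED_DIR = "/mnt/shared"

def lambda_handler(event, context):
    path = os.path.join(SHARED_DIR, "counter.txt")
    count = 0
    if os.path.exists(path):
        with open(path) as f:
            count = int(f.read() or 0)
    with open(path, "w") as f:
        f.write(str(count + 1))   # visible to every concurrent instance via EFS
    return {"count": count + 1}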
Lambda functions
Use Lambda environment variables or parameter store or secrets manager?
depends on the use case. Each of them has their own set of benefits but generally you should be using the built-in environment variables. It is free to use and it works as if you are creating environment variables or aliases in a terminal. You also have the option to encrypt your variables using KMS but that entails additional charges.
You can likewise use parameter store if you have variables that need to be referenced by more than one lambda function, for example when you want a centralized place where you can update environment variables in one go. It's also free, and you can similarly use the SecureString data type to have KMS encrypt your values.
the main benefit of using secrets manager is that you can rotate your keys or variables automatically. this is typically used in setups that involve a database or access key credential. secrets manager is a paid service, however, so unless you have secrets to rotate it's better to go with the free services.
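a sketch of reading configuration from each of the three options; the variable name, parameter name, and secret ID are placeholders.

import os
import boto3

# 1. Built-in environment variable (set on the function itself).
stage = os.environ.get("STAGE", "dev")

# 2. Parameter Store - shared across functions; SecureString values are
#    KMS-decrypted when WithDecryption=True. "/myapp/db_url" is a placeholder.
ssm = boto3.client("ssm")
db_url = ssm.get_parameter(Name="/myapp/db_url", WithDecryption=True)["Parameter"]["Value"]

# 3. Secrets Manager - supports automatic rotation. "myapp/db-credentials" is a placeholder.
secrets = boto3.client("secretsmanager")
db_creds = secrets.get_secret_value(SecretId="myapp/db-credentials")["SecretString"]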
environment variable or parameter store
ways to upload code
there are 3 ways to add your code to your lambda function
1. add code inline through the built-in code editor - enter your function code as-is in the built-in editor. choose this option if your code is short or it does not have any external package dependencies.
2. upload a zip file of your code, which will be stored in S3 - upload a zip file which contains your code. Use this option if your code has package dependencies or if your code is long. You need to ensure a few things when using this option.
the filename of your code should match the filename part of your lambda handler setting (filename.handler_function)
your main function should have the same name as the handler function part of your lambda handler setting (filename.handler_function). your file should be executable by your function's programming language. verify the filename extension.
3. reference a path to an S3 bucket where your code is stored - if you already store your function code in S3, you can just reference it in your lambda function. this way you can easily build a CI/CD pipeline and your lambda function will run the latest code that you upload to your S3 bucket. this option has the same requirements as uploading a zip file.
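a sketch of option 3 with boto3, creating a function from a deployment package already in S3; the function name, bucket, key, role ARN, and handler are illustrative placeholders.

import boto3

lambda_client = boto3.client("lambda")

# Create a function whose code lives in S3. The handler follows the
# filename.handler_function convention (file app.py, function lambda_handler).
lambda_client.create_function(
    FunctionName="order-processor",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",   # placeholder role ARN
    Handler="app.lambda_handler",
    Code={"S3Bucket": "my-deploy-bucket", "S3Key": "builds/order-processor.zip"},
)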
ways to upload code
Invoking functions
lambda supports synchronous and asynchronous invocation of a lambda function. you can control the invocation type only when you invoke a lambda function directly (referred to as on-demand invocation)
an event source is the entity that publishes events, and a lambda function is the custom code that processes the events.
event source mapping maps an event source to a lambda function. it enables automatic invocation of your lambda function when events occur. lambda provides event source mappings for the following services: kinesis, dynamodb, simple queue service.
Your function's concurrency is the number of instances that serve requests at a given time. when your function is invoked, lambda allocates an instance of it to process the event. when the function code finishes running, it can handle another request. if the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency
to ensure that a function can always reach a certain level of concurrency, you can configure the function with reserved concurrency. when a function has reserved concurrency, no other function can use that concurrency. reserved concurrency also limits the maximum concurrency for the function
to enable your function to scale without fluctuations in latency, use provisioned concurrency. by allocating provisioned concurrency before an increase in invocations, you can ensure that all requests are served by initialized instances with very low latency
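a sketch of on-demand invocation (synchronous and asynchronous) and of setting reserved concurrency with boto3; the function name and payload are placeholders.

import json
import boto3

lambda_client = boto3.client("lambda")

# Synchronous (RequestResponse) invocation - waits for and returns the result.
resp = lambda_client.invoke(
    FunctionName="order-processor",
    InvocationType="RequestResponse",
    Payload=json.dumps({"orderId": 42}),
)
print(resp["Payload"].read())

# Asynchronous (Event) invocation - lambda queues the event and returns immediately.
lambda_client.invoke(
    FunctionName="order-processor",
    InvocationType="Event",
    Payload=json.dumps({"orderId": 43}),
)

# Reserve 50 concurrent executions for this function (this also caps it at 50).
lambda_client.put_function_concurrency(
    FunctionName="order-processor",
    ReservedConcurrentExecutions=50,
)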
Invoking functions
what is an execution context?
when lambda executes your lambda function, it provisions and manages the resources needed to run it. the execution context is a temporary runtime environment that initializes any external dependencies of your lambda function, such as database connections or http endpoints. this initialization happens during a cold start; reusing the context provides better performance for subsequent invocations because there is no need to re-initialize those external dependencies
after a lambda function is executed, lambda maintains the execution context for some time in anticipation of another lambda function invocation. this way you can design your functions such that they can verify if there are existing database connections or http endpoints that are reusable.
each execution context also provides 512 MB of additional disk space in the /tmp directory. the directory content remains when the execution context is frozen, similar to a cache.
you should ensure that any background processes or callbacks in your code are complete before the code exits so that subsequent invocations won't resume any unfinished processes.
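a sketch of taking advantage of execution context reuse: expensive initialization lives outside the handler so a warm instance reuses it, and /tmp serves as a scratch cache. the table name and cache file are placeholders.

import boto3

# Initialized once per execution context (during the cold start), then reused
# by every invocation that lands on this warm instance.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")          # placeholder table name

CACHE_FILE = "/tmp/lookup-cache.json"     # /tmp content persists while the context is frozen

def lambda_handler(event, context):
    # Reuse the existing resource instead of re-creating it on every invocation.
    item = table.get_item(Key={"id": event["id"]}).get("Item")
    return item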
execution context
configuring a lambda function to access resources in a VPC
in lambda, you can set up your function to establish a connection to your VPC. with this connection your function can access the private resources of your VPC during execution like EC2, RDS, and many others
by default, AWS executes your lambda function code securely within a VPC. alternatively, you can enable your lambda function to access resources inside your private VPC by providing additional VPC-specific configuration information such as VPC subnet IDs and security group IDs. lambda uses this information to set up elastic network interfaces, which enable your lambda function to connect securely to other resources within your VPC
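a sketch of attaching an existing function to a VPC with boto3; the subnet IDs, security group ID, and function name are placeholders.

import boto3

lambda_client = boto3.client("lambda")

# Attach the function to private subnets; lambda creates elastic network
# interfaces in them so the function can reach VPC resources such as RDS.
lambda_client.update_function_configuration(
    FunctionName="order-processor",
    VpcConfig={
        "SubnetIds": ["subnet-0a1b2c3d", "subnet-4e5f6a7b"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)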
configuring with VPC
Lambda@edge
lets you run lambda functions to customize content that cloudfront delivers, executing the functions in AWS locations closer to the viewer. the functions run in response to cloudfront events, without provisioning or managing servers
you can use lambda functions to change cloudfront requests and responses at the following points:
after cloudfront receives a request from a viewer (viewer request)
before cloudfront forwards the request to the origin (origin request)
after cloudfront receives the response from the origin (origin response)
before cloudfront forwards the response to the viewer (viewer response)
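a sketch of a viewer-request function in Python that adds a header before cloudfront processes the request; the header name and value are assumptions for illustration.

def lambda_handler(event, context):
    # Viewer-request trigger: runs after CloudFront receives the request
    # from the viewer and before it checks the cache.
    request = event["Records"][0]["cf"]["request"]

    # Add a custom header; "x-viewer-country-default" is an illustrative name.
    request["headers"]["x-viewer-country-default"] = [
        {"key": "X-Viewer-Country-Default", "value": "US"}
    ]
    return request   # returning the request lets CloudFront continue processing it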
you can automate your serverless application’s release process using CodePipeline and CodeDeploy
Lambda will automatically track the behavior of your lambda function invocations and provide feedback that you can monitor. in addition, it provides metrics that allow you to analyze the full function invocation spectrum, including event source integration and whether downstream resources perform as expected
Lambda@edge
Pricing
you are charged based on the total number of requests for your function and the duration, the time it takes your code to execute
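a sketch of the request + duration (GB-second) math; the rates and workload numbers below are illustrative placeholders, not current published prices, and free-tier allowances are ignored.

# Illustrative pricing calculation - rates vary by region and change over time.
requests_per_month = 5_000_000
avg_duration_sec = 0.3
memory_gb = 0.5                       # 512 MB

price_per_million_requests = 0.20     # USD, placeholder rate
price_per_gb_second = 0.0000166667    # USD, placeholder rate

request_cost = (requests_per_month / 1_000_000) * price_per_million_requests
gb_seconds = requests_per_month * avg_duration_sec * memory_gb
duration_cost = gb_seconds * price_per_gb_second

print(f"requests: ${request_cost:.2f}, duration: ${duration_cost:.2f}")
# requests: $1.00, duration: $12.50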
Pricing