Explore Azure Functions Flashcards
What are Azure Functions?
- serverless solution that allows you to write less code, maintain less infrastructure and save on costs
- No worries about managing infrastructure; Azure provides all the up-to-date resources needed to keep apps running
What are the two key components of Azure Functions?
- Triggers; provide a way to start execution of code
- Bindings; simplify coding for input and output data
- All functions must have one trigger but they do not have to have a binding
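For illustration, here is a minimal sketch of a single function with a trigger and an output binding, written against the Python v2 programming model. The route, the "outqueue" queue name and the use of the AzureWebJobsStorage connection setting are assumptions for the example, not part of the flashcards.

```python
# A minimal sketch: one function, one trigger, one output binding (Python v2 model).
import azure.functions as func

app = func.FunctionApp()

@app.route(route="echo", auth_level=func.AuthLevel.ANONYMOUS)   # trigger: an HTTP request starts execution
@app.queue_output(arg_name="msg",                               # binding: simplifies writing output data
                  queue_name="outqueue",
                  connection="AzureWebJobsStorage")
def echo(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    body = req.get_body().decode() or "empty"
    msg.set(body)                                               # the output binding handles the queue write
    return func.HttpResponse(f"Queued: {body}")
```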
How are functions and logic apps different from one another? (Long answer)
- Functions are a serverless compute service; logic apps are a serverless workflow integration platform
- Functions use code to create orchestrations; logic apps use a GUI/config files
- Functions are code-first (imperative); logic apps are designer-first (declarative)
- In Functions each activity is a function that we write code for; logic apps have a collection of ready-made actions instead
- Functions are monitored with Application Insights; logic apps use Azure Monitor logs
What are orchestrations in terms of functions?
- a collection of functions that are executed to complete a task
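As a hedged illustration of a code-first orchestration, here is a small Durable Functions sketch in the Python v2 model. The orchestrator and activity names and the route are made up for the example, and it assumes the Durable Functions extension (azure-functions-durable package) is installed.

```python
# A sketch of an orchestration: a client starts it, an orchestrator coordinates
# a collection of functions (activities) to complete a task (Python v2 model).
import azure.functions as func
import azure.durable_functions as df

app = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

# HTTP-triggered client function that starts the orchestration
@app.route(route="start")
@app.durable_client_input(client_name="client")
async def start(req: func.HttpRequest, client) -> func.HttpResponse:
    instance_id = await client.start_new("greet_orchestrator")
    return client.create_check_status_response(req, instance_id)

# Orchestrator: calls activities in sequence to complete the task
@app.orchestration_trigger(context_name="context")
def greet_orchestrator(context: df.DurableOrchestrationContext):
    first = yield context.call_activity("say_hello", "Tokyo")
    second = yield context.call_activity("say_hello", "London")
    return [first, second]

# Activity: the individual unit of work the orchestrator coordinates
@app.activity_trigger(input_name="city")
def say_hello(city: str) -> str:
    return f"Hello, {city}!"
```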
How are functions and webjobs similar?
- Both are code-first integration services designed for devs
- built on app service and support features such as source control integration, authentication and monitoring with Application Insights
- functions are built on top of the WebJobs SDK, sharing many of the same triggers and connections
How are functions and webjobs different?
- Only functions provide automatic scaling, development and testing in the browser, pay-per-use pricing and integration with logic apps
- WebJobs triggers include timer, storage queues and blobs, Service Bus queues and topics, Cosmos DB, Event Hubs and the file system
- Function triggers include all of those plus HTTP/WebHook and Azure Event Grid
- functions offer more dev productivity and more options for programming languages, dev environments, Azure service integration and pricing
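For illustration, a minimal sketch of two trigger types that functions shares with the WebJobs SDK, a timer trigger and a blob trigger, in the Python v2 model. The CRON schedule, container path and connection name are assumptions for the example.

```python
# A sketch of timer and blob triggers, both shared with the WebJobs SDK (Python v2 model).
import logging
import azure.functions as func

app = func.FunctionApp()

@app.timer_trigger(arg_name="timer", schedule="0 */5 * * * *")  # fires every 5 minutes
def heartbeat(timer: func.TimerRequest) -> None:
    logging.info("Timer fired; past due: %s", timer.past_due)

@app.blob_trigger(arg_name="blob",
                  path="samples/{name}",                        # container/path pattern to watch
                  connection="AzureWebJobsStorage")
def on_blob(blob: func.InputStream) -> None:
    logging.info("New blob: %s (%s bytes)", blob.name, blob.length)
```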
What hosting plans are available for functions?
- consumption
- premium
- app service/dedicated
- all available on windows and linux
What does the function hosting plan dictate?
- how your function app is scaled
- the resources available to each function app instance
- support for advanced functionality such as vnet connectivity
What are the benefits of the consumption function hosting plan?
- default hosting plan that scales automatically and you only pay for compute resources when your functions are running
- Instances of the functions host are dynamically added and removed based on the number of incoming events
- Timeout default duration is 5 mins with max of 10
What are the benefits of the premium function hosting plan?
- auto scales based on demand using pre-warmed workers which run apps with no delay after being idle
- runs on more powerful instances and supports vnet connectivity
- timeout default duration is 30 mins with unlimited max
What are the benefits of the dedicated function hosting plan?
- run your functions with an app service plan at regular app service plan rates
- best for long-running scenarios where Durable Functions can't be used
- timeout default duration is 30 mins with unlimited max
What are the two other hosting options for functions?
- App Service Environment (ASE); an app service feature that provides a fully isolated and dedicated env for securely running app service apps at high scale
- Kubernetes; provides a fully isolated and dedicated env running on top of the Kubernetes platform
- both provide the highest level of control and isolation in which to run your function apps
How does scaling work on the consumption function hosting plan?
- event driven, scale out automatically even during periods of high load
- adds more instances of the functions host based on the number of incoming trigger events
- max instances are 200 on windows and 100 on linux per function app
How does scaling work on the premium function hosting plan?
- event driven and identical to the consumption plan
- max instances are 100 on windows and 20-100 on linux
How does scaling work on the dedicated function hosting plan?
- manual/autoscale
- 10-20 max instances
How does scaling work on the ASE and Kubernetes hosting plans?
- ASE; manual/autoscale with 100 max instances
- Kubernetes; event driven autoscale for Kubernetes clusters using KEDA with instances varying by cluster
Why does a function app require a storage account?
- requires a general-purpose storage account, which supports blob, queue, file and table storage
- functions rely on Azure Storage for operations such as managing triggers and logging function executions
- the same storage account used by your function app can also be used by your triggers and bindings to store your app data
- …however for storage-intensive operations you should use a separate storage account
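As a sketch of that "separate storage account" advice: the output binding below points at a second account through an assumed app setting named DataStorage (holding that account's connection string), while the trigger stays on the host's main AzureWebJobsStorage account. The queue and container names are made up for the example.

```python
# A sketch of keeping storage-intensive data in a separate storage account (Python v2 model).
import azure.functions as func

app = func.FunctionApp()

@app.queue_trigger(arg_name="msg",
                   queue_name="work-items",
                   connection="AzureWebJobsStorage")            # host's main storage account
@app.blob_output(arg_name="outblob",
                 path="processed/{queueTrigger}.txt",           # {queueTrigger} = the message text
                 connection="DataStorage")                      # assumed setting for the separate data account
def process(msg: func.QueueMessage, outblob: func.Out[str]) -> None:
    outblob.set(msg.get_body().decode())
```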
How do consumption and premium function hosting plans handle scaling?
- add more instances of function host
- in consumption each instance is limited to 1.5GB of memory and 1 CPU
- an instance is the entire function app, meaning all functions in the app share resources within an instance and scale out together
- function apps that share the same plan scale independently
- In premium plan the plan size determines the available memory and CPU for all apps in that plan on that instance
Where are function code files stored?
- on an Azure Files share in the function app's main storage account
- when you delete the main storage account of the function app, the function code files are deleted and can't be recovered
What is the scale controller?
- Functions use a scale controller to monitor the rate of events and determine whether to scale out or in
- uses heuristics for each trigger type
- e.g. when using a queue storage trigger, it scales based on queue length and the age of the oldest queue message
- the unit of scaling for functions is the function app
- when scaled out, more resources are allocated to run multiple instances of the functions host
What happens if the function app is idle for several mins?
- the platform may scale in the number of instances on which your app runs to zero
- the next request has the added latency of scaling from zero to one
- this latency is known as a cold start
What are the intricacies of scaling in Azure functions?
Max instances
- A single function app only scales out to a max of 200 instances
- a single instance may process more than one message or request at a time though, so there isn't a set limit on the number of concurrent executions
New instance rate
- for HTTP triggers new instances are allocated at most once per second
- for non HTTP triggers new instances are allocated at most every 30 seconds
How can we specify a lower max instance value for a function app?
- By modifying the ‘functionAppScaleLimit’ value
- can be set to 0 or null for unrestricted or a value between 1 and the app max