AWS Lambda Flashcards
What can trigger a Lambda function?
- A CloudWatch Logs subscription filter on a log group (one of many possible triggers; see the list of services below)
What are the Lambda invocation models?
- Synchronous (e.g. API Gateway calls the function and waits for the response)
- Asynchronous (e.g. S3 or SNS events that Lambda queues and processes later)
- Stream/poll-based (Lambda polls Kinesis or DynamoDB Streams and invokes the function with batches of records)
What is the context object?
It provides methods and properties about the invocation and the runtime environment: the function name, version, ARN, memory limit, AWS request ID, log group/stream names, and the remaining execution time.
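A minimal Python sketch of reading these properties (the attribute names shown are the Python runtime's; Node.js exposes the same data as context.functionName, context.awsRequestId, and so on):

```python
# Minimal sketch of a Python handler reading common context properties.
def handler(event, context):
    print(context.function_name)                  # name of the function
    print(context.function_version)               # published version or $LATEST
    print(context.invoked_function_arn)           # full ARN of this invocation
    print(context.memory_limit_in_mb)             # configured memory limit
    print(context.aws_request_id)                 # unique ID for this request
    print(context.log_group_name)                 # CloudWatch Logs group
    print(context.log_stream_name)                # CloudWatch Logs stream
    print(context.get_remaining_time_in_millis()) # ms left before timeout
    return {"ok": True}
```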
I'm in a Lambda function; how can I get the AWS request ID for tracing?
You can get the request ID from the context object (context.awsRequestId in Node.js, context.aws_request_id in Python).
What is context.logGroupName?
Each Lambda function writes its logs to a CloudWatch Logs log group, and each invocation writes into a log stream within that group; context.logGroupName and context.logStreamName expose those names.
How can I find out the time remaining for my Lambda function?
Use context.getRemainingTimeInMillis() (get_remaining_time_in_millis() in Python). It returns the number of milliseconds the function has left to execute before it hits its timeout and AWS reclaims the execution environment.
How can I set the CPU size?
You do not set CPU directly; CPU is allocated in proportion to memory. For example, doubling the memory from 128 MB to 256 MB roughly doubles the CPU available to the function.
What is the event object?
The caller populates the event object with the data for the invocation; its shape depends on the event source, and the function reads its input from it (see the sketch below).
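For example, a hypothetical handler for an S3 "object created" trigger reads the bucket and key out of the event (a sketch; the exact shape depends on the event source):

```python
# Sketch: for an S3 notification, the bucket and key arrive inside event["Records"].
def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"new object s3://{bucket}/{key}")
```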
What services can call Lambda?
API Gateway (when a request is received)
S3 (when an object is created)
CloudWatch Events / EventBridge (when a rule matches or on a schedule)
SNS (when a message is published to the topic)
DynamoDB Streams (when an item changes; you can get the old and new images)
CloudFront (Lambda@Edge events)
Kinesis Data Streams (records are polled and processed in batches)
API Gateway (Lambda authorizers for request authorization)
What is the Lambda execution role?
This is the IAM role the function assumes at runtime to access other AWS services and resources (for example, writing logs to CloudWatch or reading from S3).
What does Lambda proxy integration do?
API Gateway forwards the entire request to the Lambda function (method, path, headers, query string, body), not just the body, and in return it expects a complete HTTP-style response (status code, headers, body). See the sketch below.
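A minimal Python sketch of a handler behind a proxy integration (the field names come from the API Gateway proxy event format; the greeting logic is illustrative):

```python
import json

# Sketch: with Lambda proxy integration, the full request arrives in `event`
# and API Gateway expects a complete HTTP-style response object back.
def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```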
Imagine your image processing function relies on a specific image library. What are some strategies for including this library in your Lambda deployment package? And what are the trade-offs of each approach?
You could include the library directly in your deployment package, but that can make it quite large. A better approach might be to use Lambda layers, which lets you package dependencies separately and share them across functions. This keeps your deployment package small and makes updates easier.
Explain in detail what Lambda layers are.
Lambda layers are archives of code and dependencies that you can attach to your Lambda functions. They let you package shared libraries separately, so you don't have to include them in every function's deployment package. This makes your functions smaller, faster to deploy, and easier to manage.
Explain in detail how Lambda layers are achieved.
You create a zip archive containing the code and dependencies for your layer. Then you upload it to Lambda and specify the layer’s runtime compatibility. Once created, you can attach the layer to your Lambda functions. When a function is invoked, Lambda automatically extracts the layer’s contents into the execution environment.
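A sketch of publishing a layer with boto3 (the zip file and layer name here are hypothetical; for Python dependencies, packages must sit under a top-level python/ directory so they land on the import path at /opt/python):

```python
import boto3

# Sketch: publish a layer version from a local zip archive, then attach the
# returned ARN to functions via their Layers setting.
lam = boto3.client("lambda")

with open("my-deps-layer.zip", "rb") as f:          # hypothetical zip of dependencies
    resp = lam.publish_layer_version(
        LayerName="image-processing-deps",          # hypothetical layer name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.12"],
    )

print(resp["LayerVersionArn"])  # ARN to reference when attaching the layer
```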
Imagine you have a stream of real-time data coming into Kinesis Data Streams, and you need to process each record as it arrives. How would you design this architecture using Lambda, and what are the benefits of this approach?
You can configure Lambda with a Kinesis event source mapping so it polls the stream and invokes your function whenever new records arrive. This serverless approach scales automatically with the stream's throughput, and you only pay for the compute time used to process the data.
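A sketch of wiring this up with boto3 (the stream ARN and function name are hypothetical):

```python
import boto3

# Sketch: create an event source mapping so Lambda polls the Kinesis stream
# and invokes the function with batches of records.
lam = boto3.client("lambda")

lam.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/orders",  # hypothetical stream
    FunctionName="process-orders",                                          # hypothetical function
    StartingPosition="LATEST",
    BatchSize=100,
)
```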
Imagine a Lambda function that processes incoming orders, and suddenly you experience a massive spike in order volume. How does Lambda handle this increased load? And what mechanisms can you use to control the function's concurrency to prevent overloading downstream systems?
Lambda automatically scales by provisioning multiple instances of your function to handle concurrent requests. To control concurrency and prevent overloading downstream systems, you can use reserved concurrency to set a limit on the number of concurrent executions for a function.
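A sketch of setting reserved concurrency with boto3 (function name and limit are illustrative):

```python
import boto3

# Sketch: cap a function's concurrency so a traffic spike cannot overwhelm
# downstream systems (extra invocations are throttled instead of running).
lam = boto3.client("lambda")

lam.put_function_concurrency(
    FunctionName="process-orders",       # hypothetical function name
    ReservedConcurrentExecutions=50,     # at most 50 instances run at once
)
```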
One key feature is its ability to handle asynchronous invocations. Imagine you have a Lambda function that sends welcome emails to new users. How would you configure this function for asynchronous invocation? And what are the benefits compared to synchronous invocation?
You configure it by setting the invocation type to Event in the Lambda API, or via the AWS CLI or SDKs. Asynchronous invocation means Lambda queues the event for processing and you get an immediate 202 response. This is great for things like emails where you don't need an immediate response, and it prevents delays in your main application flow.
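A sketch of an asynchronous invocation with boto3 (the function name and payload are hypothetical):

```python
import json
import boto3

# Sketch: invoke a function asynchronously. Lambda queues the event and
# returns HTTP 202 immediately; the function runs in the background.
lam = boto3.client("lambda")

resp = lam.invoke(
    FunctionName="send-welcome-email",                  # hypothetical function
    InvocationType="Event",                             # asynchronous invocation
    Payload=json.dumps({"email": "user@example.com"}),
)
print(resp["StatusCode"])  # 202 for accepted asynchronous invocations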
What are some potential use cases for this invocation record feature? And how can it help you build more resilient and fault-tolerant applications?
You could use it for things like auditing, logging, or even retrying failed invocations. It helps you track what’s happening with your asynchronous tasks and make sure nothing gets lost.
What is asynchronous invocation?
When you invoke a function asynchronously, Lambda queues the event and runs the function later; the call returns immediately with a 202 status. You can configure destinations (for example SNS, SQS, EventBridge, or another Lambda function) to receive a record once the invocation succeeds or finally fails.
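A sketch of configuring success/failure destinations with boto3 (the ARNs and function name are hypothetical):

```python
import boto3

# Sketch: configure destinations for asynchronous invocations so another
# service is notified when the function finishes or finally fails.
lam = boto3.client("lambda")

lam.put_function_event_invoke_config(
    FunctionName="send-welcome-email",  # hypothetical function
    DestinationConfig={
        "OnSuccess": {"Destination": "arn:aws:sns:us-east-1:123456789012:email-sent"},      # hypothetical SNS topic
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:email-failures"},  # hypothetical SQS queue
    },
)
```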
Let’s talk about Lambda’s environment variables. Imagine you have a Lambda function that connects to a database. How would you securely manage the database credentials? What are the benefits of using environment variables in this scenario?
You'd store the credentials securely in AWS Secrets Manager and put only the secret's name or ARN in an environment variable; the function reads the variable and fetches the secret at runtime. This keeps your credentials out of your code and lets you rotate them without redeploying your function.
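A sketch of this pattern (the DB_SECRET_NAME environment variable and the secret's fields are hypothetical):

```python
import os
import json
import boto3

# Sketch: keep only the secret's name in an environment variable and fetch
# the actual credentials from Secrets Manager at runtime.
secrets = boto3.client("secretsmanager")

def handler(event, context):
    secret_id = os.environ["DB_SECRET_NAME"]  # hypothetical env var set on the function
    secret = json.loads(
        secrets.get_secret_value(SecretId=secret_id)["SecretString"]
    )
    # use secret["username"] / secret["password"] to open the DB connection
    return {"connected": True}
```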
Imagine your Lambda function fails due to a temporary network issue. How does Lambda handle this? And what mechanisms can you use to customize the retry behavior?
For asynchronous invocations, Lambda automatically retries the function up to two times on error. You can customize this with the maximum retry attempts setting in the function's event invoke configuration, and you can use a dead-letter queue (or an on-failure destination) to capture events that still fail after all retries.
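A sketch of tuning the async retry behavior with boto3 (the function name and values are illustrative):

```python
import boto3

# Sketch: reduce retry attempts for asynchronous invocations
# (the default is 2 retries after the initial attempt).
lam = boto3.client("lambda")

lam.put_function_event_invoke_config(
    FunctionName="process-orders",   # hypothetical function
    MaximumRetryAttempts=1,          # allowed values: 0, 1, or 2
    MaximumEventAgeInSeconds=3600,   # discard events older than an hour
)
```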
What are some common use cases for dead-letter queues with Lambda, and how can they help you build more resilient and fault-tolerant applications?
You can use DLQs for things like debugging failed invocations, replaying events, or setting up alerts for when failures happen. They help you make sure you don’t lose important data and can keep your application running smoothly even when things go wrong.
What are the steps involved in configuring a dead-letter queue for a Lambda function, and what IAM permissions are required?
First, you’ll need to create an SQS queue or SNS topic to act as your dead-letter queue (DLQ). Then, in your Lambda function’s configuration, you’ll specify the ARN of that queue or topic.
As for permissions, your Lambda function’s execution role needs permissions to send messages to the DLQ.
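A sketch of both steps with boto3 (the queue ARN, function name, and role name are hypothetical):

```python
import json
import boto3

lam = boto3.client("lambda")
iam = boto3.client("iam")

# Sketch: point the function's dead-letter queue at an SQS queue...
lam.update_function_configuration(
    FunctionName="send-welcome-email",  # hypothetical function
    DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:email-dlq"},  # hypothetical queue
)

# ...and allow the execution role to send messages to that queue.
iam.put_role_policy(
    RoleName="send-welcome-email-role",  # hypothetical execution role
    PolicyName="allow-dlq-send",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:123456789012:email-dlq",
        }],
    }),
)
```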
Imagine you have a Lambda function that needs to access resources within your private VPC, like a database or an internal API. How would you configure your Lambda function to do this, and what are the security implications?
You’d configure your Lambda function to connect to your VPC by specifying the VPC ID, subnets, and security groups in the function’s configuration settings. This allows your function to access resources within the VPC, but it also means your function will have network access to everything within the VPC, so you’ll need to be mindful of your security group rules.
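A sketch of the VPC configuration with boto3 (the function name, subnet IDs, and security group ID are hypothetical):

```python
import boto3

# Sketch: attach the function to private subnets and security groups so it
# can reach resources inside the VPC.
lam = boto3.client("lambda")

lam.update_function_configuration(
    FunctionName="orders-db-writer",  # hypothetical function
    VpcConfig={
        "SubnetIds": ["subnet-0abc123", "subnet-0def456"],  # hypothetical private subnets
        "SecurityGroupIds": ["sg-0123456789abcdef0"],       # hypothetical security group
    },
)
```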