
AWS – Compute – Lambda

1. Lambda: A Coding Machine on Rent

  1. No Server Management: You only give the Code; AWS provides the Server.
  2. Event-Driven: The code runs Only when needed based on specific triggers.
  3. Pay-as-you-go: You pay only for the milliseconds the code actually runs.
  4. Simple Analogy: It is like an Electric Calling Bell. It doesn’t consume power all day—only when someone presses the button. Once they let go, consumption is zero.


Why go Serverless?

Before Lambda, we suffered from “Server Management Fatigue”.

  1. The Problem: Manual patching, high costs for idle servers, and scaling headaches during peak times like Diwali sales.
  2. The Solution: Lambda scales automatically. If 1 person calls it, 1 instance runs; if 1 million (10 lakh) people call it, AWS spins up instances to match, up to your account's concurrency limit.

2. Technical Architecture & Runtime

Lambda runs on Firecracker MicroVMs and follows a specific lifecycle:

  1. Download: AWS fetches your code from S3 or an internal repository.
  2. Environment Setup: A micro-container is created with your defined memory/CPU.
  3. Initialization (INIT): Runs code outside the main handler (like imports and DB connections).
  4. Invocation (RUN): The main handler function executes.
  5. Freeze/Shutdown: If idle, the environment is frozen; eventually, it is destroyed.
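The INIT/RUN split above can be sketched in Python: anything at module level runs once, when the environment is created (a cold start), while the handler body runs on every invocation. The `get_connection` helper below is a stand-in for a real expensive setup step, not an AWS API.

```python
import time

# --- INIT phase: runs once, when the environment is created ---
INIT_COUNT = 0

def get_connection():
    """Stand-in for an expensive setup step (e.g. a DB connection)."""
    global INIT_COUNT
    INIT_COUNT += 1
    return {"created_at": time.time()}

CONNECTION = get_connection()  # established during INIT, outside the handler

# --- RUN phase: executes on every invocation ---
def lambda_handler(event, context):
    # A frozen environment keeps module state, so CONNECTION is reused
    # on warm starts instead of being rebuilt each time.
    return {"init_count": INIT_COUNT, "reused": CONNECTION is not None}

# Two "warm" invocations reuse the same environment: INIT ran only once.
r1 = lambda_handler({}, None)
r2 = lambda_handler({}, None)
print(r1["init_count"], r2["init_count"])  # prints: 1 1
```

This is also why heavy imports and connection setup are placed outside the handler: you pay the cost once per environment, not once per invocation.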

The “Stateless” Golden Rule

  1. Ephemeral Nature: Lambda functions are short-lived; you cannot save a file locally and expect it to be there for the next run.
  2. Persistence: For saving data, always use external services like Amazon S3, DynamoDB, or Amazon EFS.
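A minimal sketch of the rule above: `/tmp` is fine for scratch work within one invocation, but anything that must survive goes to an external store. The S3 call is shown as a hedged comment, and `my-results-bucket` is a hypothetical name.

```python
import json
import os
import tempfile

def lambda_handler(event, context):
    # Scratch space: /tmp exists, but a later invocation may land in a
    # fresh environment where this file is gone.
    scratch = os.path.join(tempfile.gettempdir(), "working.json")
    with open(scratch, "w") as f:
        json.dump({"status": "processing"}, f)

    # Durable result: hand it to an external service instead, e.g.:
    # import boto3
    # boto3.client("s3").put_object(
    #     Bucket="my-results-bucket",          # hypothetical bucket name
    #     Key="results/output.json",
    #     Body=json.dumps({"status": "done"}),
    # )
    return {"scratch_file": scratch, "persisted_externally": False}

result = lambda_handler({}, None)
```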

Every function must have a Handler and run on a supported runtime.

  1. Supported Runtimes: AWS supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby.
  2. Custom Runtimes: You can bring your own language (like Rust or PHP) using the Runtime API.

3. Performance & Optimization

Cold Starts vs. Warm Starts

  1. Cold Start: A 1–5 second delay when AWS creates a new environment from scratch.
  2. Warm Start: Lightning-fast reuse of a recently active environment.
  3. Pro Solution: Use Provisioned Concurrency to keep environments “ready” for high-traffic apps.

Memory vs. CPU

  1. In Lambda, you only choose Memory (128 MB to 10 GB).
  2. AWS automatically increases CPU power as you increase memory. If your code is slow, increasing memory often provides a faster CPU.
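The memory-to-CPU relationship can be approximated from the documented point that 1,769 MB corresponds to one full vCPU, assuming a linear relationship across the range. This is a rough model for capacity planning, not an official AWS formula.

```python
def approx_vcpus(memory_mb: int) -> float:
    """Approximate vCPU share for a given memory setting.

    Based on the documented point that 1,769 MB ~= 1 full vCPU,
    assuming linear scaling across the 128 MB - 10,240 MB range.
    """
    if not 128 <= memory_mb <= 10240:
        raise ValueError("Lambda memory must be between 128 MB and 10,240 MB")
    return memory_mb / 1769

print(round(approx_vcpus(1769), 2))   # prints: 1.0
print(round(approx_vcpus(10240), 2))  # prints: 5.79
```

This is why "my code is CPU-bound and slow" is often fixed by raising the memory slider, even when the function never uses the extra RAM.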

4. Global Presence and Scaling

4.1 Lambda@Edge

  1. Global Computing: Run code at Edge locations globally in response to CloudFront events.
  2. DevSecOps Angle: Use it to block malicious bots or add security headers right at the edge before traffic hits your origin.

4.2 Concurrency & Throttling

  1. Reserved Concurrency: Reserve a portion of your account’s limit for a specific function to ensure critical security tasks always have room to run.
  2. Throttling (429 Error): If limits are exceeded, AWS returns a TooManyRequestsException.
  3. Monitoring: Set CloudWatch Alarms on the “Throttles” metric to catch blocked security automation immediately.
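When the 429 appears, callers typically retry with exponential backoff. A sketch of that pattern, using a fake invoker in place of the real `boto3` call and a plain Python exception standing in for `TooManyRequestsException`:

```python
import time

class TooManyRequestsException(Exception):
    """Stand-in for the 429 error Lambda returns when throttled."""

def invoke_with_backoff(invoke, max_retries=5, base_delay=0.01):
    """Retry a throttled invocation, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return invoke()
        except TooManyRequestsException:
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s...
    raise RuntimeError("still throttled after %d retries" % max_retries)

# Fake invoker: throttled twice, then succeeds.
calls = {"n": 0}
def flaky_invoke():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TooManyRequestsException()
    return {"statusCode": 200}

result = invoke_with_backoff(flaky_invoke)
```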

5. Monitoring and Observability

Since you cannot “SSH” into a Lambda, use these tools:

  1. CloudWatch Logs: All print() statements and errors go here.
  2. CloudWatch Metrics: Track Invocations, Errors, and Throttles.
  3. AWS X-Ray: Provides a “Map” of your execution to find exactly where delays (DB, API, etc.) are happening.
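Since CloudWatch captures anything written to stdout, emitting one JSON object per line makes those logs queryable with CloudWatch Logs Insights. A minimal sketch; the field names here are illustrative, not a required schema:

```python
import json
import time

def log(level, message, **fields):
    """Emit one JSON log line; CloudWatch stores each stdout line as a log event."""
    entry = {"ts": time.time(), "level": level, "msg": message, **fields}
    print(json.dumps(entry))
    return entry

entry = log("INFO", "order processed", order_id="A-123", duration_ms=42)
```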

Real-World Challenges & Guru Solutions

| Challenge | Why it happens? | Guru Solution |
| --- | --- | --- |
| 15-Minute Timeout | Hard limit of 900 seconds. | Use AWS Step Functions to break tasks into pieces. |
| DB Connection Exhaustion | Each Lambda opens a new connection; RDS can crash. | Use RDS Proxy to pool connections. |
| Vendor Lock-in | Code is too tied to AWS SDKs. | Use Hexagonal Architecture to separate logic from SDKs. |
| Large Deployment Packages | ZIP files are limited to 50 MB (uploaded) or 250 MB (unzipped). | Use Lambda Container Images (Docker) for up to 10 GB. |

6. Deep Dive

Every Lambda function must have a Handler.

import json
import boto3 # AWS SDK; module-level imports run once during INIT and are reused on warm starts

def lambda_handler(event, context):
    # 1. 'event' = The Input. (e.g., Who uploaded the file? What is the filename?)
    # 2. 'context' = The Metadata. (e.g., How much time is left before timeout?)
    
    print("Logic starts here...")
    
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from DevSecOps Guru!')
    }

7. Lambda Limits

7.1. Fundamental Execution Limits

These boundaries define what a single “invocation” or run can do:

  1. Timeout (Hard Limit): Code can run for a maximum of 15 minutes (900 seconds). If your task (like video processing) takes 16 minutes, AWS will terminate it immediately.
  2. Memory Allocation: You can assign between 128 MB and 10,240 MB (10 GB).
  3. CPU Power: You do not choose CPU directly; AWS allocates CPU power proportional to memory. At certain memory thresholds (like 1,769 MB), you get the equivalent of one full vCPU.
  4. Ephemeral Storage (/tmp): You get between 512 MB and 10 GB of local scratch space.
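The 15-minute hard limit is why long loops should check `context.get_remaining_time_in_millis()` (a real method on the Lambda context object) and stop cleanly before AWS kills the run. A sketch with a stub context so it can be exercised locally:

```python
import time

class StubContext:
    """Local stand-in for the real Lambda context object."""
    def __init__(self, budget_ms):
        self.deadline = time.time() + budget_ms / 1000.0

    def get_remaining_time_in_millis(self):
        return max(0, int((self.deadline - time.time()) * 1000))

def lambda_handler(event, context, safety_margin_ms=500):
    processed = []
    for item in event["items"]:
        # Bail out before the hard kill so progress can be checkpointed.
        if context.get_remaining_time_in_millis() < safety_margin_ms:
            break
        processed.append(item)
    return {"processed": processed, "done": len(processed) == len(event["items"])}

result = lambda_handler({"items": [1, 2, 3]}, StubContext(budget_ms=5000))
```

In production the unfinished items would be written to a queue or passed to a Step Functions state rather than silently dropped.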

7.2. Networking & Payload Limitations

Crucial for designing APIs and secure microservices:

  1. Invocation Payload Size:
    • Synchronous: Maximum 6 MB for request and response.
    • Asynchronous: Maximum 256 KB.
  2. VPC Connectivity: While Lambda can be placed in a VPC, it does not have a public IP by default. To reach the internet, it must route through a NAT Gateway.
  3. Database Connections: Lambda is stateless and “scales out” fast. Without a tool like RDS Proxy, 1,000 simultaneous Lambda runs can easily crash a traditional database by exhausting its connection limit.
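The 6 MB synchronous payload cap leads to the "claim check" pattern: large payloads are parked in S3 and only the pointer travels through Lambda. A size-guard sketch; the S3 upload is left as a hedged comment and `my-staging-bucket` is a hypothetical name:

```python
SYNC_PAYLOAD_LIMIT = 6 * 1024 * 1024  # 6 MB synchronous invocation cap

def build_payload(data: bytes):
    """Return an inline payload if it fits, else a pointer to S3."""
    if len(data) <= SYNC_PAYLOAD_LIMIT:
        return {"inline": True, "size": len(data)}
    # Too big for a direct invoke: park it in S3 and pass the key instead, e.g.:
    # import boto3
    # boto3.client("s3").put_object(
    #     Bucket="my-staging-bucket",  # hypothetical bucket name
    #     Key="payloads/big.bin",
    #     Body=data,
    # )
    return {"inline": False, "s3_key": "payloads/big.bin"}

small = build_payload(b"x" * 1024)
big = build_payload(b"x" * (7 * 1024 * 1024))
```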

7.3. Scaling & Concurrency (Soft Limits)

These protect your AWS account and other customers from sudden traffic spikes:

  1. Account Concurrency: The default limit is 1,000 concurrent executions per region across your entire account. This can be increased by a support request.
  2. Burst Concurrency: Scaling is not instantaneous. Depending on the region, Lambda allows an initial burst (historically 500 to 3,000 instances), after which it scales by roughly 500 additional instances per minute.
  3. Throttling: If you exceed your concurrency limit, you will see a 429 “Too Many Requests” error.

7.4. Deployment & Package Quotas

These define the physical size of the code you upload:

  1. Deployment Package Size:
    • Zipped: Maximum 50 MB.
    • Unzipped (Extracted): Maximum 250 MB.
    • Container Images: Maximum 10 GB (Use Docker images for large ML libraries or dependencies).
  2. Console Editor: You can only edit code directly in the AWS browser console if the package is under 3 MB.
  3. Layers: You can add a maximum of 5 layers to a single function.

| Challenge | Impact on Security/DevOps | Solution |
| --- | --- | --- |
| 15-Min Limit | Failed security audits or long jobs. | Orchestrate with AWS Step Functions. |
| Payload Limit | Cannot pass large files via API. | Upload large files to S3 and pass the S3 link to the Lambda. |
| Concurrency Limit | "Noisy Neighbor" effect where one app steals all capacity. | Set Reserved Concurrency for critical security functions. |
| Package Size | Heavy libraries (Pandas, TensorFlow) won't fit. | Use Container Images (Docker) or Amazon EFS for storage. |