- Free tier: 1M requests/month.
- Cold starts: 100-500ms typical, worse with big packages.
- 15-min max timeout.
- No persistent connections.
- Python + Node best supported.
💡 Where Lambda fell apart for me: I tried using it for a PDF rendering pipeline that needed Puppeteer. The 250MB deployment limit killed the idea before I even got a prototype running. Chromium alone exceeds that. Ended up on Fargate for that one job while everything else stayed on Lambda.
Most of my backend runs on Lambda now. Once you stop thinking about servers entirely, you start noticing how many tasks are just "wait for something, do a thing, stop." That describes 90% of real work.
File lands in S3? Resize it. Row appears in DynamoDB? Notify somebody. API call comes in through API Gateway? Return JSON. Scheduled job needs to run at 2 AM? EventBridge triggers it. Each one is its own isolated function. Each one scales to zero when nobody is using it. Each one costs nothing when idle.
The 15-minute execution limit is the main wall. If your task genuinely needs to run longer than that, Lambda is the wrong tool. But I've found that constraint forces better architecture -- you break big jobs into smaller chunks, fan them out with SQS, and the system ends up more resilient than the monolith it replaced.
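That fan-out pattern can be sketched in a few lines of boto3. The queue URL and batch size below are placeholders, and the lazy import keeps the chunking logic testable without AWS credentials:

```python
import json

def chunk(items, size):
    """Split a long job into batches small enough to finish in one invocation."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def fan_out(items, queue_url, batch_size=100):
    # Each SQS message becomes one short Lambda invocation downstream.
    # boto3 is imported lazily so chunk() above runs anywhere.
    import boto3
    sqs = boto3.client('sqs')
    for batch in chunk(items, batch_size):
        sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(batch))
```

Each batch lands on the queue, SQS invokes the worker Lambda per message with built-in retry, and no single invocation ever approaches the 15-minute wall.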
Your First Function (Console)
Fastest way to get something running. Skip the CLI for now and go straight to the console:
- Go to the Lambda Console
- Click Create function
- Select Author from scratch
- Name it something like `my-first-function`
- Runtime: Python 3.12 (or Node.js if you prefer)
- Leave everything else at defaults
- Click Create function
You'll see a code editor with a starter function:
```python
import json

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
```
Bare minimum. Two arguments come in, one response goes out. Here's what each piece does:
- lambda_handler — The entry point. AWS calls this function name by default. You can rename it in the runtime settings if you want, but there's no reason to.
- event — Whatever triggered the function sends its data here. HTTP body, S3 file metadata, SQS message -- it all arrives as a dict.
- context — Runtime info. Mostly useful for checking how much execution time is left via `context.get_remaining_time_in_millis()`.
- return — Goes back to the caller. For API Gateway integrations, you need statusCode and body. For internal triggers, the shape doesn't matter much.
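To make the context object concrete, here's a sketch of a handler that checks its remaining time budget before starting work. The 5-second threshold is my own arbitrary cutoff, not anything AWS prescribes:

```python
import json

def lambda_handler(event, context):
    # context exposes how many milliseconds remain before Lambda kills
    # this invocation; bail out cleanly instead of dying mid-task
    if context.get_remaining_time_in_millis() < 5000:
        return {'statusCode': 503, 'body': json.dumps('not enough time left')}
    name = event.get('name', 'world')
    return {'statusCode': 200, 'body': json.dumps(f'hello, {name}')}
```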
Test It
- Click Test
- Create a new test event, name it `test-event`
- Leave the default JSON
- Click Save, then Test again
Green banner, "Execution result: succeeded." If you see this, your function is live:
```json
{
  "statusCode": 200,
  "body": "\"Hello from Lambda!\""
}
```
A Real Example: File Upload Processor
This is closer to what you'd actually deploy. A file hits S3, Lambda wakes up, logs what arrived:
```python
import json
import urllib.parse

def lambda_handler(event, context):
    # Get bucket name and file key from the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(
        event['Records'][0]['s3']['object']['key'],
        encoding='utf-8'
    )
    size = event['Records'][0]['s3']['object']['size']

    print("New file uploaded!")
    print(f"Bucket: {bucket}")
    print(f"File: {key}")
    print(f"Size: {size} bytes")

    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': 'File processed',
            'file': key,
            'bucket': bucket
        })
    }
```
💡 Lambda timing out? Three things to check in order: timeout setting (default is 3 seconds, which is absurdly low for anything calling an external API), VPC configuration (Lambda in a VPC needs a NAT gateway for internet access), and whether you have a connection hanging open waiting for a response that's never coming.
Connect It to S3
- In your Lambda function, go to Configuration → Triggers
- Click Add trigger
- Select S3
- Choose your bucket
- Event type: All object create events
- Click Add
Drop any file into that S3 bucket. Lambda picks it up within a second or two. No polling, no daemon, no cron. It just fires.
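You don't even need to upload a file to exercise the handler logic. S3 delivers a well-known event shape, so you can feed it a trimmed fake locally. The event below is my own minimal stand-in containing only the fields the handler reads:

```python
import urllib.parse

# Trimmed fake of an S3 put event -- only the fields the handler reads
fake_event = {
    'Records': [{
        's3': {
            'bucket': {'name': 'my-test-bucket'},
            'object': {'key': 'uploads/my%20file.txt', 'size': 1024}
        }
    }]
}

record = fake_event['Records'][0]['s3']
bucket = record['bucket']['name']
# S3 URL-encodes keys, so spaces arrive as %20 -- always unquote
key = urllib.parse.unquote_plus(record['object']['key'], encoding='utf-8')
print(bucket, key)
```

Paste this into a local script or the console test dialog and you can iterate on the parsing without touching S3.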
Check the Logs
Every print() in your Lambda ends up in CloudWatch Logs automatically. No logging library needed:
- Go to CloudWatch → Log groups
- Find `/aws/lambda/my-first-function`
- Click into the latest log stream
Your print output shows up alongside execution metadata -- duration in ms, memory consumed, billed duration. This is how you debug Lambda. Not SSH, not Docker logs. CloudWatch.
Using Environment Variables
Hardcoded API keys in Lambda source code are something I see in every "serverless tutorial" repo on GitHub. Don't do it. Environment variables exist:
- In your Lambda, go to Configuration → Environment variables
- Click Edit, then Add environment variable
- Add key `API_KEY` with your value
- Save
Then read it in your handler like any other env var:
Python:

```python
import os

api_key = os.environ['API_KEY']
```

Or in Node.js:

```javascript
const apiKey = process.env.API_KEY;
```
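One caveat: `os.environ['API_KEY']` raises a bare `KeyError` if the variable was never configured, which makes for a confusing CloudWatch stack trace. A small helper I tend to add (my own pattern, not anything Lambda-specific) fails fast with a readable message:

```python
import os

def required_env(name):
    """Fetch an env var, failing with a clear error instead of a bare KeyError."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} environment variable is not set")
    return value
```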
Adding Dependencies
Lambda comes with boto3 and a handful of standard libraries. Anything beyond that, you have two paths:
Option 1: Lambda Layers
Layers let you share packages across multiple functions. AWS maintains a few popular ones so you don't have to build them yourself:
- In your Lambda, go to Layers
- Click Add a layer
- Choose AWS layers
- Select something like `AWSSDKPandas-Python312`
That's it. Pandas is available the next time your function runs. No pip install in your build step, no bloated zip file.
Option 2: Deployment Package
When the library you need isn't in a pre-built layer, you bundle everything into a zip:
```bash
# Create a directory
mkdir my-lambda-package
cd my-lambda-package

# Install dependencies into the directory
pip install requests -t .

# Add your code
cat > lambda_function.py <<'EOF'
import requests

def lambda_handler(event, context):
    response = requests.get("https://api.ipify.org")
    return {"ip": response.text}
EOF

# Zip it
zip -r ../deployment-package.zip .

# Upload the zip to Lambda
```
Scheduled Execution (Like Cron)
This is the use case that sold me. EventBridge replaces every cron server you've ever maintained:
- In your Lambda, go to Configuration → Triggers
- Click Add trigger
- Select EventBridge (CloudWatch Events)
- Create a new rule
- Rule type: Schedule expression
- Enter a cron expression or rate: `rate(1 hour)`, or `cron(0 9 * * ? *)` for 9 AM daily
I use this for nightly database cleanup, daily CSV exports, and an uptime checker that pings four endpoints every five minutes. All free tier. All zero maintenance since deployment.
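A nightly cleanup of that sort can be sketched as below. The bucket name and 30-day retention window are my own placeholders, and boto3 is imported lazily so the date logic stays testable without AWS credentials:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumption: keep a month of exports

def expired(last_modified, now=None):
    """True if an object is older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return last_modified < now - timedelta(days=RETENTION_DAYS)

def lambda_handler(event, context):
    # EventBridge invokes this on a schedule; list and delete stale objects
    import boto3
    s3 = boto3.client('s3')
    deleted = 0
    for page in s3.get_paginator('list_objects_v2').paginate(Bucket='my-exports-bucket'):
        for obj in page.get('Contents', []):
            if expired(obj['LastModified']):
                s3.delete_object(Bucket='my-exports-bucket', Key=obj['Key'])
                deleted += 1
    return {'deleted': deleted}
```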
Permissions (IAM Roles)
Every Lambda runs under an IAM role. Out of the box, it can write to CloudWatch Logs and nothing else. You have to explicitly grant access to every other AWS service.
Say your function needs to read from S3:
- Go to Configuration → Permissions
- Click on the role name (opens IAM console)
- Click Add permissions → Attach policies
- Search for `AmazonS3ReadOnlyAccess` (or something more specific)
- Attach it
Your function can now read from S3. Resist the urge to attach AdministratorAccess and "fix it later." That never happens, and you end up with a Lambda that can delete your entire AWS account.
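If you want tighter than the managed read-only policy, an inline policy scoped to a single bucket is a few lines of JSON. The bucket name here is a placeholder for your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-upload-bucket/*"
    }
  ]
}
```

Now the function can read objects from exactly one bucket and nothing else, which is usually all it actually needs.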
Memory and Timeout
Two knobs that control everything:
- Memory: Range is 128MB to 10GB. The hidden trick -- CPU allocation scales linearly with memory. At 1769MB you get a full vCPU. I set most functions to 256MB minimum because 128MB makes even simple Python imports sluggish.
- Timeout: 3 seconds default, 15 minutes max. The default is a trap. Any function that calls an external API or reads from S3 will occasionally exceed 3 seconds and die silently.
Both live under Configuration → General configuration.
I set every new function to 256MB memory and 30-second timeout on creation, then tune down if the CloudWatch metrics show I'm overprovisioned. Starting low and debugging timeout failures is a worse use of time.
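Applying that baseline by hand gets old; boto3's `update_function_configuration` call sets both knobs in one shot. The 256MB/30s values are my defaults from above, not AWS recommendations:

```python
# Baseline I apply to every new function (see discussion above)
BASELINE = {'MemorySize': 256, 'Timeout': 30}

def apply_baseline(function_name):
    # boto3 imported lazily so the constants stay importable without AWS
    import boto3
    boto3.client('lambda').update_function_configuration(
        FunctionName=function_name, **BASELINE)
```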
Cold Starts
The internet makes cold starts sound like a dealbreaker. They're not, for most use cases. My Python Lambdas with requests and boto3 cold-start in about 400-600ms. Background jobs, scheduled tasks, webhook processors -- none of them care about half a second of initialization. The only place it stings is user-facing API endpoints where someone is staring at a loading spinner.
If cold starts actually matter in your case:
- Provisioned Concurrency -- keeps N instances warm at all times. Costs money, but it's the only real fix. I use it for exactly one production API.
- Smaller deployment packages -- every MB you strip off the zip shaves milliseconds off init time. Drop unused dependencies aggressively.
- Go or Rust runtimes -- cold starts under 10ms. If latency is your primary concern and you can write Go, this solves it permanently.
- Scheduled warm-up pings -- CloudWatch rule triggers your function every 5 minutes with a dummy event. Hacky. Works. Free.
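If you go the warm-up route, teach the handler to recognize the dummy event so pings skip the real work. EventBridge scheduled events arrive with `source` set to `aws.events`, which makes the check a one-liner; the "real work" below is a stand-in:

```python
def lambda_handler(event, context):
    # EventBridge scheduled pings carry source "aws.events" -- return
    # immediately so warm-up invocations cost almost nothing
    if event.get('source') == 'aws.events':
        return {'statusCode': 200, 'body': 'warm'}

    # ... real work goes here ...
    return {'statusCode': 200, 'body': 'did real work'}
```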
Calling Lambda from Other AWS Services
The real power of Lambda isn't the compute. It's the wiring. Almost every AWS service can trigger a Lambda function natively, no glue code required:
- API Gateway -- REST and HTTP APIs without a server process
- S3 -- react to uploads, deletions, metadata changes
- DynamoDB Streams -- fire on every insert, update, or delete in a table
- SQS -- pull messages off a queue automatically, with built-in retry
- SNS -- fan-out from a single notification to multiple functions
- EventBridge -- scheduled rules, cross-account events, partner integrations
- Alexa -- voice skill backends (I've never used this, but it exists)
Six of my eight production Lambdas are triggered by either S3 events or EventBridge schedules. The other two sit behind API Gateway. I haven't logged into a server to check a cron job in over a year.
Costs
Two billing dimensions:
- Requests: First 1M per month free. After that, $0.20 per million. Twenty cents.
- Compute time: Billed per millisecond based on memory allocation. 400,000 GB-seconds free per month.
To put that in perspective: I run eight Lambda functions across two AWS accounts. Combined monthly Lambda bill over the last six months: $0.00. Every single month. The free tier is genuinely generous for small-to-medium workloads.
Lambda gets expensive in exactly one scenario -- sustained high-throughput traffic where the function runs millions of times per hour at consistent load. At that point, a dedicated EC2 instance or Fargate task wins on cost. But that crossover point is much higher than most people think.
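The back-of-envelope arithmetic is easy to script. The rates below are the published pay-as-you-go x86 prices at the time of writing (roughly $0.0000166667 per GB-second), so treat the numbers as illustrative:

```python
def monthly_cost(invocations, avg_ms, memory_mb,
                 free_requests=1_000_000, free_gb_s=400_000,
                 price_per_m_requests=0.20, price_per_gb_s=0.0000166667):
    """Estimate a month's Lambda bill after the free tier."""
    gb_s = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    request_cost = max(invocations - free_requests, 0) / 1_000_000 * price_per_m_requests
    compute_cost = max(gb_s - free_gb_s, 0) * price_per_gb_s
    return round(request_cost + compute_cost, 2)
```

Five million invocations a month at 200ms and 512MB works out to about $2.47; a typical hobby workload of 800k short invocations stays at $0.00.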
Local Development
Editing code in the AWS console works for the first hour. After that you'll want a real workflow:
- SAM CLI -- `sam local invoke` runs your function inside a Docker container that mimics the Lambda environment. Closest thing to production without deploying.
- LocalStack -- spins up fake versions of S3, DynamoDB, SQS on your machine. Good for integration tests where your function talks to other services.
- Just write testable code -- this is what I actually do. Keep the handler function paper-thin. All logic goes into regular Python modules that I test with pytest. The handler just unpacks the event, calls the logic, and returns the response.
```python
# lambda_function.py
from processor import process_file

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # This function can be unit tested
    result = process_file(bucket, key)
    return {'statusCode': 200, 'body': result}
```
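With the logic isolated like that, the test needs no AWS, no mocks, no Lambda runtime. Here's a sketch with a toy `process_file` standing in for the real module (mine is a placeholder; yours does actual work):

```python
# processor.py -- the logic lives here, with no Lambda types in sight
def process_file(bucket, key):
    # Hypothetical stand-in: real code would fetch and transform the object
    return f"processed s3://{bucket}/{key}"

# test_processor.py -- plain pytest, runs in milliseconds
def test_process_file_returns_summary():
    result = process_file("my-bucket", "uploads/report.csv")
    assert result == "processed s3://my-bucket/uploads/report.csv"
```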
A Complete Example: Contact Form Handler
This one runs in production on two of my sites right now. Static HTML form posts to API Gateway, Lambda picks it up, sends an email through SES:
```python
import json
import boto3
import os

def lambda_handler(event, context):
    # Parse the form data
    body = json.loads(event['body'])
    name = body.get('name', 'Anonymous')
    email = body.get('email', 'no-email')
    message = body.get('message', '')

    # Send via SES
    ses = boto3.client('ses')
    ses.send_email(
        Source=os.environ['FROM_EMAIL'],
        Destination={'ToAddresses': [os.environ['TO_EMAIL']]},
        Message={
            'Subject': {'Data': f'Contact form: {name}'},
            'Body': {'Text': {'Data': f'From: {name} ({email})\n\n{message}'}}
        }
    )

    return {
        'statusCode': 200,
        'headers': {
            'Access-Control-Allow-Origin': '*',  # CORS for frontend
            'Content-Type': 'application/json'
        },
        'body': json.dumps({'message': 'Email sent successfully'})
    }
```
Wire this up behind API Gateway and your static site has a working contact form. No Express server, no Flask app, no VPS. Just a function that runs when someone clicks "send."
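One thing the handler above skips for brevity: it trusts the payload. In production I'd validate before calling SES, both to reject junk and to avoid paying for spam sends. A minimal sketch (the 5,000-character cap is my own choice):

```python
def validate(body):
    """Return (ok, error) for a parsed contact-form payload."""
    if not body.get('message', '').strip():
        return False, 'message is required'
    if len(body.get('message', '')) > 5000:  # arbitrary spam/abuse cap
        return False, 'message too long'
    return True, None
```

Call it right after `json.loads` and return a 400 with the error string when `ok` is false.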
Use Lambda for:
- Cron jobs that run under 15 minutes
- Processing S3 uploads (thumbnails, metadata extraction, virus scanning)
- Webhook receivers from Stripe, GitHub, Slack, whatever
- Contact forms and other simple API endpoints behind API Gateway
- Glue logic between AWS services (S3 to DynamoDB, SQS to SNS, etc.)
- Data transformation pipelines triggered by database changes
- Scheduled reports, nightly cleanups, health checks
- Anything that sits idle 95% of the time and runs for seconds when triggered
Don't use Lambda for:
- Anything that needs to run longer than 15 minutes or maintain a persistent connection