The deployment checklist:
- SSH into your instance
- Install runtime dependencies
- Deploy your application code
- Set up a process manager
- Configure a reverse proxy
- Set up monitoring and log access
That's the full arc. By the end your application stays running after you close the terminal, restarts if it crashes, and sits behind Nginx on port 80 or 443.
The cleanup mistake:
Last March I spun up a t3.large to load-test a deployment pipeline. Ran my tests, closed the laptop, moved on. Forgot about it entirely. Thirty-one days later a billing alert email landed in my inbox: $73.41 for a single instance sitting there doing absolutely nothing. I hadn't set up a billing alarm for that test because it was a "quick test." The alert that finally caught it was a $50 AWS budget I'd configured for a different project in the same account. Without that, it would have kept running until I happened to check the console. Set a billing alarm before you deploy anything. $10 threshold. It takes 30 seconds and saves you from the surprise.
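If you'd rather do that from the CLI, here's a sketch. Two caveats: billing metrics only exist in us-east-1, and you have to enable "Receive Billing Alerts" in the account's billing preferences first. The alarm name and the SNS topic ARN are placeholders for ones you've created:
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name billing-over-10-usd \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 10 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts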
SSH In and Prepare the Server
If you set up an SSH config in the launch guide, ssh myserver gets you in. If not:
ssh -i ~/.ssh/my-aws-key.pem ubuntu@YOUR-PUBLIC-IP
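And if you skipped the config, an entry like this in ~/.ssh/config gets you the ssh myserver shorthand (alias, IP, and key path are yours to fill in):
Host myserver
    HostName YOUR-PUBLIC-IP
    User ubuntu
    IdentityFile ~/.ssh/my-aws-key.pem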
First thing, update packages. Do this every time you SSH into a fresh instance — stale package lists cause weird install failures later.
sudo apt update && sudo apt upgrade -y
Install Runtime Dependencies
This depends entirely on your stack. Here's what the commands look like for Node.js and Python, plus a note on Docker:
# Install Node.js 20.x via NodeSource
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
# Verify
node --version
npm --version
# Install Python 3 with pip and venv support
sudo apt install -y python3 python3-pip python3-venv
# Create a virtual environment for your project
python3 -m venv /home/ubuntu/myapp/venv
source /home/ubuntu/myapp/venv/bin/activate
If you're deploying with Docker, see the Docker installation guide and skip straight to the reverse proxy section. Docker handles process management itself, so half this article doesn't apply to you.
Get Your Code Onto the Server
Three ways to do this. Git clone is the one you want for anything beyond a throwaway test.
Option 1: Git Clone
Cleanest approach, and the one that makes future deploys painless:
sudo apt install -y git
cd /home/ubuntu
git clone https://github.com/youruser/yourapp.git
cd yourapp
npm install --production # or pip install -r requirements.txt, etc.
For private repos, either set up a deploy key (SSH key added to the repo settings) or use a personal access token. Deploy keys are better because they're scoped to one repo.
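As a sketch, a deploy key takes about a minute to set up; the key filename here is arbitrary:
# Generate a dedicated key on the server
ssh-keygen -t ed25519 -N "" -f ~/.ssh/myapp_deploy -C "myapp deploy key"
# Paste this into the repo's deploy-key settings (read-only access)
cat ~/.ssh/myapp_deploy.pub
# Clone using that key
GIT_SSH_COMMAND="ssh -i ~/.ssh/myapp_deploy" git clone git@github.com:youruser/yourapp.git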
Option 2: scp and rsync
Fine for quick one-offs. I'll be blunt though: if you're using scp for anything with a node_modules folder, stop. SCP re-copies every single file every time, and on a project with 40,000 files in node_modules you'll be watching your terminal scroll for five minutes. Use rsync — it only sends what changed, so the second deploy takes seconds instead of minutes.
# From your local machine (scp — full copy every time)
scp -i ~/.ssh/my-aws-key.pem -r ./myapp ubuntu@YOUR-PUBLIC-IP:/home/ubuntu/
# rsync — only transfers changed files (use this one)
rsync -avz -e "ssh -i ~/.ssh/my-aws-key.pem" ./myapp/ ubuntu@YOUR-PUBLIC-IP:/home/ubuntu/myapp/
Option 3: Pull a Build Artifact From S3
This is the CI/CD path — build your artifact in a pipeline, push it to S3, pull it on the instance:
# Install the AWS CLI if not already present (apt installs v1, which is fine for s3 cp)
sudo apt install -y awscli
# Pull your artifact
aws s3 cp s3://your-bucket/builds/myapp-latest.tar.gz /home/ubuntu/
mkdir -p /home/ubuntu/myapp
tar -xzf /home/ubuntu/myapp-latest.tar.gz -C /home/ubuntu/myapp
This needs an IAM role on the instance. Attach an instance profile with S3 read access. Do not paste AWS keys into a file on the server — that's how keys end up in a git commit six months from now.
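Wiring that up from the CLI looks roughly like this. It's a sketch: it assumes a role named myapp-s3-read already exists with an EC2 trust policy and S3 read permissions attached, and the instance ID is a placeholder:
# Wrap the role in an instance profile
aws iam create-instance-profile --instance-profile-name myapp-s3-read
aws iam add-role-to-instance-profile --instance-profile-name myapp-s3-read --role-name myapp-s3-read
# Attach the profile to the running instance
aws ec2 associate-iam-instance-profile \
  --instance-id i-0abc123def456789 \
  --iam-instance-profile Name=myapp-s3-read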
Keep It Running with a Process Manager
If you start your app with node server.js directly, it dies the moment you close your SSH session. You need something to keep it alive and restart it after crashes.
I used PM2 for about a year before switching to systemd for most things. PM2's cluster mode is genuinely nice, and the built-in log viewer is convenient, but it's one more thing that can break — and when PM2 itself crashes or gets into a weird state, you now have two problems. systemd is already on the box, it's battle-tested, and it restarts your process without any extra software. That said, PM2 is faster to set up and has a friendlier interface, so pick whichever you'll actually maintain.
Option 1: PM2
# Install PM2 globally
sudo npm install -g pm2
# Start your application
cd /home/ubuntu/myapp
pm2 start server.js --name myapp
# Save the process list so it survives reboot
pm2 save
# Set PM2 to start on boot
pm2 startup systemd
# It will print a command; copy and run that command
Useful PM2 commands for later:
pm2 list # See running processes
pm2 logs myapp # Tail the logs
pm2 restart myapp # Restart after a code update
pm2 monit # Real-time resource monitor
Option 2: systemd
Already on the server, no install needed. Create a service file:
sudo nano /etc/systemd/system/myapp.service
Paste this, adjusting the paths and command:
[Unit]
Description=My Application
After=network.target
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/myapp
ExecStart=/usr/bin/node server.js
Restart=on-failure
RestartSec=5
Environment=NODE_ENV=production
Environment=PORT=3000
[Install]
WantedBy=multi-user.target
Then enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable myapp
sudo systemctl start myapp
# Check it's running
sudo systemctl status myapp
For a Python app, change ExecStart to something like /home/ubuntu/myapp/venv/bin/gunicorn -w 4 -b 127.0.0.1:8000 app:app (install gunicorn into the venv first with /home/ubuntu/myapp/venv/bin/pip install gunicorn).
Put Nginx in Front
Your app listens on 3000 or 8000. Nobody should have to type :3000 in a URL. Nginx sits in front and proxies traffic from port 80 to your app.
sudo apt install -y nginx
Create a config at /etc/nginx/sites-available/myapp:
sudo nano /etc/nginx/sites-available/myapp
One thing that will save you hours: pay attention to whether your proxy_pass URL has a trailing slash. proxy_pass http://127.0.0.1:3000 and proxy_pass http://127.0.0.1:3000/ behave differently — the trailing slash replaces the matched location prefix in the forwarded URI. With location / the two happen to be equivalent, but with a longer prefix like /api/ they forward different paths. I once spent an entire afternoon debugging why my API routes were 404-ing through Nginx but worked fine when I curled the app directly. It was a trailing slash.
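To make that concrete, here's a hypothetical /api/ location both ways:
# With the trailing slash, the location prefix is stripped:
#   GET /api/users  ->  backend sees /users
location /api/ { proxy_pass http://127.0.0.1:3000/; }
# Without it, the URI is forwarded as-is:
#   GET /api/users  ->  backend sees /api/users
location /api/ { proxy_pass http://127.0.0.1:3000; }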
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
Enable it and reload Nginx:
# Enable the site
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
# Remove the default site (optional but clean)
sudo rm /etc/nginx/sites-enabled/default
# Test the config for syntax errors
sudo nginx -t
# Reload Nginx
sudo systemctl reload nginx
Hit your server's public IP in a browser. You should see your app.
HTTPS with Let's Encrypt
Two commands. Certbot handles everything — edits the Nginx config, sets up auto-renewal via a systemd timer. You run this once and forget about it. Two prerequisites: your domain's DNS must already point at this instance, and port 443 must be open in the security group.
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com
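To confirm renewal is actually wired up, both of these are safe to run:
# Is the renewal timer scheduled?
systemctl list-timers | grep certbot
# Simulate a renewal without touching the real certificate
sudo certbot renew --dry-run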
Logs and Monitoring
Your app is running. Now you need to know when it stops. For PM2: pm2 logs myapp --lines 100. For systemd: sudo journalctl -u myapp -f. The -f flag tails the log in real time — leave that open in a second terminal while you're testing.
Nginx keeps its own logs:
sudo tail -f /var/log/nginx/access.log
sudo tail -f /var/log/nginx/error.log
The Nginx error log is where you'll look first when something goes wrong. Get in the habit of checking it before anything else — it usually tells you exactly what happened.
Install htop (sudo apt install -y htop) so you can actually read resource usage. The default top is painful. And run df -h occasionally — a full disk is a surprisingly common way for apps to silently fail, especially if you're writing logs to disk without rotation.
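If your app writes its own log files, a minimal logrotate rule keeps that disk from filling. A sketch, assuming a hypothetical /home/ubuntu/myapp/logs/ directory; save it as /etc/logrotate.d/myapp:
/home/ubuntu/myapp/logs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}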
If this is running in production, set up a CloudWatch alarm on CPU. The free tier gives you 10 alarms. "Alert me if CPU exceeds 80% for 5 minutes" takes 30 seconds to create and will catch a runaway process before your users start complaining.
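The console is fine for this, but if you prefer the CLI, here's a sketch; the instance ID and SNS topic ARN are placeholders:
aws cloudwatch put-metric-alarm \
  --alarm-name myapp-cpu-high \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0abc123def456789 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts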
Deploying Updates
After the initial setup, updating is just a pull and restart. Git-based:
cd /home/ubuntu/myapp
git pull origin main
npm install --production # if dependencies changed
pm2 restart myapp # or: sudo systemctl restart myapp
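Once you've done this twice, script it. A minimal sketch; paths and names match the examples above:
#!/usr/bin/env bash
# deploy.sh -- pull, install, restart; stops on the first error
set -euo pipefail
cd /home/ubuntu/myapp
git pull origin main
npm install --production
pm2 restart myapp   # or: sudo systemctl restart myapp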
Rsync-based (run from your local machine):
rsync -avz -e "ssh -i ~/.ssh/my-aws-key.pem" ./myapp/ ubuntu@YOUR-PUBLIC-IP:/home/ubuntu/myapp/
ssh myserver "pm2 restart myapp"
The restart causes a few hundred milliseconds of downtime. For a single-server setup, that's fine. If you need zero-downtime, PM2's pm2 reload does a graceful, zero-downtime restart in cluster mode, but honestly on a single instance it's rarely worth the added complexity.
Things That Will Go Wrong
App works on localhost:3000 but not from outside
Almost always the security group. Go to EC2 Dashboard, find your instance's security group, and check that port 80 is open to inbound traffic. If that's fine, check Nginx is running (sudo systemctl status nginx), then check your app is actually listening (curl http://127.0.0.1:3000 from the server). If you enabled UFW at some point, make sure port 80 is allowed there too.
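A quick triage sequence for the on-server checks, in order:
# Is Nginx up?
sudo systemctl status nginx --no-pager
# Is the app answering locally? (expect a 200)
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:3000
# Is UFW in the way?
sudo ufw status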
502 Bad Gateway
This one will happen to you at least once. Nginx is working, but the app behind it isn't responding. Run pm2 list or sudo systemctl status myapp to see if the app is even alive, then confirm it's listening on the right port:
# Quick check: is anything listening on port 3000?
sudo ss -tlnp | grep 3000
The weirdest 502 I've hit: the app worked perfectly when I curled it over SSH, but Nginx kept returning 502. Spent an hour checking configs. Turned out the app was binding to 127.0.0.1:3000 while my Nginx proxy_pass pointed at localhost — which should have worked, and did work from the command line, but Nginx resolved localhost to the IPv6 address ::1, where nothing was listening. Changed the app to bind to 0.0.0.0 and it worked instantly. If your 502 makes no logical sense, check what address your app is binding to.
App crashes on startup
Read the logs: pm2 logs myapp --err --lines 50 or sudo journalctl -u myapp --no-pager -n 50. Nine times out of ten it's a missing environment variable, or you forgot to run npm install after pulling new code that added a dependency.
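For the missing-variable case under systemd, EnvironmentFile is the tidy fix. A sketch, assuming a hypothetical /home/ubuntu/myapp/.env file with KEY=value lines:
# Add under [Service] in /etc/systemd/system/myapp.service
EnvironmentFile=/home/ubuntu/myapp/.env
# Then pick up the change:
#   sudo systemctl daemon-reload && sudo systemctl restart myapp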
When You're Done
The most expensive EC2 resource is the one you forgot about. After you finish a project, a test, or a demo, check what's still running. These three commands will tell you:
# List all running instances in your current region
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId, InstanceType, LaunchTime, Tags[?Key=='Name'].Value | [0]]" \
  --output table
# Check all regions for running instances (the forgotten ones are never in your default region)
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text); do
  echo "--- $region ---"
  aws ec2 describe-instances --region "$region" \
    --filters "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].[InstanceId, InstanceType]" \
    --output text
done
# Terminate a specific instance when you're sure you're done with it
aws ec2 terminate-instances --instance-ids i-0abc123def456789
Run the first command after every deployment session. Run the second one monthly. The third one is what you use when you find something that shouldn't be there.