Backend Deployment - AWS EC2 Manual Deployment¶
This guide covers a manual EC2 deployment of the backend using Docker Compose + config in SSM Parameter Store, supporting both stage and prod.
Replace `<ENV>` throughout with either `stage` or `prod`.
Examples used:
ENV=stage # or prod
PROJECT=mow # your project name
Overview¶
This phase covers:
- Prerequisites
- Create the EC2 Role and Policy
- Tagging Roles and Instance Profiles
- Create the Security Group
- Create the Key Pair (for SSH)
- Allocate and Tag the Elastic IP
- Add DNS A Records for the given env's domains
- Create API keys / update website restrictions for Google Maps
- Create env file for this env
- Upload + tag env file to SSM
- Make the Bootstrap Bundle + Upload to S3
- Build + Push Images to ECR
- Prepare + Launch the EC2 Instance
- Create EBS Snapshot Policy for Postgres Data (by Name tag)
- Verify Connectivity
- Sanity Tests + Troubleshooting
- Cleanup (Optional)
1. Prerequisites¶
You should have:
- Working AWS CLI profile:
PROFILE="admin-cli-sso"
- The backend repo checked out, containing:
docker-compose.base.yml
docker-compose.prod.yml
docker-compose.deploy.yml
aws/scripts/user-data.sh
aws/scripts/build-and-push-ecr.sh
aws/scripts/ssm-to-env.sh
aws/scripts/ssm-seed-from-env.sh
aws/scripts/apply-ecr-lifecycle.sh
aws/scripts/apply-ebs-snapshot-policy.sh
- Knowledge of which environment you're deploying:
PROJECT=mow # or your custom project name
ENV=stage # or prod
2. Create the EC2 Role + Policy¶
The role name must be environment-aware; do not hardcode `stage`:
ROLE_NAME="${PROJECT}-backend-ec2-${ENV}-role"
Create the Role¶
aws iam create-role \
--role-name "$ROLE_NAME" \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com" },
"Action": "sts:AssumeRole"
}
]
}' \
--profile "$PROFILE"
Tag the Role¶
After creating the role, tag it so ownership, environment, and automation are clear. Tags do not propagate automatically to the instance profile (you'll tag that in the next section).
Why tags here?¶
- Consistent `PROJECT:` tag schema for searchability and governance
- ABAC/guardrail readiness (e.g., restricting `iam:PassRole` by tags; a generic sketch follows this list)
- Human-friendly `Name` to spot it quickly in the console
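For illustration only, the tags applied here are what make tag-based (ABAC) guardrails possible later. A minimal sketch of such a guardrail, not created anywhere in this guide (and condition-key support varies by action), could deny terminating prod-tagged instances:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyTerminateOfProdTaggedInstances",
      "Effect": "Deny",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/mow:environment": "prod" }
      }
    }
  ]
}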
Apply tags to the role¶
# Uses variables defined earlier:
# ENV=stage|prod, PROFILE=admin-cli-sso, ROLE_NAME="${PROJECT}-backend-ec2-${ENV}-role"
aws iam tag-role \
--role-name "$ROLE_NAME" \
--tags \
Key=${PROJECT}:project,Value=${PROJECT} \
Key=${PROJECT}:component,Value=backend \
Key=${PROJECT}:environment,Value=${ENV} \
Key=${PROJECT}:owner,Value=james \
Key=${PROJECT}:managed-by,Value=manual \
Key=Name,Value=${ROLE_NAME} \
--profile "$PROFILE"
- Roles do not need cost-center tags
- No repo tag required, since these are AWS-only objects
Verify tags¶
aws iam list-role-tags \
--role-name "$ROLE_NAME" \
--profile "$PROFILE"
Expected (example for ENV=prod):
{
"Tags": [
{"Key": "mow:project", "Value": "mow"},
{"Key": "mow:component", "Value": "backend"},
{"Key": "mow:environment", "Value": "prod"},
{"Key": "mow:owner", "Value": "james"},
{"Key": "mow:managed-by", "Value": "manual"},
{"Key": "Name", "Value": "mow-backend-ec2-prod-role"}
]
}
Notes
- Keep tags environment-aware (avoid hardcoding).
- IAM resources are limited to 50 tags per resource.
- You'll also tag the instance profile created for this role in the next step.
Attach SSM Read Policy (env-restricted)¶
This policy allows SSM list + single-get under:
/${PROJECT}/backend/<ENV>/*
- Replace `<ENV>` appropriately.
- SSM paths must be env-aware.
aws iam put-role-policy \
--role-name "$ROLE_NAME" \
--policy-name "${PROJECT}-backend-ec2-${ENV}-ssm-read" \
--policy-document "{
\"Version\": \"2012-10-17\",
\"Statement\": [
{
\"Sid\": \"AllowListByPathForProject\",
\"Effect\": \"Allow\",
\"Action\": \"ssm:GetParametersByPath\",
\"Resource\": [
\"arn:aws:ssm:us-east-1:*:parameter/${PROJECT}/backend\",
\"arn:aws:ssm:us-east-1:*:parameter/${PROJECT}/backend/*\"
]
},
{
\"Sid\": \"AllowEnvRead\",
\"Effect\": \"Allow\",
\"Action\": [
\"ssm:GetParameter\",
\"ssm:GetParameters\"
],
\"Resource\": \"arn:aws:ssm:us-east-1:*:parameter/${PROJECT}/backend/*/${ENV}/*\"
},
{
\"Sid\": \"DenyNonEnvSingleGets\",
\"Effect\": \"Deny\",
\"Action\": [
\"ssm:GetParameter\",
\"ssm:GetParameters\"
],
\"NotResource\": \"arn:aws:ssm:us-east-1:*:parameter/${PROJECT}/backend/*/${ENV}/*\"
},
{
\"Sid\": \"DecryptViaSSMOnly\",
\"Effect\": \"Allow\",
\"Action\": \"kms:Decrypt\",
\"Resource\": \"*\",
\"Condition\": {
\"StringEquals\": {
\"kms:ViaService\": \"ssm.us-east-1.amazonaws.com\"
}
}
}
]
}" \
--profile "$PROFILE"
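Later, once the instance is running with this role attached (and assuming the AWS CLI is available on it), you can spot-check the restriction from the instance itself. The service name django and key DJANGO_SECRET_KEY below are placeholders; use a parameter you actually seeded:
# Run on the instance; credentials come from the instance role
# (set PROJECT/ENV first, or substitute literal values)
aws ssm get-parameter \
  --name "/${PROJECT}/backend/django/${ENV}/DJANGO_SECRET_KEY" \
  --with-decryption \
  --region us-east-1
# The same read against any other environment's path should fail with
# AccessDenied because of the DenyNonEnvSingleGets statement.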
Put EC2 Describe Policy¶
aws iam put-role-policy \
--role-name "$ROLE_NAME" \
--policy-name "${PROJECT}-backend-ec2-${ENV}-ec2-describe" \
--policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowEC2Describe",
"Effect": "Allow",
"Action": [
"ec2:DescribeVolumes",
"ec2:DescribeInstances"
],
"Resource": "*"
}
]
}' \
--profile "$PROFILE"
Verify¶
aws iam list-role-policies --role-name "$ROLE_NAME" --profile "$PROFILE"
Expected (example for ENV=prod):
{
    "PolicyNames": [
        "mow-backend-ec2-prod-ec2-describe",
        "mow-backend-ec2-prod-ssm-read"
    ]
}
Attach ECR Read-Only Policy¶
aws iam attach-role-policy \
--role-name "$ROLE_NAME" \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
--profile "$PROFILE"
Verify¶
aws iam list-attached-role-policies --role-name "$ROLE_NAME" --profile "$PROFILE"
Expected (the read-only managed policy is attached):
{
    "AttachedPolicies": [
        {
            "PolicyName": "AmazonEC2ContainerRegistryReadOnly",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
        }
    ]
}
Create Instance Profile + attach role¶
PROFILE_NAME="${PROJECT}-backend-ec2-${ENV}-profile"
aws iam create-instance-profile \
--instance-profile-name "$PROFILE_NAME" \
--profile "$PROFILE"
aws iam add-role-to-instance-profile \
--instance-profile-name "$PROFILE_NAME" \
--role-name "$ROLE_NAME" \
--profile "$PROFILE"
Verify:
aws iam get-instance-profile \
--instance-profile-name "$PROFILE_NAME" \
--profile "$PROFILE"
3. Tag Instance Profile¶
- Instance Profiles do not need cost-center tags
- No repo tag required, since these are AWS-only objects
aws iam tag-instance-profile \
--instance-profile-name "$PROFILE_NAME" \
--tags \
Key=${PROJECT}:project,Value=${PROJECT} \
Key=${PROJECT}:component,Value=backend \
Key=${PROJECT}:environment,Value=${ENV} \
Key=${PROJECT}:owner,Value=james \
Key=${PROJECT}:managed-by,Value=manual \
Key=Name,Value=${PROFILE_NAME} \
--profile "$PROFILE"
4. Create the Security Group¶
Caddy and Let's Encrypt require both HTTP (80) and HTTPS (443) for ACME validation.
SG_NAME="${PROJECT}-backend-ec2-${ENV}-web-sg"
REGION=us-east-1
VPC_ID=$(aws ec2 describe-vpcs \
--filters "Name=isDefault,Values=true" \
--query "Vpcs[0].VpcId" \
--output text \
--region "$REGION" \
--profile "$PROFILE")
echo $VPC_ID
SG_ID=$(aws ec2 create-security-group \
--group-name "$SG_NAME" \
--description "Web access for ${PROJECT} backend ${ENV}" \
--vpc-id "$VPC_ID" \
--profile "$PROFILE" \
--query "GroupId" \
--output text)
echo "Security Group ID: $SG_ID"
Tag the Security Group¶
After creating the Security Group, apply standard PROJECT: tags for ownership, environment, and traceability.
# Uses:
# ENV=stage|prod
# PROFILE=admin-cli-sso
# REGION=us-east-1
# SG_NAME="${PROJECT}-backend-ec2-${ENV}-web-sg"
# SG_ID (from the last step)
aws ec2 create-tags \
--resources "$SG_ID" \
--tags \
Key=${PROJECT}:project,Value=${PROJECT} \
Key=${PROJECT}:component,Value=backend \
Key=${PROJECT}:environment,Value=${ENV} \
Key=${PROJECT}:owner,Value=james \
Key=${PROJECT}:managed-by,Value=manual \
Key=Name,Value=${SG_NAME} \
--region "$REGION" \
--profile "$PROFILE"
Allow Inbound Access¶
aws ec2 authorize-security-group-ingress \
--group-name "$SG_NAME" \
--protocol tcp --port 80 --cidr 0.0.0.0/0 \
--profile "$PROFILE"
aws ec2 authorize-security-group-ingress \
--group-name "$SG_NAME" \
--protocol tcp --port 443 --cidr 0.0.0.0/0 \
--profile "$PROFILE"
Restrict SSH (22) to your public IP only:
MY_IP=$(curl -s ifconfig.me)
echo $MY_IP
aws ec2 authorize-security-group-ingress \
--group-name "$SG_NAME" \
--protocol tcp --port 22 --cidr ${MY_IP}/32 \
--profile "$PROFILE"
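If your public IP changes later, one way to rotate the SSH rule is to revoke the old CIDR and authorize the new one. A sketch (OLD_IP is an example placeholder for whatever CIDR is currently on the rule):
OLD_IP=203.0.113.10   # example: the previously authorized address
MY_IP=$(curl -s ifconfig.me)
aws ec2 revoke-security-group-ingress \
  --group-name "$SG_NAME" \
  --protocol tcp --port 22 --cidr ${OLD_IP}/32 \
  --profile "$PROFILE"
aws ec2 authorize-security-group-ingress \
  --group-name "$SG_NAME" \
  --protocol tcp --port 22 --cidr ${MY_IP}/32 \
  --profile "$PROFILE"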
5. Create the Key Pair¶
KEY_NAME="${PROJECT}-backend-ec2-${ENV}-key"
aws ec2 create-key-pair \
--key-name "$KEY_NAME" \
--query 'KeyMaterial' \
--region "$REGION" \
--profile "$PROFILE" \
--output text > "${KEY_NAME}.pem"
chmod 400 "${KEY_NAME}.pem"
Tag the Key Pair¶
EC2 Key Pairs can be tagged.
After creating the key, tag it with standard PROJECT: tags so ownership and environment are clear.
# Uses:
# ENV=stage|prod
# PROFILE=admin-cli-sso
# REGION=us-east-1
# KEY_NAME="${PROJECT}-backend-ec2-${ENV}-key"
# Look up Key Pair ID (required for tagging)
KEYPAIR_ID=$(aws ec2 describe-key-pairs \
--key-names "$KEY_NAME" \
--query "KeyPairs[0].KeyPairId" \
--output text \
--region "$REGION" \
--profile "$PROFILE")
echo "KeyPair ID: $KEYPAIR_ID"
aws ec2 create-tags \
--resources "$KEYPAIR_ID" \
--tags \
Key=${PROJECT}:project,Value=${PROJECT} \
Key=${PROJECT}:component,Value=backend \
Key=${PROJECT}:environment,Value=${ENV} \
Key=${PROJECT}:owner,Value=james \
Key=${PROJECT}:managed-by,Value=manual \
Key=Name,Value=${KEY_NAME} \
--region "$REGION" \
--profile "$PROFILE"
6. Allocate + Tag Elastic IP¶
ALLOC=$(aws ec2 allocate-address \
--domain vpc \
--query AllocationId \
--output text \
--profile "$PROFILE")
echo $ALLOC
Tag it:
aws ec2 create-tags \
--resources "$ALLOC" \
--tags \
Key=${PROJECT}:project,Value=${PROJECT} \
Key=${PROJECT}:component,Value=backend \
Key=${PROJECT}:environment,Value=${ENV} \
Key=${PROJECT}:owner,Value=james \
Key=${PROJECT}:managed-by,Value=manual \
Key=${PROJECT}:cost-center,Value=ops-aws-ec2 \
Key=Name,Value=${PROJECT}-backend-ec2-${ENV}-eip \
--profile "$PROFILE"
7. Add DNS A Records¶
Before configuring Caddy + Let's Encrypt, create A records that point your environment's public Elastic IP (EIP) to your backend domain(s).
Get the EIP Address¶
If you stored the allocation ID in ALLOC (from the earlier step), retrieve the public IP:
EIP=$(aws ec2 describe-addresses \
--allocation-ids "$ALLOC" \
--query "Addresses[0].PublicIp" \
--output text \
--region "$REGION" \
--profile "$PROFILE")
echo "$EIP"
If you need to locate the EIP by tag instead:
EIP=$(aws ec2 describe-addresses \
--filters "Name=tag:${PROJECT}:environment,Values=${ENV}" \
"Name=tag:${PROJECT}:component,Values=backend" \
--query "Addresses[0].PublicIp" \
--output text \
--region "$REGION" \
--profile "$PROFILE")
echo "$EIP"
Result should look like:
3.214.xxx.xxx
Create A Records¶
In your DNS management system, create A records pointing to the EIP:
| Hostname Format | Example |
|---|---|
| `<domain_name>` | `mow.example.com` |
| `www.<domain_name>` | `www.mow.example.com` |
| `admin.<domain_name>` | `admin.mow.example.com` |
| `portal.<domain_name>` | `portal.mow.example.com` |
| `developer.<domain_name>` | `developer.mow.example.com` |
| `api.developer.<domain_name>` | `api.developer.mow.example.com` |
| `grafana.<domain_name>` | `grafana.mow.example.com` |
| `prometheus.<domain_name>` | `prometheus.mow.example.com` |
All of these should point to:
A → <EIP>
How you do this will vary depending on DNS provider (e.g., Route53, Cloudflare, Namecheap, Google Domains, etc.).
Once DNS propagates, the instance will be able to serve HTTPS via Caddy + LE.
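To confirm propagation before relying on HTTPS, a quick spot check with dig (the hostnames below are examples from the table above; substitute your own) should return the EIP:
dig +short mow.example.com
dig +short api.developer.mow.example.com
# Each command should print the EIP, e.g. 3.214.xxx.xxx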
8. Create/Update Google Maps API Website Restrictions¶
See: Google Maps API Keys — Setup, Security & Environment Guide
Update allowed IPs with the new Elastic IP.
9. Create env file for `<ENV>`¶
See: Environment Files for Dev, Stage, and Prod
Example:
.env.stage
10. Upload + tag env file into SSM¶
The application environment configuration for this EC2 deployment is stored in SSM Parameter Store under:
/${PROJECT}/backend/<ENV>/*
This step uploads the .env.<ENV> file to SSM and applies standardized tags.
For general background + advanced usage, see:
➡ AWS Systems Manager Parameter Store
Upload env file to SSM¶
We first define a small temporary tag file containing dynamic / implementation-specific metadata.
These tags are layered on top of the project's shared tag definitions.
EXTRA_TAGS="$(mktemp)"
cat > "$EXTRA_TAGS" <<'JSON'
[
{"Key":"PROJECT:project","Value":"PROJECT"},
{"Key":"PROJECT:owner","Value":"james"},
{"Key":"PROJECT:managed-by","Value":"manual"},
{"Key":"PROJECT:repo","Value":"github.com/<your-org>/PROJECT-backend"},
{"Key":"PROJECT:version","Value":"backend@1.0.0"}
]
JSON
Replace PROJECT with your actual project name.
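If you prefer not to hand-edit the file, an equivalent sketch uses an unquoted heredoc so ${PROJECT} expands automatically (the <your-org> placeholder is still yours to fill in):
EXTRA_TAGS="$(mktemp)"
cat > "$EXTRA_TAGS" <<JSON
[
  {"Key":"${PROJECT}:project","Value":"${PROJECT}"},
  {"Key":"${PROJECT}:owner","Value":"james"},
  {"Key":"${PROJECT}:managed-by","Value":"manual"},
  {"Key":"${PROJECT}:repo","Value":"github.com/<your-org>/${PROJECT}-backend"},
  {"Key":"${PROJECT}:version","Value":"backend@1.0.0"}
]
JSON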
Now seed SSM parameters from the .env.<ENV> file:
./aws/scripts/ssm-seed-from-env.sh \
--region us-east-1 \
--project ${PROJECT} \
--environment ${ENV} \
--env-file .env.${ENV} \
--print-keys \
--extra-tag-file "$EXTRA_TAGS" \
--profile "$PROFILE"
This will:
- Create or update parameters under `/${PROJECT}/backend/<ENV>/...`
- Apply `PROJECT:*` tags + `Name` consistently
- Print the keys written to SSM
- Merge in the ephemeral/dynamic tags from `EXTRA_TAGS`
Note: `EXTRA_TAGS` is used because some values (e.g., `PROJECT:owner`, `PROJECT:version`) may change across workflows or implementations.
For more advanced workflows (multi-service envs, layering, bulk modification, CI/CD tagging), see: ➡ AWS Systems Manager Parameter Store
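A quick, read-only way to confirm the upload is to list the parameter names under the environment path (this assumes the path layout described above):
aws ssm get-parameters-by-path \
  --path "/${PROJECT}/backend/${ENV}" \
  --recursive \
  --query "Parameters[].Name" \
  --output table \
  --region us-east-1 \
  --profile "$PROFILE"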
11. Create Bootstrap Bundle + Upload to S3¶
zip -j bootstrap-bundle.zip \
docker-compose.base.yml \
docker-compose.prod.yml \
docker-compose.deploy.yml \
aws/scripts/ssm-to-env.sh
Create bucket:
REGION="us-east-1"
BUCKET="${PROJECT}-bootstrap-artifacts-${REGION}"
PROJECT_PATH="${PROJECT}/backend/${ENV}"
aws s3api create-bucket \
--bucket "$BUCKET" \
--region "$REGION" \
--profile "$PROFILE" || true
Tag the bucket (replace `<your-org>` in REPO_URL with your GitHub organization):
REPO_URL="https://github.com/<your-org>/${PROJECT}-backend"
aws s3api put-bucket-tagging \
--bucket "$BUCKET" \
--tagging "{
\"TagSet\": [
{\"Key\": \"Name\", \"Value\": \"${PROJECT}-backend-${ENV}-bootstrap\"},
{\"Key\": \"${PROJECT}:project\", \"Value\": \"${PROJECT}\"},
{\"Key\": \"${PROJECT}:component\", \"Value\": \"backend\"},
{\"Key\": \"${PROJECT}:environment\", \"Value\": \"${ENV}\"},
{\"Key\": \"${PROJECT}:owner\", \"Value\": \"james\"},
{\"Key\": \"${PROJECT}:managed-by\", \"Value\": \"manual\"},
{\"Key\": \"${PROJECT}:cost-center\", \"Value\": \"ops-aws-s3\"},
{\"Key\": \"${PROJECT}:repo\", \"Value\": \"${REPO_URL}\"},
{\"Key\": \"${PROJECT}:version\", \"Value\": \"backend@1.0.0\"}
]
}" \
--profile "$PROFILE"
Upload:
aws s3 cp bootstrap-bundle.zip \
"s3://${BUCKET}/${PROJECT_PATH}/bootstrap-bundle.zip" \
--region "$REGION" \
--profile "$PROFILE" \
--sse AES256
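To confirm the bundle is in place before launching the instance:
aws s3 ls \
  "s3://${BUCKET}/${PROJECT_PATH}/" \
  --region "$REGION" \
  --profile "$PROFILE"
# Expect a single bootstrap-bundle.zip entry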
Give S3 read access to EC2 Role¶
aws iam put-role-policy \
--role-name "$ROLE_NAME" \
--policy-name "s3-read-bootstrap" \
--policy-document "{
\"Version\": \"2012-10-17\",
\"Statement\": [
{
\"Sid\": \"ListBucketPrefix\",
\"Effect\": \"Allow\",
\"Action\": [ \"s3:ListBucket\" ],
\"Resource\": \"arn:aws:s3:::${PROJECT}-bootstrap-artifacts-us-east-1\",
\"Condition\": {
\"StringLike\": {
\"s3:prefix\": [ \"${PROJECT}/backend/${ENV}/*\" ]
}
}
},
{
\"Sid\": \"GetObjectsUnderPrefix\",
\"Effect\": \"Allow\",
\"Action\": [ \"s3:GetObject\" ],
\"Resource\": \"arn:aws:s3:::${PROJECT}-bootstrap-artifacts-us-east-1/${PROJECT}/backend/${ENV}/*\"
}
]
}" \
--profile "$PROFILE"
12. Build + Push Images to ECR¶
Use the script:
aws/scripts/build-and-push-ecr.sh
Key features:
- Multi-arch build + tag
- Tagged with `:<SHA>` + `:<ENV>`
- Automatically creates ECR repos if missing
- Supports selective image builds, dry-run mode, SHA override
- Default environment = `stage` unless overridden via `--env prod`
- Project name defaults to `mow` but can be customized via `--project`
Examples:
./aws/scripts/build-and-push-ecr.sh --profile "$PROFILE"
./aws/scripts/build-and-push-ecr.sh --profile "$PROFILE" --env prod
./aws/scripts/build-and-push-ecr.sh --profile "$PROFILE" --project myapp --env stage
./aws/scripts/build-and-push-ecr.sh --profile "$PROFILE" --images django,caddy
./aws/scripts/build-and-push-ecr.sh --profile "$PROFILE" --sha "$(git rev-parse HEAD)"
./aws/scripts/build-and-push-ecr.sh --profile "$PROFILE" --no-env-tag # only :<SHA>
This will:
- Build images for your backend services
- Push images to `<account>.dkr.ecr.<region>.amazonaws.com/PROJECT/backend/ENV/SERVICE:<SHA>`, plus optionally the moving `:ENV` tag
- Maintain consistent repo structure across local + CI
- Create ECR repositories with proper tagging if they don't exist
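To spot-check that images and tags arrived in ECR, you can describe one repository. The `django` service name below is an example; use any of your services:
aws ecr describe-images \
  --repository-name "${PROJECT}/backend/${ENV}/django" \
  --query "imageDetails[].imageTags" \
  --output table \
  --region us-east-1 \
  --profile "$PROFILE"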
12b) Apply ECR Lifecycle Policy (Recommended)¶
After pushing images, apply a lifecycle policy to manage storage growth.
What the policy does
- Keeps the most recent 25 images tagged with the moving `:<ENV>` tag (`stage` or `prod`)
- Keeps total images ≤ 125, so you retain roughly 100 SHA-tagged builds
- Automatically expires older images
This strikes a balance: retains recent deploy history while enforcing cost controls.
Script¶
aws/scripts/apply-ecr-lifecycle.sh
✅ Features
- Auto-discovers repos under `PROJECT/backend/<ENV>/*`
- macOS + Linux compatible
- `DRY_RUN=1` by default (no-op)
- Clear output + safety checks
Example usage¶
# Preview only (default)
./aws/scripts/apply-ecr-lifecycle.sh --profile "$PROFILE" --region "$REGION" --env "$ENV" --project "$PROJECT"
# Apply lifecycle policies
DRY_RUN=0 ./aws/scripts/apply-ecr-lifecycle.sh --profile "$PROFILE" --region "$REGION" --env "$ENV" --project "$PROJECT"
Example:
# Stage
DRY_RUN=0 ./aws/scripts/apply-ecr-lifecycle.sh \
--profile admin-cli-sso \
--region us-east-1 \
--project mow \
--env stage
# Prod
DRY_RUN=0 ./aws/scripts/apply-ecr-lifecycle.sh \
--profile admin-cli-sso \
--region us-east-1 \
--project mow \
--env prod
You may repeat this step at any time to ensure lifecycle rules remain in place.
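To confirm a lifecycle policy is attached to a given repository (again using `django` as an example service):
aws ecr get-lifecycle-policy \
  --repository-name "${PROJECT}/backend/${ENV}/django" \
  --query "lifecyclePolicyText" \
  --output text \
  --region us-east-1 \
  --profile "$PROFILE"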
13. Prepare + Launch the EC2 Instance¶
Get SG ID if needed:
SG_ID=$(aws ec2 describe-security-groups \
--filters "Name=group-name,Values=${SG_NAME}" \
--query "SecurityGroups[*].GroupId" \
--output text \
--region "$REGION" \
--profile "$PROFILE")
Get AMI:
AMI=$(aws ssm get-parameters \
--names /aws/service/canonical/ubuntu/server/24.04/stable/current/arm64/hvm/ebs-gp3/ami-id \
--query "Parameters[0].Value" \
--output text \
--profile "$PROFILE")
Launch:
TEMPLATE="aws/scripts/user-data.sh"
TMP_USER_DATA="$(mktemp -t user-data.XXXXXXXX.sh)"
sed -e "s|__REGION__|${REGION}|g" \
-e "s|__PROJECT__|${PROJECT}|g" \
-e "s|__ENVIRONMENT__|${ENV}|g" \
"$TEMPLATE" > "$TMP_USER_DATA"
chmod 644 "$TMP_USER_DATA"
echo "Rendered user-data → $TMP_USER_DATA"
INSTANCE_ID=$(aws ec2 run-instances \
--image-id "$AMI" \
--instance-type t4g.medium \
--key-name "$KEY_NAME" \
--iam-instance-profile Name="$PROFILE_NAME" \
--security-group-ids "$SG_ID" \
--block-device-mappings '[
{
"DeviceName":"/dev/sda1",
"Ebs":{"VolumeSize":12,"VolumeType":"gp3","DeleteOnTermination":true,"Encrypted":true}
},
{
"DeviceName":"/dev/sdb",
"Ebs":{"VolumeSize":8,"VolumeType":"gp3","DeleteOnTermination":false,"Encrypted":true}
}
]' \
--user-data file://"$TMP_USER_DATA" \
--tag-specifications "[
{
\"ResourceType\":\"instance\",
\"Tags\":[
{\"Key\":\"Name\",\"Value\":\"${PROJECT}-backend-ec2-${ENV}\"},
{\"Key\":\"${PROJECT}:project\",\"Value\":\"${PROJECT}\"},
{\"Key\":\"${PROJECT}:component\",\"Value\":\"backend\"},
{\"Key\":\"${PROJECT}:owner\",\"Value\":\"james\"},
{\"Key\":\"${PROJECT}:managed-by\",\"Value\":\"manual\"},
{\"Key\":\"${PROJECT}:cost-center\",\"Value\":\"ops-aws-ec2\"},
{\"Key\":\"${PROJECT}:repo\",\"Value\":\"https://github.com/thecodejim/${PROJECT}-backend\"},
{\"Key\":\"${PROJECT}:environment\",\"Value\":\"${ENV}\"}
]
}
]" \
--count 1 \
--region "$REGION" \
--profile "$PROFILE" \
--query 'Instances[0].InstanceId' \
--output text)
aws ec2 wait instance-running \
--instance-ids "$INSTANCE_ID" \
--region "$REGION" \
--profile "$PROFILE"
echo $INSTANCE_ID
13b) Tag attached volumes¶
ROOT_VOL_ID=$(aws ec2 describe-instances \
--instance-ids "$INSTANCE_ID" \
--query "Reservations[0].Instances[0].BlockDeviceMappings[?DeviceName=='/dev/sda1'].Ebs.VolumeId" \
--output text --region "$REGION" --profile "$PROFILE")
DATA_VOL_ID=$(aws ec2 describe-instances \
--instance-ids "$INSTANCE_ID" \
--query "Reservations[0].Instances[0].BlockDeviceMappings[?DeviceName=='/dev/sdb'].Ebs.VolumeId" \
--output text --region "$REGION" --profile "$PROFILE")
echo "ROOT_VOL_ID=$ROOT_VOL_ID"
echo "DATA_VOL_ID=$DATA_VOL_ID"
REPO_URL="github.com/<your-org>/${PROJECT}-backend"
aws ec2 create-tags \
--resources "$ROOT_VOL_ID" \
--tags \
Key=${PROJECT}:project,Value=${PROJECT} \
Key=${PROJECT}:component,Value=backend \
Key=${PROJECT}:owner,Value=james \
Key=${PROJECT}:managed-by,Value=manual \
Key=${PROJECT}:cost-center,Value=ops-aws-ec2 \
Key=${PROJECT}:repo,Value="$REPO_URL" \
Key=${PROJECT}:environment,Value="$ENV" \
Key=Name,Value="${PROJECT}-backend-ec2-${ENV}-root" \
--region "$REGION" \
--profile "$PROFILE"
aws ec2 create-tags \
--resources "$DATA_VOL_ID" \
--tags \
Key=${PROJECT}:project,Value=${PROJECT} \
Key=${PROJECT}:component,Value=backend \
Key=${PROJECT}:owner,Value=james \
Key=${PROJECT}:managed-by,Value=manual \
Key=${PROJECT}:cost-center,Value=ops-aws-ec2 \
Key=${PROJECT}:repo,Value="$REPO_URL" \
Key=${PROJECT}:environment,Value="$ENV" \
Key=Name,Value="${PROJECT}-backend-ec2-${ENV}-postgres-data" \
--region "$REGION" \
--profile "$PROFILE"
Both volumes are now tagged env-aware, with cost-center and repo included.
13c) Associate Elastic IP¶
aws ec2 associate-address \
--instance-id "$INSTANCE_ID" \
--allocation-id "$ALLOC" \
--profile "$PROFILE"
14. Create EBS Snapshot Policy for Postgres Data (by Name tag)¶
We use Amazon Data Lifecycle Manager (DLM) to snapshot the Postgres data volume daily.
We assume your Postgres data volume (`/dev/sdb`, 8 GB) is already tagged with a `Name` like `PROJECT-backend-ec2-<ENV>-postgres-data`. The policy will target that `Name` tag.
Create DLM Default IAM Role¶
AWS Data Lifecycle Manager (DLM) requires a service IAM role so it can create and manage EBS snapshots on your behalf. This role contains the correct trust relationship and permissions for DLM to function.
Run once per account/region:
aws dlm create-default-role \
--region us-east-1 \
--profile admin-cli-sso
This will automatically create the role:
AWSDataLifecycleManagerDefaultRole
DLM lifecycle policies created later (e.g., for Postgres volume backups) will reference this role. You only need to create it once, unless it is deleted.
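You can confirm the role exists (and skip recreation on later runs) with:
aws iam get-role \
  --role-name AWSDataLifecycleManagerDefaultRole \
  --query "Role.Arn" \
  --output text \
  --profile "$PROFILE"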
Apply Snapshot Policy¶
Script:
aws/scripts/apply-ebs-snapshot-policy.sh
Preview (dry run):
./aws/scripts/apply-ebs-snapshot-policy.sh \
--profile "$PROFILE" \
--region "$REGION" \
--env "$ENV" \
--project "$PROJECT" \
--target-tag-key Name \
--target-tag-value "${PROJECT}-backend-ec2-${ENV}-postgres-data"
Apply policy:
DRY_RUN=0 ./aws/scripts/apply-ebs-snapshot-policy.sh \
--profile "$PROFILE" \
--region "$REGION" \
--env "$ENV" \
--project "$PROJECT" \
--target-tag-key Name \
--target-tag-value "${PROJECT}-backend-ec2-${ENV}-postgres-data"
Customize (optional):
# Keep 14 snapshots, run at 02:30 UTC
DRY_RUN=0 ./aws/scripts/apply-ebs-snapshot-policy.sh \
--profile "$PROFILE" \
--region "$REGION" \
--env "$ENV" \
--project "$PROJECT" \
--target-tag-key Name \
--target-tag-value "${PROJECT}-backend-ec2-${ENV}-postgres-data" \
--retain 14 \
--time 02:30
What this does
- Creates (or updates) a DLM policy targeting the volume with the specified Name tag
- Schedules daily snapshots at the requested UTC time (default 03:00)
- Retains the most recent N snapshots (default 7)
- Copies volume tags to each snapshot (cost/ownership tracking intact)
- Tags the policy itself with your standard PROJECT tags (`PROJECT:project`, `PROJECT:environment`, `PROJECT:repo`, `PROJECT:version`, etc.)
Tip: You can pass any tag key/value to the script if you prefer a different targeting tag later.
Verifying the Snapshot Policy¶
1) List DLM lifecycle policies (CLI)¶
aws dlm get-lifecycle-policies \
--region us-east-1 \
--profile admin-cli-sso \
--output table
You should see your policy listed with State = ENABLED.
To view details, including rules, retention, target tags:
aws dlm get-lifecycle-policy \
--policy-id <POLICY_ID> \
--region us-east-1 \
--profile admin-cli-sso \
--output json
Look for:
- `PolicyType: EBS_SNAPSHOT_MANAGEMENT`
- Your `TargetTags` (e.g., `"Name": "mow-backend-ec2-prod-postgres-data"`)
- Correct `Interval`, `Times`, and `RetainRule`
2) Check snapshots created by DLM¶
After the first scheduled run, check snapshots:
aws ec2 describe-snapshots \
--owner-ids self \
--filters "Name=tag:aws:dlm:lifecycle-policy-id,Values=<POLICY_ID>" \
--region us-east-1 \
--profile admin-cli-sso \
--output table
You should see snapshots with tags such as:
aws:dlm:lifecycle-policy-id = <POLICY_ID>
aws:dlm:lifecycle-schedule-name = Daily-Postgres-Backups-…
✅ If you see these tags, DLM is working properly.
3) Confirm snapshots exist in AWS Console¶
AWS Console →
EC2 → Snapshots → Filter by Tags →
aws:dlm:lifecycle-policy-id = <POLICY_ID>
You should see snapshots created on schedule.
4) Test early run (optional)¶
You can temporarily change:
- Schedule time → NOW + a few minutes to verify behavior sooner.
Re-run, wait, then confirm snapshot creation. After confirming, set schedule back to your desired time.
5) Confirm retention behavior¶
After >7 days (or manually create test snapshots), verify:
- Count is capped (e.g., last 7 snapshots kept)
- Old snapshots automatically deleted
✅ Expected Indicators¶
| Item | Expected |
|---|---|
| Policy listed | ✅ ENABLED |
| Target tags | ✅ correct Name / env |
| Snapshots tagged | ✅ aws:dlm:lifecycle-policy-id |
| Scheduled snapshots | ✅ appear after policy run |
| Snapshot lifecycle | ✅ retain N, purge old |
Troubleshooting¶
| Symptom | Likely Cause |
|---|---|
| No snapshots | Scheduled time hasn't passed |
| No snapshots | Wrong target tag |
| No snapshots | Role not created (create-default-role) |
| Policy not seen | Wrong region |
| Wrong volume | Tag mismatch on volume |
15. Verify Connectivity¶
Verify the instances are running¶
Stage environment
aws ec2 describe-instances \
--filters "Name=tag:${PROJECT}:environment,Values=stage" \
--query "Reservations[].Instances[].{ID:InstanceId,State:State.Name,IP:PublicIpAddress,Name:Tags[?Key=='Name']|[0].Value}" \
--region "$REGION" \
--profile "$PROFILE" \
--output table
Prod environment
aws ec2 describe-instances \
--filters "Name=tag:${PROJECT}:environment,Values=prod" \
--query "Reservations[].Instances[].{ID:InstanceId,State:State.Name,IP:PublicIpAddress,Name:Tags[?Key=='Name']|[0].Value}" \
--region "$REGION" \
--profile "$PROFILE" \
--output table
SSH¶
ssh -i "${KEY_NAME}.pem" ubuntu@<EIP>
If host changed:
ssh-keygen -R <EIP>
Logs on instance¶
sudo tail -n 200 /var/log/cloud-init-output.log
journalctl -u docker -n 200
docker compose logs -f
SSM command logs (from your machine)¶
aws ssm list-commands --region "$REGION" --profile "$PROFILE"
aws ssm list-command-invocations \
--details \
--query 'CommandInvocations[].[CommandId,InstanceId,Status,StatusDetails,CommandPlugins[0].Output]' \
--output table \
--region "$REGION" \
--profile "$PROFILE"
16. Sanity Tests + Troubleshooting¶
- ✅ SSH access
- ✅ HTTPS request to domain or EIP
- ✅ SSM pull succeeded
- ✅ App + DB healthy
Check:
docker ps
docker compose logs -f
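From your workstation, an external check against one of the DNS names confirms Caddy obtained certificates (the hostname is an example from the DNS table; the exact response depends on your app):
curl -sSI https://mow.example.com | head -n 5
# Expect an HTTP response over a valid certificate; a TLS error usually means
# DNS has not propagated yet or ports 80/443 are not reachable.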
Common issues:
| Issue | Fix |
|---|---|
| SSH timeout | Confirm SG + IP |
| Missing env | Confirm SSM upload |
| HTTPS fails | Confirm DNS + 80/443 open |
| Parameter denied | Confirm env policy correct |
| Container pull fails | Ensure images pushed |
17. Cleanup (Optional)¶
After deployment, you may remove your local bootstrap bundle and delete the uploaded S3 artifacts used only during EC2 bootstrap.
✅ Safe to remove — these artifacts are only needed during instance provisioning.
✅ Bucket itself may remain; cost is negligible unless storing many bundles.
Remove Local Bootstrap Bundle¶
rm -f bootstrap-bundle.zip
Delete Bootstrap Artifacts in S3¶
# Delete ONLY the environment bootstrap artifacts
aws s3 rm \
"s3://${BUCKET}/${PROJECT_PATH}/" \
--recursive \
--region "$REGION" \
--profile "$PROFILE"
This keeps the bucket itself but removes the uploaded bundle(s) under:
s3://PROJECT-bootstrap-artifacts-us-east-1/PROJECT/backend/<env>/
(Optional) Remove Bucket Entirely¶
⚠️ Only do this if no other envs are using the same bucket.
aws s3 rb \
"s3://${BUCKET}" \
--force \
--region "$REGION" \
--profile "$PROFILE"
The --force flag deletes all remaining objects before bucket removal.