Let's Deploy It to AWS and Set Up a CI/CD Pipeline
We have a working Vapor backend and a fully integrated iOS app. But the server is running on localhost, which is only useful to us. Let's fix that.
In this post, we'll Dockerize the Vapor app, switch from SQLite to Postgres, deploy to AWS using ECS Fargate, and set up a GitHub Actions pipeline that deploys automatically on every push to main. By the end, you'll have a production URL that the iOS app can hit from anywhere.
You can find the complete deployment configuration on GitHub.
The Deployment Strategy
Here's the high-level plan:
GitHub Push -> GitHub Actions -> Docker Build -> Push to ECR -> Deploy to ECS Fargate
                                                                        |
                                                                        v
                                                                  RDS Postgres
Docker packages the Vapor app into a portable container. Swift runs on Linux in production, and Docker makes that painless.
ECR (Elastic Container Registry) stores the Docker images. Think of it as a private Docker Hub.
ECS Fargate runs the container without managing servers. You define the CPU, memory, and networking, and AWS handles the rest. No EC2 instances to patch, no auto-scaling groups to configure.
RDS Postgres is a managed database. Automatic backups, failover, and maintenance windows handled by AWS.
This setup costs roughly $30-50/month for a small app. Fargate's per-second billing means you only pay for what you use.
Dockerizing the Vapor App
Swift on Linux requires a specific build environment. Docker handles this with a multi-stage build that keeps the final image small:
# Build stage
FROM swift:6.0-noble AS build
WORKDIR /build
# Copy package files first for better caching
COPY Package.swift Package.resolved ./
RUN swift package resolve
# Copy source and build
COPY . .
RUN swift build -c release --static-swift-stdlib
# Run stage
FROM ubuntu:noble
WORKDIR /app
# Install runtime dependencies (curl is needed by the ECS container health check)
RUN apt-get update && apt-get install -y \
libcurl4 \
libxml2 \
curl \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
COPY --from=build /build/.build/release/App /app/App
# Copy any resources (like Public/ or Resources/)
COPY --from=build /build/Public ./Public
ENV ENVIRONMENT=production
EXPOSE 8080
ENTRYPOINT ["./App"]
CMD ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
The multi-stage build is important. The build stage includes the entire Swift toolchain (several GB). The run stage only includes the compiled binary and minimal runtime dependencies. The final image is typically under 100MB.
The --static-swift-stdlib flag statically links the Swift standard library, so you don't need to install Swift's runtime on the production image. This makes the container more portable and the image smaller.
The Package.swift and Package.resolved manifests are copied first, before the source code. Docker caches layers, so if your dependencies haven't changed, the swift package resolve step is cached and subsequent builds only recompile your code. This cuts build times significantly.
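A companion file worth adding here is a .dockerignore. Without it, the COPY . . step pulls local build artifacts and the SQLite file into the build context, bloating it and busting the layer cache. A minimal sketch:

```
.build/
.git/
.swiftpm/
db.sqlite
```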
Switching to Postgres
SQLite was convenient for local development, but it doesn't work well in containerized environments (the database file is ephemeral). Switch to FluentPostgresDriver for production:
// Package.swift - add the postgres driver
.package(url: "https://github.com/vapor/fluent-postgres-driver.git", from: "2.8.0"),
Update configure.swift to use environment-based configuration:
import Fluent
import FluentPostgresDriver
import FluentSQLiteDriver
import Vapor
public func configure(_ app: Application) async throws {
if let databaseURL = Environment.get("DATABASE_URL") {
try app.databases.use(
.postgres(url: databaseURL),
as: .psql
)
} else {
// Fall back to SQLite for local development
app.databases.use(.sqlite(.file("db.sqlite")), as: .sqlite)
}
app.migrations.add(CreateLandmark())
app.migrations.add(SeedLandmarks())
try routes(app)
}
When DATABASE_URL is set (in production), Vapor uses Postgres. When it's not set (local development), it falls back to SQLite. No code changes needed, no build flags, just an environment variable.
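To exercise the Postgres code path before touching AWS, you can run a throwaway Postgres container locally and point DATABASE_URL at it. A sketch, using assumed local-only credentials:

```shell
# Local-only credentials for a disposable dev database (assumed values)
DB_USER=vapor
DB_PASSWORD=secret
DB_NAME=landmarks

# Same URL shape the app expects in production
DATABASE_URL="postgres://${DB_USER}:${DB_PASSWORD}@localhost:5432/${DB_NAME}"
export DATABASE_URL

# Start Postgres 16 in the background if Docker is available
if command -v docker >/dev/null 2>&1; then
  docker run --rm -d --name landmarks-pg \
    -e POSTGRES_USER="$DB_USER" \
    -e POSTGRES_PASSWORD="$DB_PASSWORD" \
    -e POSTGRES_DB="$DB_NAME" \
    -p 5432:5432 postgres:16
fi

echo "$DATABASE_URL"
```

With DATABASE_URL exported, swift run App serve picks up the Postgres branch of configure(_:) instead of the SQLite fallback.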
Setting Up AWS Infrastructure
You'll need the AWS CLI configured with appropriate permissions. Here's the infrastructure setup:
ECR Repository
aws ecr create-repository \
--repository-name landmarks-api \
--region us-west-2
This creates a private Docker registry for your images. Note the repository URI in the output - you'll need it for pushing images.
RDS Postgres Instance
aws rds create-db-instance \
--db-instance-identifier landmarks-db \
--db-instance-class db.t4g.micro \
--engine postgres \
--engine-version 16 \
--master-username landmarks_admin \
--master-user-password "your-secure-password" \
--allocated-storage 20 \
--vpc-security-group-ids sg-your-security-group \
--db-name landmarks \
--no-publicly-accessible \
--region us-west-2
A db.t4g.micro instance is plenty for a small app and falls under the free tier for the first year. The --no-publicly-accessible flag keeps the database inside the VPC where only your ECS tasks can reach it.
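The instance takes several minutes to provision. Once it's available, you can pull the endpoint hostname, which you'll need for the connection string stored in Secrets Manager. A sketch:

```shell
# Block until the instance is ready
aws rds wait db-instance-available \
  --db-instance-identifier landmarks-db \
  --region us-west-2

# Print the endpoint hostname
aws rds describe-db-instances \
  --db-instance-identifier landmarks-db \
  --query 'DBInstances[0].Endpoint.Address' \
  --output text \
  --region us-west-2
```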
ECS Cluster and Task Definition
Create the cluster:
aws ecs create-cluster \
--cluster-name landmarks-cluster \
--region us-west-2
The task definition tells ECS how to run your container. Create a task-definition.json:
{
"family": "landmarks-api",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "256",
"memory": "512",
"executionRoleArn": "arn:aws:iam::YOUR_ACCOUNT:role/ecsTaskExecutionRole",
"containerDefinitions": [
{
"name": "landmarks-api",
"image": "YOUR_ACCOUNT.dkr.ecr.us-west-2.amazonaws.com/landmarks-api:latest",
"portMappings": [
{
"containerPort": 8080,
"protocol": "tcp"
}
],
"environment": [
{
"name": "ENVIRONMENT",
"value": "production"
}
],
"secrets": [
{
"name": "DATABASE_URL",
"valueFrom": "arn:aws:secretsmanager:us-west-2:YOUR_ACCOUNT:secret:landmarks/database-url"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/landmarks-api",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "ecs"
}
},
"healthCheck": {
"command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 60
}
}
]
}
Register it:
aws ecs register-task-definition \
--cli-input-json file://task-definition.json
256 CPU units (0.25 vCPU) and 512MB of memory make up the smallest Fargate configuration. For a Swift server handling moderate traffic, that's sufficient. Scale up if you see resource pressure in CloudWatch.
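One thing the task definition references but doesn't create: the CloudWatch log group. The awslogs driver won't create /ecs/landmarks-api on its own, and tasks fail to start if the group is missing. Create it once up front:

```shell
aws logs create-log-group \
  --log-group-name /ecs/landmarks-api \
  --region us-west-2

# Optional: keep costs down by expiring old logs
aws logs put-retention-policy \
  --log-group-name /ecs/landmarks-api \
  --retention-in-days 30 \
  --region us-west-2
```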
Environment Configuration
Store the database URL in AWS Secrets Manager rather than plain text environment variables:
aws secretsmanager create-secret \
--name landmarks/database-url \
--secret-string "postgres://landmarks_admin:your-secure-password@landmarks-db.xxxxx.us-west-2.rds.amazonaws.com:5432/landmarks" \
--region us-west-2
The task definition references this secret, and ECS injects it as an environment variable at runtime. The secret never appears in your code, Docker image, or task definition.
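One prerequisite: it's the task execution role (the ecsTaskExecutionRole referenced in the task definition) that fetches this secret at task launch, so that role needs read access. A minimal policy statement sketch, attachable with aws iam put-role-policy (the trailing * accounts for the random suffix Secrets Manager appends to secret ARNs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-west-2:YOUR_ACCOUNT:secret:landmarks/database-url*"
    }
  ]
}
```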
ECS Service
Create the service that keeps your container running:
aws ecs create-service \
--cluster landmarks-cluster \
--service-name landmarks-api \
--task-definition landmarks-api \
--desired-count 1 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={subnets=[subnet-xxx,subnet-yyy],securityGroups=[sg-xxx],assignPublicIp=ENABLED}" \
--region us-west-2
ECS ensures your container stays running. If it crashes, ECS restarts it. If you deploy a new version, ECS performs a rolling update with zero downtime.
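It's worth confirming the service actually reached a steady state before wiring up CI. This sketch lists the service's deployments; a single deployment whose runningCount matches desiredCount means the rollout finished:

```shell
aws ecs describe-services \
  --cluster landmarks-cluster \
  --services landmarks-api \
  --query 'services[0].deployments' \
  --region us-west-2
```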
The GitHub Actions Pipeline
Automate the entire build-and-deploy process with a GitHub Actions workflow:
name: Deploy Backend
on:
push:
branches: [main]
workflow_dispatch:
env:
AWS_REGION: us-west-2
ECR_REPOSITORY: landmarks-api
ECS_CLUSTER: landmarks-cluster
ECS_SERVICE: landmarks-api
TASK_DEFINITION: landmarks-api
jobs:
deploy:
runs-on: ubuntu-latest
environment: production
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v2
- name: Build, tag, and push image
id: build-image
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
IMAGE_TAG: ${{ github.sha }}
run: |
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
docker tag $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG $ECR_REGISTRY/$ECR_REPOSITORY:latest
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest
echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT
- name: Download current task definition
run: |
aws ecs describe-task-definition \
--task-definition $TASK_DEFINITION \
--query taskDefinition \
> task-definition.json
- name: Update task definition with new image
id: task-def
uses: aws-actions/amazon-ecs-render-task-definition@v1
with:
task-definition: task-definition.json
container-name: landmarks-api
image: ${{ steps.build-image.outputs.image }}
- name: Deploy to ECS
uses: aws-actions/amazon-ecs-deploy-task-definition@v2
with:
task-definition: ${{ steps.task-def.outputs.task-definition }}
service: ${{ env.ECS_SERVICE }}
cluster: ${{ env.ECS_CLUSTER }}
wait-for-service-stability: true
Every push to main triggers this pipeline. It builds the Docker image, pushes it to ECR, updates the task definition with the new image tag, and deploys to ECS. The wait-for-service-stability step ensures the pipeline reports failure if the new container doesn't start successfully.
The image is tagged with both the commit SHA (for traceability) and latest (for convenience). You can always roll back to a specific commit by deploying its SHA-tagged image.
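Rolling back is a service update away. Every deploy registers a new task definition revision pinned to a SHA-tagged image, so pointing the service at an earlier revision restores that exact build. A sketch (revision 12 is a placeholder):

```shell
# List recent revisions to find the known-good one
aws ecs list-task-definitions \
  --family-prefix landmarks-api \
  --sort DESC \
  --region us-west-2

# Point the service back at it; ECS performs a normal rolling update
aws ecs update-service \
  --cluster landmarks-cluster \
  --service landmarks-api \
  --task-definition landmarks-api:12 \
  --region us-west-2
```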
Add these secrets to your GitHub repository settings:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
Use an IAM user with permissions scoped to ECR push, ECS describe/register/deploy, and iam:PassRole on the task execution role (registering a task definition that references the role requires it).
Database Migrations in Production
In development, we used autoMigrate() for convenience. In production, you want migrations to run once per deployment, not on every container startup. Add a Docker entrypoint script:
#!/bin/bash
set -e
# Run migrations
./App migrate --yes
# Start the server
exec ./App serve --env production --hostname 0.0.0.0 --port 8080
Update the Dockerfile to use this entrypoint:
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
The migrate --yes command runs any pending migrations and exits. The --yes flag auto-confirms, which is necessary in a non-interactive environment. If a migration fails, the script exits with a non-zero code, the container stops, and ECS fails the deployment instead of rolling out a broken build.
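If a bad migration does slip through, the same binary can undo it. Vapor's migrate command accepts --revert, which rolls back the most recently applied batch; run it as a one-off ECS task or locally against the database:

```shell
# Revert the last migration batch non-interactively
./App migrate --revert --yes
```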
Remove autoMigrate() from configure.swift for production builds, or gate it behind an environment check:
if app.environment == .development {
try await app.autoMigrate()
}
Health Checks and Monitoring
Add a health check endpoint to your Vapor routes:
app.get("health") { req async throws -> HTTPStatus in
// Verify database connectivity
_ = try await LandmarkModel.query(on: req.db).count()
return .ok
}
This endpoint does more than return 200. It actually queries the database, verifying that the full stack is operational. If the database connection is down, the health check fails, and ECS knows to restart the container.
The task definition's health check configuration pings this endpoint every 30 seconds:
"healthCheck": {
"command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 60
}
The startPeriod gives the container 60 seconds to start up before health checks begin. Swift servers typically start in under a second, but database migrations might need more time on first deployment.
For monitoring, CloudWatch Logs captures everything your Vapor app writes to stdout. The log configuration in the task definition routes logs to the /ecs/landmarks-api log group. You can set up CloudWatch alarms on error rates, response times, or custom metrics.
A basic alarm for 5xx errors (this assumes an Application Load Balancer in front of the service - the AWS/ApplicationELB metrics only exist once an ALB is in the path, and the dimension value comes from your ALB's ARN):
aws cloudwatch put-metric-alarm \
--alarm-name "landmarks-api-5xx" \
--metric-name "HTTPCode_Target_5XX_Count" \
--namespace "AWS/ApplicationELB" \
--dimensions Name=LoadBalancer,Value=app/landmarks-alb/xxxxx \
--statistic Sum \
--period 300 \
--evaluation-periods 1 \
--threshold 10 \
--comparison-operator GreaterThanThreshold \
--alarm-actions arn:aws:sns:us-west-2:YOUR_ACCOUNT:alerts
Updating the iOS App
With the backend deployed, update the iOS app to point at the production URL. Use Xcode schemes to switch between environments:
@main
struct LandmarksApp: App {
let services: Services
init() {
let baseURL: URL
#if DEBUG
if ProcessInfo.processInfo.environment["USE_PRODUCTION"] == "1" {
baseURL = URL(string: "https://api.landmarks.example.com")!
} else {
baseURL = URL(string: "http://localhost:8080")!
}
#else
baseURL = URL(string: "https://api.landmarks.example.com")!
#endif
services = .live(baseURL: baseURL)
}
var body: some Scene {
WindowGroup {
ContentView()
.services(services)
}
}
}
In Xcode, create two schemes:
- Landmarks (Local) - The default Debug scheme, hits localhost:8080
- Landmarks (Production) - Debug scheme with the USE_PRODUCTION=1 environment variable, hits the real server
Release builds always use the production URL. This setup lets you test against either server without changing code.
For a cleaner approach, define the base URL in a configuration file or Info.plist entry that varies by build configuration. But for most apps, the approach above is simpler and more explicit.
Wrapping Up
Docker makes Swift portable. The same Vapor app that runs on your Mac runs identically in an Ubuntu container on AWS. The multi-stage build keeps images small, and the static Swift stdlib means no runtime dependencies.
ECS Fargate removes server management. You define CPU, memory, and networking. AWS handles provisioning, patching, and scaling. If your container dies, ECS restarts it. If you push a new version, ECS does a rolling deploy.
GitHub Actions ties it all together. Push to main, and your code is built, tested, containerized, and deployed without manual intervention.
The total infrastructure is straightforward: one ECR repository, one ECS cluster with one service, one RDS instance, and one GitHub Actions workflow. As your app grows, you can add a load balancer, auto-scaling rules, and multiple environments. But this foundation handles more traffic than you'd expect.
In the next post, we'll write integration tests that boot a real Vapor server and test every endpoint with actual HTTP requests. Same patterns, same Swift, real confidence.