

What is AppPack doing under-the-hood to make this all happen?

With AppPack, applications run in your AWS account. Your data, secrets, and application code never leave your account. Everything is set up in AWS managed services and designed to be secure, resilient, auto-scaling, and self-healing. These systems should require little to no effort to maintain (because Amazon is doing the heavy lifting for you).

During setup, a handful of EventBridge rules are created to send changes in your application's state to our control plane. These events contain only basic information, such as a CodeBuild build starting or a release task finishing. Our control plane determines what needs to happen in response to each event and makes the necessary API calls. For example, after a release task succeeds, any services that need to be started are updated or created as needed so ECS can roll out the new version of the application.
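As a rough sketch, an EventBridge rule that forwards build state changes could match an event pattern like the one below. The pattern is illustrative; AppPack's actual rules may be scoped differently.

```python
import json

# Illustrative event pattern for a rule matching CodeBuild build
# state-change events; the real AppPack rules may differ.
codebuild_pattern = {
    "source": ["aws.codebuild"],
    "detail-type": ["CodeBuild Build State Change"],
    "detail": {
        "build-status": ["IN_PROGRESS", "SUCCEEDED", "FAILED", "STOPPED"],
    },
}

# EventBridge takes the pattern as a JSON string when the rule is created:
# events.put_rule(Name="apppack-builds", EventPattern=json.dumps(codebuild_pattern))
print(json.dumps(codebuild_pattern, indent=2))
```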

Our control plane uses a locked-down IAM role to communicate with AWS in your account. This role can only do things like update services and task definitions in ECS, read CodeBuild metadata, etc. It is designed with the least privileges necessary to accomplish these tasks.
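As a minimal sketch of what least privilege means in practice, a policy of roughly this shape could back the role. The actions and resources here are assumptions, not AppPack's actual policy, which is more fine-grained and scoped to specific resources.

```python
import json

# Illustrative least-privilege policy; a real policy would scope each
# statement to specific resource ARNs rather than "*".
control_plane_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecs:UpdateService",
                "ecs:RegisterTaskDefinition",
                "codebuild:BatchGetBuilds",
            ],
            "Resource": "*",  # assumed; real policies are narrower
        }
    ],
}
print(json.dumps(control_plane_policy, indent=2))
```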

The stacks

During setup, you'll create multiple CloudFormation stacks for different resources.


At the account level, we set up:

  • An IAM OIDC provider which allows users to log in to AppPack and exchange those credentials for limited-time tokens to manage their applications.
  • Parameter Store values for your Docker Hub credentials. These are created directly via the CLI because SecureString parameters cannot be created by CloudFormation.
  • An EventBridge rule to monitor changes to Parameter Store. These events include only the name of the parameter, not its value, and any parameter whose name doesn't match AppPack's prefix is ignored.
  • An IAM Role that is used by AppPack to collect further data on events that do not include enough information to know what app they belong to. These include:
    • ECS Task state change events
    • ECS Service action events
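The SecureString values mentioned above are written directly with the SSM API, since CloudFormation cannot create them. A sketch of the call the CLI would make, with an assumed parameter name:

```python
# Parameters for ssm.put_parameter; the name and value are placeholders,
# and AppPack's actual parameter naming may differ.
put_kwargs = {
    "Name": "/apppack/account/dockerhub-access-token",  # assumed name
    "Value": "<your Docker Hub access token>",
    "Type": "SecureString",  # encrypted at rest with a KMS key
    "Overwrite": True,
}

# The CLI would then call SSM with these parameters:
# import boto3
# boto3.client("ssm").put_parameter(**put_kwargs)
```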


At the cluster level, we set up:

  • A VPC in three availability zones with public and private subnets, an S3 gateway endpoint, an internet gateway, and flow logs.
  • An ECS Cluster
  • [Optional1] An EC2 auto-scaling group to run the container instances.
  • [Optional1] A Capacity Provider to ensure there are always enough EC2 instances to cover the resources needed by the ECS tasks.
  • An Application Load Balancer to route traffic to the application(s) in the cluster.
  • An HTTP listener to redirect all insecure traffic to HTTPS.
  • A Route53 wildcard DNS entry to point a domain to the load balancer.
  • An ACM certificate for the wildcard domain, connected to an HTTPS listener on the load balancer.
  • All the necessary security groups to allow internet traffic to the load balancer and route traffic from the load balancer into the applications.

Note: To avoid the need for (expensive) NAT gateways, the ECS cluster runs in the VPC's public subnets. Security groups are in place to block all incoming traffic from outside the VPC.
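The HTTP-to-HTTPS redirect above is expressible as the port-80 listener's default action. A sketch of the redirect configuration (illustrative, not AppPack's exact settings):

```python
# Default action for the ALB's HTTP (port 80) listener: permanently
# redirect to HTTPS while preserving host, path, and query string.
redirect_action = {
    "Type": "redirect",
    "RedirectConfig": {
        "Protocol": "HTTPS",
        "Port": "443",
        "Host": "#{host}",
        "Path": "/#{path}",
        "Query": "#{query}",
        "StatusCode": "HTTP_301",
    },
}

# The listener itself would be created with something like:
# elbv2.create_listener(LoadBalancerArn=alb_arn, Protocol="HTTP", Port=80,
#                       DefaultActions=[redirect_action])
```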


The app stack creates:

  • A CodeBuild project connected to the code repository for continuous building and testing.
  • A private S3 bucket for storing build metadata and artifacts.
  • An Elastic Container Registry (ECR) repository for storing container images.
  • A listener rule and target group to route traffic to the application from the load balancer.
  • A CloudWatch Log Group for storing application logs. A second Log Group stores AppPack activity related to the application.
  • A security group to allow the application to make outbound network calls.
  • IAM Roles for:
    • ECS Execution
    • ECS Tasks
    • Events (for scheduled tasks)
    • AppPack management
    • User operations


The database stack creates:

  • An Aurora Postgres RDS Cluster
  • Database subnet groups in the private subnets of the VPC
  • One or two (if multi-AZ) DB instances in the cluster
  • A Lambda function used to manage the database cluster. It can perform three tasks:
    1. create/destroy Postgres databases within the cluster
    2. enable IAM authentication for the admin database user
    3. execute one-off custom SQL as the admin user (useful for installing extensions, which cannot be done by the application users)
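A minimal sketch of how such a management function might translate its three tasks into SQL. The event shape, task names, and helper are hypothetical; real code would execute the statements against the cluster with a Postgres driver as the admin user, with proper identifier quoting.

```python
def build_statement(event):
    """Translate a hypothetical management event into the SQL to run.

    A real Lambda handler would connect to the Aurora cluster as the
    admin user and execute this statement; this sketch only builds it.
    """
    task = event["task"]
    if task == "create_database":
        # Identifiers should be validated/escaped properly in real code.
        return f'CREATE DATABASE "{event["name"]}"'
    if task == "enable_iam_auth":
        # Aurora Postgres enables IAM auth by granting the rds_iam role.
        return f'GRANT rds_iam TO "{event["user"]}"'
    if task == "run_sql":
        return event["sql"]  # e.g. 'CREATE EXTENSION IF NOT EXISTS postgis'
    raise ValueError(f"unknown task: {task}")

print(build_statement({"task": "create_database", "name": "myapp"}))
```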


Authentication

Authentication is handled via Auth0. After you successfully log in with an account that has a verified email, the CLI or web interface exchanges your login token for AWS keys for an IAM role specific to the application being managed. This is done via the AssumeRoleWithWebIdentity API, either in your web browser or in your terminal. AWS verifies the token and also confirms the user is authorized to access the app. AppPack never has access to these keys or to the traffic between your device and AWS.
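The exchange described above maps to AWS STS's AssumeRoleWithWebIdentity call. A sketch of the request the client builds; the role ARN, session name, and duration are assumed examples:

```python
def web_identity_request(role_arn, id_token, session_name):
    """Build the parameters for sts.assume_role_with_web_identity.

    AWS verifies the OIDC token itself and, if the role's trust policy
    authorizes the user, returns temporary keys; no AppPack server sits
    in the middle of the exchange.
    """
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "WebIdentityToken": id_token,
        "DurationSeconds": 3600,  # assumed limited-time credential lifetime
    }

params = web_identity_request(
    "arn:aws:iam::123456789012:role/apppack-app-myapp",  # assumed ARN
    "<oidc-id-token-from-auth0>",
    "user@example.com",
)

# The actual call would then be made directly from the browser or CLI:
# import boto3
# creds = boto3.client("sts").assume_role_with_web_identity(**params)
```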

Web interface

The web interface uses the AWS JavaScript SDK to make calls directly to AWS from your computer. This lets us serve the website as a completely static site (from S3) without any backend that could access your data. After the initial login, all network traffic goes directly to AWS.


CLI

The CLI works the same way as the web interface. Logging in requires using a web browser to log in with a custom code which identifies your device. Once logged in, a token is stored in your user's cache directory to allow continued access.

  1. The default cluster setup uses Fargate, which does not require any self-managed EC2 instances.