Building a Full-Stack Serverless Todo App with CloudFormation

In this post, I want to show you how to set up a full-stack serverless todo application on AWS using Infrastructure as Code. The frontend will be in React, the backend API will be an AWS Lambda function, and the data store will be DynamoDB.

Note: The React codebase (created using Claude Code) and the Lambda (written in Go) are not the focus here; the focus is the Infrastructure as Code templates we are going to create step by step.

We are going to build “Living Infrastructure” using Infrastructure as Code (IaC). To make this enterprise-ready, we will decouple the setup into three distinct layers. This ensures that a change to our CSS won’t accidentally trigger a change to our DNS settings.

This is going to be similar to the previous blog post, but instead of deploying a static site builder, we are going to deploy a React app, an API, and a DynamoDB table using SAM (Serverless Application Model).

The Roadmap

Building this involves several steps, and we can break them down into three phases.

  1. Phase 1: Global Foundation - creates our SSL certificate
  2. Phase 2: The React infrastructure
  3. Phase 3: The app infrastructure

Phase 1: Global Foundation

The Global Foundation creates global resources that will be consumed by the React and app infrastructure.

The first thing we need is an SSL certificate for our dev/staging/prod domains. Instead of creating a separate certificate for each domain, we will create a single certificate with wildcard names that cover all of them.

Note: The app is NOT online, but you can clone the GitHub repositories mentioned and update the URLs to test the templates.

Let’s start by creating a new repository. I have called this repository example-todo-platform-infra, and as always we will create two files: deployment-file.yaml and infra.yaml.

Inside your deployment file add this

template-file-path: infra.yaml
parameters:
  TodoHostedZoneId: Your-Hosted-Zone-Id

Note: Your-Hosted-Zone-Id is the Route 53 hosted zone ID in AWS; if you bought your domain name with AWS, a hosted zone was created for you.

Your infra.yaml file contains all the resources. In our case we need to create one wildcard certificate and reference the TodoHostedZoneId parameter.

AWSTemplateFormatVersion: 2010-09-09
Description: Domain and certificate infrastructure for apitodo.jaik.me and todo.jaik.me

Parameters:
  TodoHostedZoneId:
    Type: String
    Description: The ID of the Hosted Zone created by AWS when the domain was purchased.

Now it’s time to add the certificate as a resource

Resources:
  # Request a certificate for *.jaik.me
  TodoClientCertificate:
    Type: 'AWS::CertificateManager::Certificate'
    Properties:
      DomainName: '*.jaik.me'
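      # ACM wildcards match exactly one label: '*.jaik.me' covers todo.jaik.me and
      # apitodo.jaik.me, while the deeper wildcards cover environment subdomains
      # such as dev.todo.jaik.me and staging.apitodo.jaik.me.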
      SubjectAlternativeNames:
        - '*.jaik.me'
        - '*.apitodo.jaik.me'
        - '*.todo.jaik.me'
      ValidationMethod: DNS
      DomainValidationOptions:
        - DomainName: '*.jaik.me'
          HostedZoneId: !Ref TodoHostedZoneId
        - DomainName: '*.apitodo.jaik.me'
          HostedZoneId: !Ref TodoHostedZoneId
        - DomainName: '*.todo.jaik.me'
          HostedZoneId: !Ref TodoHostedZoneId

The last thing we need to do is to export the certificate ARN and the hosted zone id so other stacks can use them.

Outputs:
  TodoClientCertificateArn:
    Value: !Ref TodoClientCertificate
    Export:
      Name: Projects-CertificateArn
  TodoHostedZoneId:
    Value: !Ref TodoHostedZoneId
    Export:
      Name: Todo-HostedZoneId
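
Any other stack in the same account and region can now consume these values with !ImportValue, which is exactly what Phases 2 and 3 will do. As a minimal sketch (the surrounding property names depend on the consuming resource):

      AcmCertificateArn: !ImportValue Projects-CertificateArn
      HostedZoneId: !ImportValue Todo-HostedZoneId

Keep in mind that exports are scoped to a region, so the consuming stacks must live in the same region as this one.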

Push your changes to GitHub and follow these steps to create a stack in AWS

Note: It is important that this stack is created in us-east-1 as CloudFront requires ACM certs in that region.

  1. Login to your GitHub account and navigate to Settings -> Applications -> AWS Connector for GitHub and click on Configure
  2. Under Repository access, select your repository so AWS has access to it.
  3. Navigate to CloudFormation -> Stacks
  4. Click on Create stack -> With new resources (standard)
  5. Select Choose an existing template under Prerequisite - Prepare template
  6. Select Sync from Git under Specify template and click Next
  7. Provide a name for your stack
  8. Under Stack deployment file select I am providing my own file in my repository
  9. Under Template definition repository select Link a Git repository
  10. Select GitHub under Select repository provider
  11. Under Connection select your connection name (This is the connection that was created in this blog post)
  12. Under Repository select your repository which contains the two files we created above
  13. Under Branch select your-branch-name
  14. Enter deployment-file.yaml for Deployment file path
  15. Under IAM role select Existing IAM role.
  16. Under IAM role name select CloudFormationGitSyncRole. This is the same role created in the blog post.

Note: If the role in your environment matches the role mentioned above, then you need to make one change to your infrastructure file. On line 57, the line Resource: !Sub 'arn:aws:codebuild:${AWS::Region}:${AWS::AccountId}:project/JaikMeStaticBuild' needs to say Resource: !Sub 'arn:aws:codebuild:${AWS::Region}:${AWS::AccountId}:project/*' so CodeBuild has access to all the projects under that ARN.

  17. Click on Next
  18. Under Permissions - optional select the same role name CloudFormationGitSyncRole
  19. Scroll down on the page and click on Next
  20. On the final review step, click on Submit and AWS will create your stack

Once the stack is created, we should see a certificate created for us under Certificate Manager -> Certificates.

Now it’s time to create our infrastructure for our React app.

Phase 2: The React Infrastructure

I have used Claude Code to create a simple todo app which you can find here.

We will now create a separate repository which will contain our Infrastructure as Code for the React todo application.

Instead of deploying our React site to a single URL, we will use CloudFormation parameters to create dev/staging/prod environments.

Note: In an enterprise setting these environments are usually created via different AWS accounts, and this post doesn’t suggest that this is the canonical way to do it.

Let’s start by creating a new repository called example-todo-client-infra and as always we will create the following files

  • deployment-dev.yaml Creates a dev stack.
  • deployment-staging.yaml Creates a staging stack.
  • deployment-prod.yaml Creates a production stack.
  • infra.yaml Contains our Infrastructure as Code template.

Let’s set up our production deployment file first by editing the deployment-prod.yaml file

template-file-path: infra.yaml
parameters:
  Environment: prod
  BranchName: main

We create two parameters, Environment and BranchName, set to prod and main respectively.

Similarly we can add the following to the deployment-staging.yaml file

template-file-path: infra.yaml
parameters:
  Environment: staging
  BranchName: staging

and deployment-dev.yaml

template-file-path: infra.yaml
parameters:
  Environment: dev
  BranchName: dev

Each deployment file sets a few parameters, and they all point to infra.yaml. Let’s start creating our infra.yaml now.

The first resource is going to be an artifact bucket used by the CodePipeline, but before we define any resources, the file begins with the following

AWSTemplateFormatVersion: 2010-09-09
Description: Infrastructure for todo.jaik.me - S3 Hosting, CloudFront OAC, and CI/CD Pipeline.

Now let’s add our parameters

Parameters:
  Environment:
    Type: String
    AllowedValues:
      - dev
      - staging
      - prod
  BranchName:
    Type: String
    Default: main

Now our infra.yaml can accept the two parameters declared in our deployment file.

Next we are going to add a condition which tells us whether the environment is production or not.

Conditions:
  IsProd: !Equals [!Ref Environment, prod]

Now it’s time for us to define our resources. We will begin by defining an S3 bucket to hold the artifacts.

Resources:
  PipelineArtifactBucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      LifecycleConfiguration:
        Rules:
          - Id: CleanupOldArtifacts
            Status: Enabled
            ExpirationInDays: 7
            NoncurrentVersionExpiration:
              NoncurrentDays: 3
            AbortIncompleteMultipartUpload:
              DaysAfterInitiation: 1

This creates an S3 “artifact” bucket with public access blocked and a lifecycle configuration that automatically expires artifacts after 7 days, noncurrent versions after 3 days, and incomplete multipart uploads after 1 day.

Now let’s create our S3 bucket meant to hold the React app.

Resources:
  S3WebsiteBucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      VersioningConfiguration:
        Status: Enabled
      LifecycleConfiguration:
        Rules:
          - Id: AutoCleanupOldVersions
            Status: Enabled
            NoncurrentVersionExpiration:
              NoncurrentDays: 30

This bucket doesn’t allow public access and has a lifecycle rule to delete old versions after 30 days.

Since the bucket does not allow public access, let’s create a bucket policy to allow access via CloudFront.

Resources:
  BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref S3WebsiteBucket
      PolicyDocument:
        Id: MyPolicy
        Version: 2012-10-17
        Statement:
          # Only allow requests coming from CloudFront and the only action it can access is GET
          - Sid: AllowCloudFrontServicePrincipal
            Effect: Allow
            Principal:
              Service: cloudfront.amazonaws.com
            Action: 's3:GetObject'
            Resource: !Sub '${S3WebsiteBucket.Arn}/*'
            Condition:
              StringEquals:
                AWS:SourceArn: !Sub "arn:aws:cloudfront::${AWS::AccountId}:distribution/${CloudFrontDistribution}"

This creates a bucket policy which is attached to the S3 bucket with our React code.

Now let’s create our build project which will build our React app.

Resources:
  BuildProject:
    # CodeBuild Project to build React app
    Type: 'AWS::CodeBuild::Project'
    Properties:
      Name: !Sub "TodoBuild-${Environment}"
      ServiceRole: !GetAtt CodeBuildServiceRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/amazonlinux2-x86_64-standard:5.0
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.2
          phases:
            install:
              runtime-versions:
                nodejs: 20
              commands:
                - npm install
            build:
              commands:
                - npm run build
          artifacts:
            base-directory: dist
            files:
              - '**/*'

Note: The build name appends the environment so we can create separate build projects for the dev/staging/prod environments.

This resource tells CloudFormation to create a build project whose artifacts are managed by the CodePipeline we will create later.

Let’s create a role which CodeBuild can assume to access logs and the S3 artifact bucket.

Resources:
  CodeBuildServiceRole:
    # IAM Role assumed by CodeBuild to write logs to CloudWatch and read/write to the S3 artifact bucket.
    Type: 'AWS::IAM::Role'
    Properties:
      # TRUST POLICY: Defines WHO (the Principal) can assume this role.
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: 'sts:AssumeRole'
      # PERMISSIONS POLICY: Defines WHAT actions this role is allowed to perform
      # once it has been assumed.
      Policies:
        - PolicyName: CodeBuildAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                # The * at the end allows CodeBuild to create log streams dynamically for every new build run.
                # The CloudWatch log group where CodeBuild writes build logs
                Resource: !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/TodoBuild-${Environment}*'
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:GetBucketVersioning
                Resource: 
                  - !Sub '${PipelineArtifactBucket.Arn}/*'
                  - !GetAtt PipelineArtifactBucket.Arn

Let’s create a deploy project to deploy the built assets to S3 with per-file cache headers

Resources:
  DeployProject:
    # CodeBuild Project to deploy built assets to S3 with per-file cache headers.
    Type: 'AWS::CodeBuild::Project'
    Properties:
      Name: !Sub "TodoDeploy-${Environment}"
      ServiceRole: !GetAtt DeployServiceRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/amazonlinux2-x86_64-standard:5.0
        EnvironmentVariables:
          - Name: DEPLOY_BUCKET
            Value: !Ref S3WebsiteBucket
          - Name: CLOUDFRONT_DIST_ID
            Value: !Ref CloudFrontDistribution
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.2
          phases:
            build:
              commands:
                # Hashed static assets — cache forever (hash changes on new builds)
                - aws s3 sync . s3://$DEPLOY_BUCKET --delete --cache-control "max-age=31536000,public,immutable" --exclude "index.html"
                # index.html — always revalidate to pick up new asset references
                - aws s3 cp index.html s3://$DEPLOY_BUCKET/index.html --cache-control "max-age=0,no-cache,no-store,must-revalidate"
            post_build:
              commands:
                - aws cloudfront create-invalidation --distribution-id $CLOUDFRONT_DIST_ID --paths "/*"

Now let’s create an IAM role which will be assumed by DeployProject

Resources:
  DeployServiceRole:
    # IAM Role assumed by the Deploy CodeBuild project.
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: 'sts:AssumeRole'
      Policies:
        - PolicyName: DeployAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/TodoDeploy-${Environment}*'
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:GetBucketVersioning
                Resource:
                  - !Sub '${PipelineArtifactBucket.Arn}/*'
                  - !GetAtt PipelineArtifactBucket.Arn
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:DeleteObject
                  - s3:ListBucket
                Resource:
                  - !GetAtt S3WebsiteBucket.Arn
                  - !Sub '${S3WebsiteBucket.Arn}/*'
              - Effect: Allow
                Action:
                  - 'cloudfront:CreateInvalidation'
                Resource: !Sub 'arn:aws:cloudfront::${AWS::AccountId}:distribution/${CloudFrontDistribution}'

Now let’s create an IAM role which will be assumed by the CodePipeline

Resources:
  CodePipelineServiceRole:
    # IAM Role assumed by CodePipeline to access S3, GitHub, and CodeBuild
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codepipeline.amazonaws.com
            Action: 'sts:AssumeRole'
      Policies:
        - PolicyName: PipelineAccessPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:GetBucketVersioning
                  - s3:PutObject
                  - s3:ListBucket
                  - s3:DeleteObject
                Resource:
                  - !GetAtt PipelineArtifactBucket.Arn
                  - !Sub '${PipelineArtifactBucket.Arn}/*'
              - Effect: Allow
                Action: 'codestar-connections:UseConnection'
                Resource: !ImportValue "GlobalResourcesStack-GitHubConnectionArn"
              - Effect: Allow
                Action:
                  - 'codebuild:BatchGetBuilds'
                  - 'codebuild:StartBuild'
                Resource:
                  - !GetAtt BuildProject.Arn
                  - !GetAtt DeployProject.Arn

Now let’s create our CodePipeline

Resources:
  TodoSitePipeline:
    Type: 'AWS::CodePipeline::Pipeline'
    Properties:
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref PipelineArtifactBucket
      Stages:
        - Name: Source
          Actions:
            - Name: GitHubSource
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeStarSourceConnection
                Version: '1'
              OutputArtifacts:
                - Name: SourceArtifact
              Configuration:
                ConnectionArn: !ImportValue "GlobalResourcesStack-GitHubConnectionArn"
                FullRepositoryId: iJKTen/example-todo-client
                BranchName: !Ref BranchName
                OutputArtifactFormat: CODE_ZIP
        - Name: Build
          Actions:
            - Name: TodoBuild
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: '1'
              InputArtifacts:
                - Name: SourceArtifact
              OutputArtifacts:
                - Name: BuildArtifact
              Configuration:
                ProjectName: !Ref BuildProject
        - Name: Deploy
          Actions:
            - Name: S3Deploy
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: '1'
              InputArtifacts:
                - Name: BuildArtifact
              Configuration:
                ProjectName: !Ref DeployProject

This pipeline does three things:

  1. Gets the code from GitHub and uploads it to the S3 artifact bucket
  2. Runs our Build project, providing the source code as input
  3. Runs our DeployProject build, which uploads the built assets to the website bucket and invalidates the CloudFront cache.

Now we need to set up Origin Access Control (OAC) so CloudFront can access the private S3 website bucket

Resources:
  CloudFrontOAC:
    # OAC makes CloudFront sign every request it sends to S3 using SigV4 authentication.
    Type: 'AWS::CloudFront::OriginAccessControl'
    Properties:
      OriginAccessControlConfig:
        Description: !If
          - IsProd
          - "OAC for todo.jaik.me S3 Bucket"
          - !Sub "OAC for ${Environment}.todo.jaik.me S3 Bucket"
        Name: !Sub "${AWS::StackName}-OAC"
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4

The condition IsProd is used here and acts like a ternary operator. If IsProd is true then the Description would be set to “OAC for todo.jaik.me S3 Bucket” else “OAC for ${Environment}.todo.jaik.me S3 Bucket”.
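
In general, !If takes three arguments: the condition name, the value when true, and the value when false:

!If [ConditionName, ValueIfTrue, ValueIfFalse]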

Now we can create our CloudFront distribution and use the above OAC so CloudFront can read from the private S3 bucket.

Resources:
  # CDN distribution that serves the SPA from the private S3 bucket.
  CloudFrontDistribution:
    Type: 'AWS::CloudFront::Distribution'
    Properties:
      DistributionConfig:
        Aliases:
          - !If
            - IsProd
            - todo.jaik.me
            - !Sub "${Environment}.todo.jaik.me"
        DefaultRootObject: index.html
        Enabled: true
        Origins:
          - DomainName: !GetAtt S3WebsiteBucket.RegionalDomainName
            Id: S3Origin
            # Use Origin Access Control (OAC) so CloudFront can read from the private S3 bucket.
            OriginAccessControlId: !GetAtt CloudFrontOAC.Id
            S3OriginConfig:
              OriginAccessIdentity: ""
        # SPA client-side routing: rewrite S3 error responses to serve index.html with HTTP 200,
        # so the frontend router (e.g. React Router) handles the path.
        # 403 is needed because private S3 buckets return Forbidden (not 404) for missing keys.
        CustomErrorResponses:
          - ErrorCode: 403
            ResponseCode: 200
            ResponsePagePath: /index.html
          - ErrorCode: 404
            ResponseCode: 200
            ResponsePagePath: /index.html
        DefaultCacheBehavior:
          TargetOriginId: S3Origin
          ViewerProtocolPolicy: redirect-to-https
          # AWS Managed CachingOptimized policy — best fit for static S3 origins because it:
          #   1. Enables gzip + brotli compression (smaller transfers for JS/CSS/HTML)
          #   2. Does NOT forward query strings, cookies, or headers (S3 doesn't use them)
          #   3. Uses sensible TTLs: min 1s, default 24h, max 1yr (respects Cache-Control headers from origin)
          # Docs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-cache-policies.html
          CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6
        ViewerCertificate:
          AcmCertificateArn: !ImportValue "Projects-CertificateArn"
          SslSupportMethod: sni-only

The last thing to do in this stack is to set up our DNS so it sits in front of our CloudFront distribution

Resources:
  # DNS flow: Route53 (todo.jaik.me) -> CloudFront -> S3 bucket
  DomainRecord:
    Type: 'AWS::Route53::RecordSet'
    Properties:
      HostedZoneId: !ImportValue Todo-HostedZoneId
      Name: !If
        - IsProd
        - todo.jaik.me
        - !Sub "${Environment}.todo.jaik.me"
      Type: A
      AliasTarget:
        DNSName: !GetAtt CloudFrontDistribution.DomainName
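        # Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS assigns to all CloudFront distributions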
        HostedZoneId: Z2FDTNDATAQYW2

The same IsProd condition is used here at deploy time to set the record name accordingly.

Phase 3: The App Infrastructure

Now it’s time to deploy our API so the frontend can call it and save data in our DynamoDB table.

Just like in Phase 2 we are going to create different deployment files which will deploy dev/staging/prod environments of our app.

We will start by creating a new repository and creating our deployment files as follows. Let’s start with deployment-prod.yaml file

template-file-path: infra.yaml
parameters:
  Environment: prod
  BranchName: main

Similarly let’s create deployment-staging.yaml file

template-file-path: infra.yaml
parameters:
  Environment: staging
  BranchName: staging

and finally deployment-dev.yaml file

template-file-path: infra.yaml
parameters:
  Environment: dev
  BranchName: dev

Our app consists of a Go API which runs on AWS Lambda and uses DynamoDB as the data store. You can review the Go API here.

When building a serverless API on AWS, it’s tempting to define everything in a single CloudFormation template — the Lambda function, the DynamoDB table, the CI/CD pipeline, the IAM roles for deployment. It all ships together, so why not define it together?

There are many reasons but to list a few:

  1. We would be merging concerns into one configuration file. For example, our Lambda may evolve over time, but the roles and the build pipeline will remain the same.
  2. If your CodePipeline is defined in the same template as your Lambda, then every app change (new route, env var tweak) would require updating the stack that contains the pipeline. The pipeline would effectively need to redeploy itself to deploy your app. That’s a circular dependency at the operational level, even if CloudFormation doesn’t flag it syntactically.

So for the very first time, we will create two stacks. The “Infra/CI-CD” stack will be created from the infra.yaml file and the “app” stack will be defined in the app.yaml file.

  • infra.yaml — the CI/CD pipeline itself. The S3 artifact bucket, the CodeBuild project, the CodePipeline, and the IAM roles that grant these services permission to do their jobs.
  • app.yaml — the application resources. The Lambda function, DynamoDB table, API Gateway, custom domain, and DNS records.
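
Putting it together, the repository for this phase (example-todo-infra, the name referenced later in the pipeline) will contain the following files:

example-todo-infra/
  deployment-dev.yaml
  deployment-staging.yaml
  deployment-prod.yaml
  infra.yaml
  app.yaml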

Let’s start defining our resources that make up the infra.yaml file.

As usual the file begins with

AWSTemplateFormatVersion: 2010-09-09
Description: Infrastructure for todo app

We will begin by declaring a few parameters

Parameters:
  Environment:
    Type: String
    AllowedValues:
      - dev
      - staging
      - prod
  BranchName:
    Type: String
    Default: main
  BuildProjectName:
    # Store the build project name as a parameter to avoid circular dependency between CodeBuildServiceRole and BuildProject
    Type: String
    Default: TodoGoBuild

Let’s create our “artifact” bucket

Resources:
  PipelineArtifactBucket:
    # Artifact bucket to store "artifacts" between stages
    Type: 'AWS::S3::Bucket'
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      VersioningConfiguration:
        Status: Enabled
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      LifecycleConfiguration:
        Rules:
          - Id: CleanupOldArtifacts
            Status: Enabled
            ExpirationInDays: 7
            NoncurrentVersionExpiration:
              NoncurrentDays: 3
            AbortIncompleteMultipartUpload:
              DaysAfterInitiation: 1

Now let’s create an IAM role which will be assumed by CodeBuild

Resources:
  CodeBuildServiceRole:
    # IAM Role assumed by CodeBuild to write logs to CloudWatch and read/write to the S3 artifact bucket.
    Type: 'AWS::IAM::Role'
    Properties:
      # TRUST POLICY: Defines WHO (the Principal) can assume this role.
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: 'sts:AssumeRole'
      # PERMISSIONS POLICY: Defines WHAT actions this role is allowed to perform
      # once it has been assumed.
      Policies:
        - PolicyName: CodeBuildAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                # The * at the end allows CodeBuild to create log streams dynamically for every new build run.
                # The CloudWatch log group where CodeBuild writes build logs
                Resource: !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/${BuildProjectName}-${Environment}*'
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:GetBucketVersioning
                Resource: 
                  - !Sub '${PipelineArtifactBucket.Arn}/*' # Give access to the files inside the bucket.
                  - !GetAtt PipelineArtifactBucket.Arn # Give access to the bucket itself.

Now let’s create an IAM role which will be assumed by CodePipeline

Resources:
  CodePipelineServiceRole:
    # IAM Role assumed by CodePipeline to access S3, GitHub, CodeBuild, and CloudFormation
    Type: 'AWS::IAM::Role'
    Properties:
      # TRUST POLICY: Defines WHO (the Principal) can assume this role.
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: codepipeline.amazonaws.com
            Action: 'sts:AssumeRole'
      # PERMISSIONS POLICY: Defines WHAT actions this role is allowed to perform
      # once it has been assumed.
      Policies:
        - PolicyName: PipelineAccessPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:GetObjectVersion
                  - s3:GetBucketVersioning
                  - s3:PutObject
                  - s3:ListBucket
                  - s3:DeleteObject
                Resource:
                  - !Sub '${PipelineArtifactBucket.Arn}/*' # Give access to the files inside the bucket.
                  - !GetAtt PipelineArtifactBucket.Arn # Give access to the bucket itself.
              - Effect: Allow
                Action: 'codestar-connections:UseConnection'
                Resource: !ImportValue "GlobalResourcesStack-GitHubConnectionArn"
              - Effect: Allow
                Action:
                  - 'codebuild:BatchGetBuilds'
                  - 'codebuild:StartBuild'
                Resource: !GetAtt BuildProject.Arn
              - Effect: Allow
                # CodePipeline triggers CloudFormation to create/update the app stack. These permissions let CodePipeline manage CloudFormation stacks during the deploy stage.
                Action:
                  - cloudformation:CreateStack
                  - cloudformation:UpdateStack
                  - cloudformation:DeleteStack
                  - cloudformation:DescribeStacks
                  - cloudformation:DescribeStackEvents
                  - cloudformation:DescribeStackResource
                  - cloudformation:GetTemplate
                  - cloudformation:ValidateTemplate
                Resource: !Sub 'arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/todo-api-*/*'
              - Effect: Allow
                Action:
                  - iam:PassRole
                Resource: !GetAtt CloudFormationDeployRole.Arn

Now let’s create an IAM role which will be assumed by CloudFormation

Resources:
  CloudFormationDeployRole:
    # IAM Role assumed by CloudFormation to create the Lambda, API Gateway, DynamoDB table, and IAM roles
    Type: 'AWS::IAM::Role'
    Properties:
      # TRUST POLICY: Defines WHO (the Principal) can assume this role.
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: cloudformation.amazonaws.com
            Action: 'sts:AssumeRole'
      # PERMISSIONS POLICY: Defines WHAT actions this role is allowed to perform
      # once it has been assumed.
      Policies:
        - PolicyName: CloudFormationDeployPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - route53:ChangeResourceRecordSets
                  - route53:GetHostedZone
                  - route53:GetChange
                Resource: !Sub 'arn:aws:route53:::hostedzone/*'
              - Effect: Allow
                Action:
                  - route53:GetChange
                Resource: 'arn:aws:route53:::change/*'
              - Effect: Allow
                Action:
                  - lambda:*
                Resource: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:*'
              - Effect: Allow
                Action:
                  - apigateway:*
                Resource: !Sub 'arn:aws:apigateway:${AWS::Region}::/*'
              - Effect: Allow
                Action:
                  - dynamodb:CreateTable
                  - dynamodb:DeleteTable
                  - dynamodb:DescribeTable
                  - dynamodb:UpdateTable
                Resource: !Sub 'arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/todos-*'
              - Effect: Allow
                Action:
                  - iam:CreateRole
                  - iam:DeleteRole
                  - iam:GetRole
                  - iam:PutRolePolicy
                  - iam:DeleteRolePolicy
                  - iam:AttachRolePolicy
                  - iam:DetachRolePolicy
                  - iam:PassRole
                  - iam:GetRolePolicy
                  - iam:TagRole
                  - iam:UntagRole
                Resource: !Sub 'arn:aws:iam::${AWS::AccountId}:role/todo-api-*'
              - Effect: Allow
                Action:
                  - iam:CreateServiceLinkedRole
                Resource: !Sub 'arn:aws:iam::${AWS::AccountId}:role/aws-service-role/ops.apigateway.amazonaws.com/*'
              - Effect: Allow
                Action:
                  - s3:GetObject
                Resource: !Sub '${PipelineArtifactBucket.Arn}/*'
              - Effect: Allow
                Action:
                  - cloudformation:CreateChangeSet
                Resource: !Sub 'arn:aws:cloudformation:${AWS::Region}:aws:transform/Serverless-2016-10-31'

Note: This role is essentially the same as CloudFormationGitSyncRole which we have been using when creating the stack in AWS but here we choose to create an IAM role to deploy the app stack.

Now let’s create our BuildProject resource, which will compile our Go project and package the SAM template that CloudFormation will later deploy as our app stack

Resources:
  BuildProject:
    # CodeBuild project to compile the Go binary and package the SAM template
    Type: 'AWS::CodeBuild::Project'
    Properties:
      Name: !Sub '${BuildProjectName}-${Environment}' # Parameter created at the top of the file
      ServiceRole: !GetAtt CodeBuildServiceRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/amazonlinux2-x86_64-standard:5.0
        EnvironmentVariables:
          - Name: ARTIFACT_BUCKET
            Value: !Ref PipelineArtifactBucket
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.2
          phases:
            install:
              runtime-versions:
                golang: 1.21
            build:
              commands:
                - go mod download
                - mkdir -p out
                - GOOS=linux GOARCH=arm64 go build -o out/bootstrap ./cmd/todo-api
            post_build:
              commands:
                - cp $CODEBUILD_SRC_DIR_InfraArtifact/app.yaml .
                - aws cloudformation package --template-file app.yaml --s3-bucket $ARTIFACT_BUCKET --output-template-file packaged.yaml
          artifacts:
            files:
              - packaged.yaml

The last thing to do is to create our pipeline

Resources:
  TodoGoPipeline:
    Type: 'AWS::CodePipeline::Pipeline'
    Properties:
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref PipelineArtifactBucket
      Stages:
        - Name: Source
          Actions:
            # Get the Go API code from GitHub
            - Name: GitHubSource
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeStarSourceConnection
                Version: '1'
              OutputArtifacts:
                - Name: SourceArtifact
              Configuration:
                ConnectionArn: !ImportValue "GlobalResourcesStack-GitHubConnectionArn"
                FullRepositoryId: iJKTen/example-todo-api
                BranchName: !Ref BranchName
                OutputArtifactFormat: CODE_ZIP
            # Get the infra template from the infra repo
            - Name: InfraSource
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeStarSourceConnection
                Version: '1'
              OutputArtifacts:
                - Name: InfraArtifact
              Configuration:
                ConnectionArn: !ImportValue "GlobalResourcesStack-GitHubConnectionArn"
                FullRepositoryId: iJKTen/example-todo-infra
                BranchName: !Ref BranchName
                OutputArtifactFormat: CODE_ZIP
        - Name: Build
          # Unzip the code from the source artifacts, build it using BuildProject, and upload the output back to the artifact bucket
          Actions:
            - Name: GoBuild
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: '1'
              InputArtifacts:
                - Name: SourceArtifact
                - Name: InfraArtifact
              OutputArtifacts:
                - Name: BuildArtifact
              Configuration:
                ProjectName: !Ref BuildProject
                PrimarySource: SourceArtifact
        - Name: Deploy
          Actions:
            - Name: CloudFormationDeploy
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CloudFormation # Tell CodePipeline to use CloudFormation
                Version: '1'
              InputArtifacts:
                - Name: BuildArtifact # The build artifact contains packaged.yaml which CloudFormation uses as the deployment template
              Configuration:
                ActionMode: CREATE_UPDATE # Create the stack if new, update it if it exists
                StackName: !Sub "todo-api-${Environment}"
                TemplatePath: BuildArtifact::packaged.yaml
                Capabilities: CAPABILITY_IAM,CAPABILITY_AUTO_EXPAND
                RoleArn: !GetAtt CloudFormationDeployRole.Arn
                ParameterOverrides: !Sub '{"Environment": "${Environment}"}'

Let’s walk through each stage of this pipeline

  1. The GitHubSource action gets our code from GitHub; it’s zipped and placed in SourceArtifact, which is marked as the output of this action.
  2. The InfraSource action reads from the current infrastructure repository and stores the YAML files in InfraArtifact, which is marked as the output of this action.
  3. The GoBuild stage receives both artifacts and makes them available to the build environment. PrimarySource is set to SourceArtifact, so those files are unzipped into the working directory, while the files in InfraArtifact are exposed through the $CODEBUILD_SRC_DIR_InfraArtifact directory that the buildspec copies app.yaml from. ProjectName specifies which project to run, which is the BuildProject we defined earlier.
  4. Once the build is finished, the CloudFormationDeploy action runs, and BuildArtifact contains the packaged.yaml file used to create the stack.

Now it’s time to create our app stack, which we will define inside the app.yaml file.

Let’s start by defining the same environment parameter

AWSTemplateFormatVersion: 2010-09-09
Description: Todo API application - Lambda, DynamoDB, and API Gateway
Transform: AWS::Serverless-2016-10-31

Parameters:
  Environment:
    Type: String
    AllowedValues:
      - dev
      - staging
      - prod

Because we want to create dev/staging/prod environments based on which “stack” is deployed, let’s create a mapping of the domain names we are going to use so they are easy to reference later. This is how it’s done

Mappings:
  EnvDomain:
    dev:
      Domain: dev.apitodo.jaik.me
    staging:
      Domain: staging.apitodo.jaik.me
    prod:
      Domain: apitodo.jaik.me
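
Elsewhere in the template we can then look up the right domain for the current environment with !FindInMap, as the custom domain resources below will do:

DomainName: !FindInMap [EnvDomain, !Ref Environment, Domain]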

Now let’s create our DynamoDB table as a resource

Resources:
  DynamoDbTodoTable:
    # Create a dynamodb table
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: !Sub "todos-${Environment}"
      BillingMode: PAY_PER_REQUEST
      SSESpecification:
        SSEEnabled: True
      AttributeDefinitions:
        - AttributeName: ID
          AttributeType: S
      KeySchema:
        - AttributeName: ID
          KeyType: HASH

Now let’s create our Lambda using Serverless Application Model (SAM)

Resources:
  TodoFunction:
    #Use SAM (Serverless Application Model) to create AWS::Lambda::Function, AWS::IAM::Role, AWS::IAM::Policy, AWS::ApiGateway::RestApi, AWS::ApiGateway::Resource, AWS::ApiGateway::Method, AWS::ApiGateway::Deployment, AWS::ApiGateway::Stage, AWS::Lambda::Permission
    Type: AWS::Serverless::Function
    Properties:
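      # Relative path to the directory with the compiled binary; 'aws cloudformation package'
      # replaces this with an S3 URI in packaged.yaml (explained below).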
      CodeUri: out
      Handler: bootstrap
      Runtime: provided.al2023
      Architectures: [arm64]
      MemorySize: 512
      Timeout: 10
      Environment:
        Variables:
          TABLE_NAME: !Ref DynamoDbTodoTable
          ENV: !Ref Environment
      Events:
        GetTodos:
          Type: Api
          Properties:
            Path: /todos
            Method: GET
        CreateTodo:
          Type: Api
          Properties:
            Path: /todos
            Method: POST
        GetTodoById:
          Type: Api
          Properties:
            Path: /todos/{id}
            Method: GET
        UpdateTodoById:
          Type: Api
          Properties:
            Path: /todos/{id}
            Method: PUT
        DeleteTodoById:
          Type: Api
          Properties:
            Path: /todos/{id}
            Method: DELETE
        # OPTIONS endpoints are required for CORS preflight requests.
        # Browsers send a preflight OPTIONS request before making cross-origin
        # requests with custom headers, methods like PUT/DELETE, or credentials.
        OptionsTodos:
          Type: Api
          Properties:
            Path: /todos
            Method: OPTIONS
        OptionsTodoById:
          Type: Api
          Properties:
            Path: /todos/{id}
            Method: OPTIONS
      # Grant the Lambda function CRUD permissions on the DynamoDB table.
      # DynamoDBCrudPolicy is a built-in AWS SAM policy template that expands
      # into IAM permissions for GetItem, PutItem, UpdateItem, DeleteItem, Query, Scan, etc.
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref DynamoDbTodoTable

The property CodeUri points to the directory containing the compiled Go binary.

  1. During the build phase, CodeBuild compiles the Go source into a bootstrap binary inside an out directory
  2. In the post-build phase, app.yaml is copied into the build directory, then aws cloudformation package is run against it.
  3. The package command reads the app.yaml file, sees CodeUri: out, zips the contents of the out directory, uploads the zip to the S3 artifact bucket, and produces a packaged.yaml file where CodeUri is replaced with the actual S3 URI (see the sketch after this list).
  4. In the deploy stage, CloudFormation deploys packaged.yaml, which now has the S3 location, so Lambda knows where to pull the code from.
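
To make step 3 concrete, here is a rough sketch of what the function looks like in packaged.yaml after packaging (the bucket name and object key below are placeholders):

  TodoFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: s3://your-artifact-bucket/0f3cd3a9example
      Handler: bootstrap
      Runtime: provided.al2023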

The Environment under Properties makes the table name available as an environment variable for the API to read.

While the OPTIONS endpoints handle the browser’s preflight requests, that alone is not enough. Your Go Lambda must also return CORS headers on every response — Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers. Without these, the browser will block the response even though the server processed it successfully. Since the React app is served from todo.jaik.me and the API lives at apitodo.jaik.me, every request is cross-origin. In the Go API, this is handled by setting the response headers before writing the body. You can see how this is implemented in the Go API repository.
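
As a sketch, for the production frontend the API’s responses would carry headers along these lines (the exact values depend on what your frontend sends):

Access-Control-Allow-Origin: https://todo.jaik.me
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: Content-Type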

The rest of the settings are related to custom domain management

Resources:
  # 1. Bring our own domain: Tell API Gateway "I own <env>.apitodo.jaik.me, here's my SSL cert."
  ApiCustomDomain:
    Type: AWS::ApiGateway::DomainName
    Properties:
      DomainName: !FindInMap [EnvDomain, !Ref Environment, Domain]
      RegionalCertificateArn: !ImportValue "Projects-CertificateArn"
      EndpointConfiguration:
        Types:
          - REGIONAL

  # 2. Set up DNS: Tell Route 53 to point apitodo.jaik.me to the API Gateway endpoint.
  ApiDnsRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !ImportValue "Todo-HostedZoneId"
      Name: !FindInMap [EnvDomain, !Ref Environment, Domain]
      Type: A
      AliasTarget:
        DNSName: !GetAtt ApiCustomDomain.RegionalDomainName
        HostedZoneId: !GetAtt ApiCustomDomain.RegionalHostedZoneId

  # 3. Route the traffic: When requests arrive at apitodo.jaik.me, send them
  #    to the SAM-generated API (ServerlessRestApi).
  ApiBasePathMapping:
    Type: AWS::ApiGateway::BasePathMapping
    DependsOn:
      - ApiCustomDomain
      - ServerlessRestApiProdStage
    Properties:
      DomainName: !FindInMap [EnvDomain, !Ref Environment, Domain]
      RestApiId: !Ref ServerlessRestApi # Auto-created by SAM from the TodoFunction Api events
      Stage: Prod

Note: ApiCustomDomain uses RegionalCertificateArn, which requires the certificate to be in the same region as the API Gateway. Since our certificate was created in us-east-1, the API Gateway stack should also be deployed in that region.

Now we have successfully defined our infrastructure as code and the last thing to do is to deploy this stack.

Push your changes to GitHub and follow these steps to create a stack in AWS

  1. Login to your GitHub account and navigate to Settings -> Applications -> AWS Connector for GitHub and click on Configure
  2. Under Repository access, select your repository so AWS has access to it.
  3. Navigate to CloudFormation -> Stacks
  4. Click on Create stack -> With new resources (standard)
  5. Select Choose an existing template under Prerequisite - Prepare template
  6. Select Sync from Git under Specify template and click Next
  7. Provide a name for your stack
  8. Under Stack deployment file select I am providing my own file in my repository
  9. Under Template definition repository select Link a Git repository
  10. Select GitHub under Select repository provider
  11. Under Connection select your connection name (This is the connection that was created in this blog post)
  12. Under Repository select your repository
  13. Under Branch select your-branch-name
  14. Enter deployment-prod.yaml for Deployment file path if this stack is going to be a production stack. Repeat the same process with deployment-dev.yaml or deployment-staging.yaml to create the dev/staging environments
  15. Under IAM role select Existing IAM role.
  16. Under IAM role name select CloudFormationGitSyncRole. This is the same role created in the blog post.

After CloudFormation has created this stack, the pipeline will run and trigger the creation of a second stack from the packaged app.yaml file.

GitHub Repositories

  • example-todo-platform-infra - Phase 1 global foundation templates
  • example-todo-client-infra - Phase 2 React infrastructure templates
  • example-todo-infra - Phase 3 CI/CD and app templates
  • example-todo-client - the React todo app
  • example-todo-api - the Go API
